Unlocking the potential of AI to realize greater efficiencies, cost savings and deeper customer insights requires a consistent balance between cybersecurity and governance.
The AI infrastructure must be designed to adapt as the business changes direction, with cybersecurity protecting revenue and governance staying aligned with compliance requirements, both internally and across the company.
Any company looking to scale AI securely must continually look for new ways to strengthen core infrastructure components. Just as importantly, cybersecurity, governance and compliance must share a common data platform that enables real-time insights.
“AI governance defines a structured approach to managing, monitoring and controlling the effective operation of a domain and the human-centered use and development of AI systems,” Venky Yerrapotu, founder and CEO of 4CRisk, told VentureBeat. “Packaged or integrated AI tools pose risks, including biases in the AI models, data privacy issues and the potential for misuse.”
A robust AI infrastructure makes audits easier to automate, helps AI teams find roadblocks, and identifies key gaps in cybersecurity, governance, and compliance.
“Because there are currently few to no industry-endorsed governance or compliance frameworks to follow, organizations must implement the right guardrails to innovate securely with AI,” Anand Oswal, SVP and GM of network security at Palo Alto Networks, told VentureBeat. “The alternative is too costly, because adversaries are actively looking for the latest path of least resistance: AI.”
Defending against threats to AI infrastructure
While malicious attackers’ goals range from financial gain to disrupting or destroying the AI infrastructure of rival nations, all of them are looking to improve their tradecraft. Malicious attackers, cybercrime gangs and nation-state actors are all moving faster than even the most sophisticated enterprise or cybersecurity vendor.
“Regulation and AI are like a race between a mule and a Porsche,” Etay Maor, chief security strategist at Cato Networks, told VentureBeat. “There is no competition. Regulators are always playing catch-up when it comes to technology, but in the case of AI, this is especially true. But the point is: threat actors don’t play nice. They are not limited by regulations and are actively looking for ways to jailbreak the restrictions on new AI technology.”
Chinese, North Korean and Russian cybercriminal and state-sponsored groups are actively targeting both physical and AI infrastructure, using AI-generated malware to exploit vulnerabilities more efficiently and in ways that often evade traditional cybersecurity defenses.
Security teams are still at risk of losing the AI war, as well-funded cybercriminal organizations and nation-states target the AI infrastructure of countries and companies alike.
An effective security measure is model watermarking, which builds a unique identifier into AI models to detect unauthorized use or tampering. Furthermore, AI-powered anomaly detection tools are indispensable for real-time threat monitoring.
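To make the watermarking idea concrete, here is a minimal sketch of trigger-set verification: the model owner embeds secret input-output pairs during training and later checks whether a suspect model still reproduces them. The function, data shapes and threshold below are illustrative assumptions, not a specific vendor’s API.

```python
# A minimal sketch of trigger-set watermark verification, assuming the
# owner embedded secret (input, expected_output) pairs during training.
# `model` is any callable mapping an input to a label; all names here
# are hypothetical placeholders, not a specific product's API.
from typing import Callable, List, Tuple

def verify_watermark(
    model: Callable[[str], str],
    trigger_set: List[Tuple[str, str]],
    threshold: float = 0.9,
) -> bool:
    """Return True if the model reproduces enough of the secret trigger set."""
    matches = sum(1 for x, expected in trigger_set if model(x) == expected)
    return matches / len(trigger_set) >= threshold

# An untampered model should score near 1.0 on its own trigger set;
# a stolen or fine-tuned copy typically degrades, flagging unauthorized use.
```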
All of the companies VentureBeat spoke to on condition of anonymity are actively using red teaming techniques. For example, Anthropic proved the value of human-in-the-middle design to close security gaps in model testing.
“I think human-in-the-middle design will be with us for the near future to provide the contextual intelligence and human intuition needed to create a [large language model] LLM and to reduce the incidence of hallucinations,” Itamar Sher, CEO of Seal Security, told VentureBeat.
Models are the riskiest threat surfaces of an AI infrastructure
Every model put into production is a new threat surface an organization must protect. Gartner’s annual AI adoption survey found that 73% of enterprises have deployed hundreds or thousands of models.
Malicious attackers exploit model weaknesses using a broad base of tradecraft. NIST’s Artificial Intelligence Risk Management Framework is an essential document for anyone building AI infrastructure, providing insights into the most common types of attacks, including data poisoning, evasion and model theft.
As AI security researchers have noted, “AI models are often targeted via API queries to reverse engineer their functionality.”
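One defensive pattern that follows from this is rate-anomaly monitoring on the inference API itself, since extraction attacks tend to require sustained, high-volume querying. The sketch below is a hedged illustration; the window size, threshold and client identifiers are assumptions that would need tuning against real traffic baselines.

```python
# A minimal sketch of per-client API query monitoring to surface possible
# model-extraction attempts. Thresholds here are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # one-hour sliding window
MAX_QUERIES_PER_WINDOW = 5000  # assumed ceiling for legitimate use

_query_log: dict = defaultdict(deque)

def record_query(client_id: str, now: float | None = None) -> bool:
    """Log one inference request; return True if the client looks suspicious."""
    now = time.time() if now is None else now
    log = _query_log[client_id]
    log.append(now)
    # Evict timestamps that have fallen outside the sliding window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    return len(log) > MAX_QUERIES_PER_WINDOW
```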
Getting the AI infrastructure in order is also a moving target, CISOs warn. “Even if you’re not using AI in explicitly security-focused ways, you’re using AI in ways that matter to your ability to know and secure your environment,” Merritt Baer, CISO at Reco, told VentureBeat.
Put design-for-trust at the center of the AI infrastructure
Just as an operating system has specific design goals that strive for accountability, explainability, fairness, robustness, and transparency, so too does AI infrastructure.
Implicit throughout the NIST framework is a design-for-trust roadmap that provides a practical, pragmatic definition to guide infrastructure architects. NIST emphasizes that validity and reliability are indispensable design goals, especially in AI infrastructure, for delivering reliable results and performance.
Figure source: NIST AI Risk Management Framework, January 2023, DOI: 10.6028/NIST.AI.100-1.
The critical role of governance in AI infrastructure
AI systems and models must be developed, deployed and maintained ethically, safely and responsibly. Governance should be designed to provide workflows, visibility and real-time updates across algorithmic transparency, fairness, accountability and privacy. Strong governance begins with models that are continuously monitored, controlled and aligned with societal values.
Governance frameworks must be integrated into the AI infrastructure from the earliest stages of development. “Governance by design” anchors these principles in the process.
“Implementing an ethical AI framework requires a focus on security, bias and data privacy aspects, not only during the solution design process but also during testing and validation of all guardrails before deploying the solutions to end users,” WinWire CTO Vineet Arora told VentureBeat.
Designing AI infrastructures to reduce bias
Identifying and reducing bias in AI models is critical to delivering accurate, ethical results. Organizations must go a step further and take responsibility for how they monitor, control and improve their AI infrastructures to reduce and eliminate bias.
Organizations taking responsibility for their AI infrastructure rely on adversarial debiasing, which trains models to minimize the relationship between protected attributes (including race or gender) and outcomes, reducing the risk of discrimination. Another approach is resampling training data to ensure balanced representation across different groups, as sketched below.
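To illustrate the resampling side of this (adversarial debiasing requires a full training loop and is omitted here), the following is a minimal sketch assuming a pandas DataFrame of training records and a hypothetical protected-attribute column:

```python
# A minimal sketch of group-balanced resampling using scikit-learn's
# `resample` utility: each group is oversampled to match the largest group,
# so no protected class is underrepresented in training. The DataFrame and
# column names are hypothetical.
import pandas as pd
from sklearn.utils import resample

def balance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample every group to the size of the largest group."""
    target = df[group_col].value_counts().max()
    balanced = [
        resample(group, replace=True, n_samples=target, random_state=seed)
        for _, group in df.groupby(group_col)
    ]
    # Shuffle so oversampled rows are not clustered together.
    return pd.concat(balanced).sample(frac=1, random_state=seed)

# Usage (illustrative): balanced_df = balance_by_group(train_df, "gender")
```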
“Embedding transparency and explainability into the design of AI systems allows organizations to better understand how decisions are made, enabling more effective detection and correction of biased results,” NIST notes. By providing transparent insight into how AI models make decisions, organizations can better detect, correct and learn from biases.
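As a concrete, hedged example of what that transparency can look like in code, here is a minimal sketch using scikit-learn’s permutation importance; the public dataset and model are generic stand-ins rather than any organization’s actual pipeline.

```python
# A minimal sketch of one explainability technique, permutation importance:
# features whose shuffling most degrades held-out accuracy are the ones the
# model actually relies on, which can surface unexpected proxies for
# protected attributes. Dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f}")
```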
How IBM manages AI governance
IBM’s AI Ethics Council oversees the company’s AI infrastructure and AI projects, ensuring they remain ethically compliant with industry and internal standards. IBM initially set up a governance framework that includes what it calls “focal points,” mid-level managers with AI expertise who review projects in development to ensure compliance with IBM’s Principles for Trust and Transparency.
IBM says this framework helps manage risk at the project level, reducing threats to its AI infrastructure.
Christina Montgomery, IBM’s chief privacy and trust officer, says: “Our AI Ethics Board plays a critical role in overseeing our internal AI governance process and creates reasonable internal guardrails to ensure we introduce technology to the world in a responsible and safe manner.”
AI infrastructure must deliver explainable AI
Closing the gap between cybersecurity, compliance and governance is accelerating across all AI infrastructure use cases. VentureBeat research has revealed two trends: agentic AI and explainable AI. Organizations with AI infrastructure in place want their platforms to be more flexible and adaptable so they can get the best out of each.
Of the two, explainable AI is still emerging as a way to provide insights that improve model transparency and resolve biases. “Just as we expect transparency and rationality in business decisions, AI systems must be able to provide clear explanations of how they reach their conclusions,” Joe Burton, CEO of Reputation, told VentureBeat. “This promotes trust and ensures accountability and continuous improvement.”
Burton added: “By focusing on these governance pillars – data rights, regulatory compliance, access control and transparency – we can leverage the capabilities of AI to drive innovation and success, while maintaining the highest standards of integrity and upholding responsibility.”