This article is part of VentureBeat's special issue, "The Cyber Resilience Playbook: Navigating the new era of threats." Read more from this special issue here.
Generative AI raises interesting security questions, and as enterprises move into the agentic world, those safety issues multiply.
When AI agents enter workflows, they need access to sensitive data and documents to do their jobs, which makes them a significant risk for many security-minded enterprises.
“The increasing use of multi-agent systems will introduce new attack vectors and vulnerabilities that could be exploited if they aren’t secured properly from the start,” said Nicole Carignan, VP of strategic cyber AI at Darktrace. “But the impacts and harms of those vulnerabilities could be even greater because of the increasing volume of connection points and interfaces that multi-agent systems have.”
Why AI agents pose such a high security risk
AI agents, or autonomous AI that performs actions on behalf of users, have become extremely popular in recent months. Ideally, they can be plugged into tedious workflows and can perform any task, from something as simple as finding information in internal documents to making recommendations for human employees to act on.
But they present an interesting problem for enterprise security professionals: they must gain access to the data that makes them effective, without accidentally opening up or sending private information to others. With agents taking over more of the tasks human employees used to do, the question of accuracy and accountability comes into play, potentially creating a headache for security and compliance teams.
Chris Betz, CISO of AWS, told VentureBeat that retrieval-augmented generation (RAG) and agentic use cases “are a fascinating and interesting angle” in security.
“Organizations will need to think about what default sharing looks like in their organization, because an agent will find anything through search that will support its mission,” said Betz. “And if you overshare documents, you need to think about the default sharing policies in your organization.”
Security professionals then have to ask whether agents should be considered digital employees or software. How much access should agents have? How should they be identified?
AI agent vulnerabilities
Gen AI has made many enterprises more aware of potential vulnerabilities, but agents could open them up to even more issues.
“Attacks that we see today impacting single-agent systems, such as data poisoning, prompt injection or social engineering to influence agent behavior, could all be vulnerabilities within a multi-agent system,” Carignan said.
Enterprises must pay attention to what agents have access to in order to ensure that data security remains strong.
Betz pointed out that many security issues around human employee access can extend to agents. Therefore, it “comes down to making sure that people have access to the right things and only the right things.” He added that when it comes to agentic workflows with multiple steps, “each of those stages is an opportunity” for hackers.
Give agents an identity
One answer could be to issue agents specific access identities.
A world where models reason about problems over the course of days is “a world where we need to be thinking more about recording the identity of the agent, as well as the identity of the human responsible for that agent request, everywhere in our organization,” said Jason Clinton, CISO of model provider Anthropic.
Identifying human employees is something companies have been doing for a very long time. They have specific jobs; they have an email address they use to sign into accounts and that IT administrators can track; they have physical laptops with accounts that can be locked. They are granted individual permissions to access certain data.
A variation of this kind of employee access and identification could be deployed for agents.
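As a rough illustration of what that might look like, here is a minimal sketch in Python. Every name in it (AgentIdentity, can_access, the scope strings) is a hypothetical invented for this example, not any vendor's actual API; the point is simply that an agent record ties a unique agent ID to a responsible human and an explicit, deny-by-default set of permissions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical identity record for an AI agent, modeled on a human employee account."""
    agent_id: str              # unique ID, analogous to an employee login
    responsible_human: str     # the person accountable for this agent's requests
    allowed_scopes: frozenset  # data the agent is explicitly permitted to read

def can_access(agent: AgentIdentity, resource_scope: str) -> bool:
    """Deny by default: the agent sees only what it was explicitly granted."""
    return resource_scope in agent.allowed_scopes

# Example: an HR-summary agent owned by a named employee
hr_agent = AgentIdentity(
    agent_id="agent-hr-summarizer-01",
    responsible_human="jane.doe@example.com",
    allowed_scopes=frozenset({"hr/policies", "hr/handbook"}),
)

print(can_access(hr_agent, "hr/policies"))      # True: explicitly granted
print(can_access(hr_agent, "finance/payroll"))  # False: never granted
```

In a real deployment, those scopes would presumably come from the same identity and access management system that already governs human accounts, so that locking an agent out works the same way as locking out an employee.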
Both Betz and Clinton believe this process could prompt enterprise leaders to rethink how they provide access to users. It could even lead organizations to overhaul their workflows.
“Using an agentic workflow actually offers you the opportunity to bind each step along the way to the data it needs as part of the task, but only the data it needs,” said Betz.
He added that agentic workflows “can help address some of those concerns about oversharing,” because companies must consider what data is being accessed to complete actions. Clinton added that in a workflow designed around a specific set of operations, “there’s no reason why step one should have access to the same data that step seven needs.”
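Clinton's "step one versus step seven" point can be sketched generically: a workflow runner hands each step only the data scopes it declares, instead of granting the agent blanket access. The names below (WorkflowStep, run_workflow, the scope keys) are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WorkflowStep:
    name: str
    required_scopes: set[str]    # only the data this step declares it needs
    run: Callable[[dict], dict]  # receives just the granted data, nothing else

# Toy data store standing in for enterprise systems
DATA_STORE = {
    "tickets/open": ["ticket-101", "ticket-102"],
    "customers/contacts": ["alice@example.com"],
    "finance/invoices": ["inv-9001"],  # exists, but no step below requests it
}

def run_workflow(steps: list[WorkflowStep]) -> None:
    for step in steps:
        # Scope the data handed to each step: step one never sees step seven's data.
        granted = {k: v for k, v in DATA_STORE.items() if k in step.required_scopes}
        result = step.run(granted)
        print(f"{step.name}: used {sorted(granted)} -> {result}")

steps = [
    WorkflowStep("triage", {"tickets/open"},
                 lambda data: {"open_count": len(data["tickets/open"])}),
    WorkflowStep("notify", {"customers/contacts"},
                 lambda data: {"emailed": data["customers/contacts"]}),
]
run_workflow(steps)
```

The design choice this sketch captures is that a compromised or misbehaving step can only leak what it was granted, which is exactly the oversharing concern Betz describes.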
The old-fashioned audit is not enough
Enterprises can also look for agentic platforms that let them peek into how agents work. For example, Don Schuerman, CTO of workflow automation provider Pega, said his company helps ensure agentic security by telling the user what the agent is doing.
“Our platform is already being used to audit the work humans are doing, so we can also audit every step an agent is taking,” Schuerman told VentureBeat.
Pega’s newest product, AgentX, allows human users to toggle to a screen showing the steps an agent is taking. Users can see where along the workflow timeline the agent is and get a readout of its specific actions.
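Pega hasn't published the internals behind that screen, but the underlying idea, an append-only record of every step an agent takes that a human can replay as a timeline, can be sketched generically. The AuditLog class below is a hypothetical illustration, not the AgentX API.

```python
from datetime import datetime, timezone

class AuditLog:
    """Hypothetical append-only log of agent actions for later human review."""
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, agent_id: str, step: str, detail: str) -> None:
        """Append one timestamped action; entries are never edited or removed."""
        self._entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "step": step,
            "detail": detail,
        })

    def timeline(self, agent_id: str) -> list[str]:
        """Readout of everything one agent did, in order."""
        return [f"{e['timestamp']}  {e['step']}: {e['detail']}"
                for e in self._entries if e["agent_id"] == agent_id]

log = AuditLog()
log.record("agent-hr-01", "search", "queried internal handbook")
log.record("agent-hr-01", "summarize", "drafted answer for user")
for line in log.timeline("agent-hr-01"):
    print(line)
```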
Audits, timelines and identification are not perfect solutions to the security problems presented by AI agents. But as enterprises explore agents’ potential and begin to deploy them, more targeted answers may emerge as AI experimentation continues.