Enterprises are worried about agentic AI security risks – Gartner says the answer is just adding more AI agents


With enterprises ramping up the use of AI agents, new research suggests many might turn to the technology itself to establish guardrails.

Analysis from Gartner shows ‘guardian agents’ will account for between 10% and 15% of the broader agentic AI market by 2030. These agents are designed specifically to support and mediate interactions with AI agents, the consultancy explained.

“They function as both AI assistants, supporting users with tasks like content review, monitoring and analysis, and as evolving semi-autonomous or fully autonomous agents, capable of formulating and executing action plans as well as redirecting or blocking actions to align with predefined agent goals,” according to Gartner.

The rise of these guardian agents comes amid growing interest in agentic AI, the consultancy found.

In a poll of CIOs and IT leaders, 24% of respondents said they have already deployed “a few” AI agents, meaning fewer than a dozen. Just 4% said they had deployed more than that, the survey found.

However, the poll found that 50% of respondents were currently researching or experimenting with the technology, underlining the growing interest among tech leaders. An additional 17% said they plan to deploy AI agents by the end of 2026.

Avivah Litan, VP Distinguished Analyst at Gartner, said the projected uptake of AI agents means many enterprises need to implement robust guardrails. With this in mind, deploying agents designed specifically for governance-related tasks could be the go-to approach.

“Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails,” Litan said.

“Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based, deterministic evaluations to oversee and manage the full range of agent capabilities, balancing runtime decision making with risk management.”

Agentic AI security threats are looming

According to Gartner polling, agents will likely be deployed across a wide range of business functions in the year ahead – particularly in areas such as IT, accounting, and human resources.

But while these agents are designed to support workers and drive productivity, there are key security considerations that tech leaders need to be wary of.

“As use-cases for AI agents continue to grow, there are several threat categories impacting them, including input manipulation and data poisoning, where agents rely on manipulated or misinterpreted data,” the consultancy said.

Credential hijacking was identified as a major threat facing enterprises deploying AI agents, while agent interaction with “fake or criminal websites and sources” could result in poisoned actions.
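To make that last threat concrete, here is a minimal, hypothetical Python sketch of the kind of deterministic pre-execution check a guardian agent might apply: vetting an agent's proposed outbound request against a domain allowlist before it runs. The function names, domains, and policy here are illustrative assumptions, not a design specified by Gartner.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains an agent may contact.
# In practice this would come from enterprise policy, not a hard-coded set.
APPROVED_DOMAINS = {"api.internal.example.com", "docs.example.com"}

def is_action_permitted(tool_name: str, target_url: str) -> bool:
    """Block agent actions that reach outside approved domains.

    A simple, deterministic check of the sort a guardian agent might
    run before an action executes. Purely illustrative.
    """
    host = urlparse(target_url).hostname or ""
    if host not in APPROVED_DOMAINS:
        # Treat unknown hosts as potentially fake or criminal sources.
        print(f"BLOCKED: {tool_name} -> {host} is not on the allowlist")
        return False
    return True

# Example: a poisoned instruction steering the agent toward an unknown
# site is stopped before the request is ever made.
is_action_permitted("fetch_page", "https://payr0ll-update.example-scam.net/login")
```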

“The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight,” said Litan.

“As companies move towards complex multi-agent systems that communicate at breakneck speed, humans cannot keep up with the potential for errors and malicious activities.

“This escalating threat landscape underscores the urgent need for guardian agents, which provide automated oversight, control, and security for AI applications and agents.”

What CIOs need to consider when using ‘guardian agents’

Gartner said CIOs, security leaders, and AI practitioners should focus on three distinct types of ‘guardian agents’ to improve safety and security.

These include ‘reviewers’, which could be used to identify and review AI-generated outputs and content for “accuracy and acceptable use”.

‘Monitor’ agents are designed to observe and track AI and agentic actions for human workers, while ‘protectors’ can adjust or block actions and permissions during operations.
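As a rough illustration of how these three roles might fit together, the hypothetical Python sketch below wires a reviewer, a monitor, and a protector around a single agent action. The class and function names, and the trivial checks they perform, are assumptions for illustration only; Gartner's research does not prescribe an implementation.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class AgentAction:
    """A proposed agent action: which tool to run and with what payload."""
    tool: str
    payload: str

def reviewer(output: str) -> bool:
    """'Reviewer': screens AI-generated output for acceptable use.
    A trivial keyword check stands in for a real policy evaluation."""
    banned = {"password", "ssn"}
    return not any(term in output.lower() for term in banned)

def monitor(action: AgentAction) -> None:
    """'Monitor': observes and records agent actions for human review."""
    logging.info("Agent action observed: tool=%s payload=%r",
                 action.tool, action.payload)

def protector(action: AgentAction) -> AgentAction | None:
    """'Protector': adjusts or blocks actions during operation.
    Here, anything touching credentials is blocked outright."""
    if "credential" in action.payload.lower():
        logging.warning("Blocked action touching credentials: %s", action.tool)
        return None
    return action

# Example flow: the monitor logs the action, the protector vets it,
# and the reviewer screens whatever output the agent produced.
action = AgentAction(tool="send_email", payload="Quarterly report attached")
monitor(action)
if (vetted := protector(action)) is not None:
    output = f"Draft email for tool {vetted.tool}"
    if reviewer(output):
        print("Action approved:", output)
```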

“Guardian agents will manage interactions and anomalies no matter the usage type,” the consultancy said. “This is a key pillar of their integration, since Gartner predicts that 70% of AI apps will use multi-agent systems by 2028.”
