Agentic AI could be a blessing and a curse for cybersecurity

Agentic AI systems will “further revolutionize cyber criminal tactics,” according to new research from Malwarebytes.

In its 2025 State of Malware report, the security firm warned that businesses need to be prepared for AI-powered ransomware attacks. The firm specifically highlighted the threat posed by malicious AI agents that can reason, plan, and use tools autonomously.

The report claimed that, so far, the impact of generative AI tools on cyber crime has been relatively limited. This is not because they cannot be used offensively, however: there have been notable examples of generative AI being used to craft phishing content and, in limited cases, even to produce exploits.

In the main, though, their offensive use has increased the efficiency of attacks rather than introduced new capabilities or altered the underlying tactics used by hackers.

This could all be about to change in 2025, however, according to Malwarebytes, which argued that agentic AI could help attackers not only to scale up the volume and efficiency of their attacks, but also to strategize on how to compromise victims.

“With the expected near-term advances in AI, we could soon live in a world where well-funded ransomware gangs use AI agents to attack multiple targets at the same time,” Malwarebytes warned.

“Malicious AI agents might also be tasked with searching out and compromising vulnerable targets, running and fine-tuning malvertising campaigns or determining the best method for breaching victims.”

Use of offensive agentic AI could be years away

That isn’t to say agentic AI lacks defensive applications, though. Malwarebytes noted the technology could also be used to address the cybersecurity skills gaps that plague the industry.

As these systems become more capable, security teams will increasingly be able to hand off parts of their workload to autonomous agents that can handle them with minimal oversight.

“It is not far-fetched to imagine agents being tasked with looking out for supply-chain vulnerabilities, keeping a running inventory of internet-facing systems and ensuring they’re patched, or monitoring a network overnight and responding to suspicious EDR alerts,” the report argued.

ReliaQuest, which claimed to have launched the first autonomous AI security agent in September 2024, recently said its agent can process security alerts 20 times faster than traditional methods, with 30% greater accuracy at picking out genuine threats.

Speaking to ITPro, Sohrob Kazerounian, distinguished AI researcher at AI security specialists Vectra AI, acknowledged the efficiency gains generative AI has already unlocked for threat actors, but agreed that the more interesting shift will come as attackers experiment with AI agents.

“In the near term, we will see attackers focus on trying to refine and optimize their use of AI. This means using generative AI to research targets and carry out spear phishing attacks at scale. Furthermore, attackers, like everyone else, will increasingly use generative AI as a means of saving time on their own tedious and repetitive actions,” he explained.

“But, the really interesting stuff will start happening in the background, as threat actors begin experimenting with how to use LLMs to deploy their own malicious AI agents that are capable of end-to-end autonomous attacks.”

But Kazerounian said the reality of cyber criminals integrating AI agents into their operations is still years away, as these systems will require significant fine-tuning and troubleshooting before they become truly effective.

“While threat actors are already in the experimental phase, testing how far agents can carry out complete attacks without requiring human intervention, we are still a few years away from seeing these types of agents being reliably deployed and trusted to carry out actual attacks,” he argued.

“While such a capability would be hugely profitable in terms of time and cost of attacking at scale, autonomous agents of this sort would be too error-prone to trust on their own.”

Regardless, Kazerounian said the industry should be getting ready for this eventuality, as it will require significant changes to the traditional approach to threat detection.

“Nevertheless, in the future we expect threat actors will create Gen AI agents for various aspects of an attack – from research and reconnaissance, flagging and collecting sensitive data, to autonomously exfiltrating that data without the need for human guidance. Once this happens, without signs of a malicious human on the other end, the industry will need to transform how it spots the signs of an attack.”

