Microsoft Copilot could have serious vulnerabilities after researchers reveal data leak issues in RAG systems


Researchers have discovered a huge potential problem in retrieval augmented generation (RAG) systems, the backend technology of tools such as Microsoft Copilot currently used today.

Based at the University of Texas, a group of five researchers claimed to have discovered a class of security vulnerabilities they dubbed ‘ConfusedPilot.’ They say these vulnerabilities can “confuse” Copilot for Microsoft 365 into committing confidentiality violations. 

Researchers described RAG models as being susceptible to the “confused deputy” problem, which is “where an entity in an enterprise without permission to perform a particular action can trick an over-privileged entity into performing this action on its behalf.”

They demonstrate two variations of vulnerability, in the first instance explaining how embedding malicious text into a modified prompt can corrupt the responses generated by the large language model (LLM).

Secondly, they explore a vulnerability that “leaks secret data” by leveraging the caching mechanism during retrieval, before investigating how these vulnerabilities in unison can be “exploited to propagate misinformation within the enterprise.” 

Notably, the threat of these attacks is most prevalent from an internal point of view. The report imagines the damage being done by an employee within the organization who leverages the vulnerabilities to gain access to information beyond their privileges. 

A malicious actor could, for example, create a fake sales report containing false information that affects Copilot’s decision-making. It could also contain further instructions for Copilot to act differently when it accesses it. 

The research team said the investigation highlighted the potential risks associated with RAG systems, raising serious questions for enterprise users of popular AI tools.

“While RAG-based systems like Copilot offer significant benefits to enterprises in terms of efficiency in their everyday tasks, they also introduce new layers of risk that must be managed,” they said. 

Andrew Bolster, senior research and development manager of data science at Synopsys, noted this is a vulnerability that has consequences for all RAG systems. 

“Copilot for Microsoft 365 is the demonstrated target for this attack, but it’s not alone in this threat model; these same attacks apply to many enterprise RAG system where there is permissive internal access to data that will be included in the global ‘RAG’,” Bolster told ITPro

“What information security leaders should take away from this paper is that while RAG is extremely powerful for leveraging generative AI systems against private, internal, or confidential enterprise data; any RAG system is only as good as the data that is made available to it,” he added. 

Data governance should take center stage to avoid problems

Generative AI adoption must be done in tandem with “thoughtful and well-structured” data governance regimes, Bolster noted. This will ensure the proper “separations” exist when it comes to accessible data that may impact the behavior of RAG systems for other users. 

“Much the same way that leaders establish verification and approval chains for public marketing publications or technology documentation; internal knowledge bases should maintain mechanisms for persisting data lineage and approval status for being included in global RAG,” Bolster said. 

He added that any leaders looking to generative AI should carefully consider feedback moving forward as the research itself doesn’t “fully close the loop” on the issue. 




Source link

Exit mobile version