Hackers could dupe Slack’s AI features to expose private channel messages


Slack’s LLM-powered AI tool can be tricked into leaking sensitive data from private channels in a new prompt-engineering attack, according to security researchers.

A report from LLM security specialists PromptArmor details a potential pathway for cyber criminals to use indirect prompt injection techniques to manipulate Slack AI to disclose data from channels they are not a part of.

Slack AI is a feature built into the messaging platform that lets users query messages in natural language. Initially, the feature was limited to just processing messages in the channel, but as of August 14, the AI can also ingest information from uploaded documents and Google Drive files.

The attack requires the attackers to use specially crafted queries to force the model to behave in nefarious ways. 

This particular method relies on prompt injection techniques, whereby hackers leverage an LLM’s inability to distinguish between a system prompt created by a developer and the rest of the context that is appended to the query.

Researchers at PromptArmor said the flaw could enable threat actors to steal anything a user has put in a private channel without having to have access to the channel themselves, by posting in a public channel.

It noted that the victim does not even need to be a member of the public channel, just within the same workspace as the attacker.

Akhil Mittal, senior security consulting manager at Synopsys Software Integrity Group, said the flaw raises further questions around the safety of many AI tools, stressing they require stronger data protection measures than previously thought.

“This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn’t see. This really makes me question how safe our AI tools are,” he said. 

“It’s not just about fixing problems but ensuring these tools properly manage our data. As AI becomes more common, it’s important for organizations to keep both security and ethics in mind to protect our information and keep trust.”

Recent Slack AI update allows for indirect prompt engineering

PromptArmor noted insider threats were already a significant issue affecting the workplace collaboration platform, citing recent leaks from Disney, Uber, EA, Twitter, and more that have involved Slack.

The report noted the new vulnerability “just explodes the risk as now an attack does not even need access to the private channel or data within Slack to exfiltrate it.”

The attack is possible because of the way Slack processes queries. Slack user queries retrieve information from both public and private channels, with data being taken from channels the user isn’t even part of.

The report quoted Slack’s response to its initial disclosure of the flaw, which claimed this mechanism was working as intended.

“[M]essages posted to public channels can be searched for and viewed by all Members of the workspace, regardless if they are joined to the channel or not. This is intended behavior.”

PromptArmor further demonstrated how this behavior could allow a hacker to exfiltrate API keys that a developer has put in a private channel. However, researchers noted the data does not need to be an API key, and the attacker would not need to know exactly what confidential data exists in a specific channel to exfiltrate it.

The report also flagged a second, similar attack chain where Slack AI is manipulated into rendering a phishing link to the user in markdown with the text ‘click here to reauthenticate’.

This involves posting a malicious message in a public channel, which contains just themselves. The attacker can reference any individual message, allowing for a range of spear phishing attacks targeting specific executives.

In addition, the 14 August update to Slack AI that allows the tool to reference most types of files uploaded to Slack as well as normal messages. 

This now opens the door to indirect prompt injection, where instead of an attacker needing to post a malicious instruction in a Slack message, they may not even need to be in Slack.

“If a user downloads a PDF that has one of these malicious instructions (e.g. hidden in white text) and subsequently uploads it to Slack, the same downstream effects of the attack chain can be achieved,” the report stated.

In a statement provided to ITPro a spokesperson for Slack said the firm had deployed a patch to prevent attackers from exploiting the flaw.

“On August 20, 2024, a security researcher published a blog disclosing an issue affecting Slack AI which could have, under very limited circumstances, enabled a malicious actor with access to a Slack workspace to trick another user in that same workspace into sharing sensitive data by crafting a phishing link,” they said.

“Slack deployed a patch to ensure that malicious actors cannot exploit this issue and trick other users into sharing sensitive data. Based on currently available information, we have no indication that unauthorized access to customer data occurred. Should we become aware of any unauthorized access to customer data, we’ll notify affected parties without undue delay.”




Source link

Exit mobile version