Security leaders are concerned about their employees feeding sensitive internal company data into generative AI tools, but are unsure how to address the challenge.
On stage at Venafi’s 2024 Machine Identity Summit in Boston, Jeff Hudson, CEO emeritus at Venafi, cited a recent study in which 39% of respondents said they had used personal AI systems on confidential company information.
Hudson warned that employees are using consumer AI systems to help them streamline their workflow, without considering the consequences of feeding enterprise data into public models.
“We know people download ChatGPT and they have a terabyte or 100GB customer data file, and they say, ‘tell me which customers I have a go-to-market strategy for’. You can just see people doing it and they don’t know any different,” he explained.
The problem, according to Hudson, is that policing this is “too slippery”: as soon as businesses say no to their staff, or block access to the platforms, employees simply find ways to circumvent these measures.
Hudson asked a panel of CISOs from leading US financial institutions how they were navigating this landscape fraught with potential privacy violations.
Togai Andrews, CISO at the US Bureau of Engraving and Printing, said he had been working on a governance policy to allow the responsible use of generative AI, but struggled to back it up with effective technical controls.
Andrews said this failure to enforce the policy was laid bare in a recent internal report on employee use of generative AI in the office, noting that he was virtually powerless to prevent it.
“A month ago I got a report that stated about 40% of our users were using [tools like] Copilot, Grammarly, or ChatGPT to make reports and to summarize internal documents, but I had no way of stopping it.”
As a result, he explained, he has shifted his approach toward ensuring employees have a better grasp of the data risks associated with using such tools in their day-to-day workflow.
“What I’ve really turned my focus on is just education, awareness, and open engagement with all my user communities on how to responsibly use the technology because I think that’s the best I have at this point.”
Businesses should find ways to let users scratch their generative AI itch
Speaking to ITPro, Colin Soutar, MD of risk & financial advisory at Deloitte & Touche LLP, said he has observed a lot of activity around businesses exploring generative AI for internal use cases, but more caution when rolling it out to customers.
“I think there’s a lot of interest and a lot of ability to generate use cases, though there’s now a secondary wave of being a little cautious. A lot of the use cases being deployed are typically for internal processes, [bringing] optimization, efficiency, etc. which is great, but I think there’s a little bit of reluctance to put that out into customers’ hands, and I think that’s a good thing.”
Marco Maiurano, EVP and CISO at First Citizens Bank, said he had to balance his attempts to prevent users accessing AI tools, driven by his organization’s regulatory environment, with demand from internal customers who wanted to use them.
This led Maiurano and his team to create a sandbox environment where they could build use cases with adequate boundaries and controls in place.
“It allows folks to scratch their itch,” he added, giving users a better alternative to using personal agents like ChatGPT, which could put sensitive information at risk.
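The article doesn’t detail what those boundaries and controls actually look like. As a rough illustration only, one common guardrail in such sandboxes is scrubbing obviously sensitive patterns from a prompt before it ever reaches an external model; the sketch below is a minimal, hypothetical Python example of that idea and does not describe First Citizens’ real setup.

```python
import re

# Illustrative sketch only: one possible sandbox "boundary" that redacts
# obvious sensitive patterns from a prompt before it is sent to any
# external generative AI endpoint. Patterns and labels are hypothetical.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each sensitive-data pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the account notes for jane.doe@example.com, SSN 123-45-6789."
    print(redact(raw))
    # -> Summarize the account notes for [EMAIL REDACTED], SSN [SSN REDACTED].
```

Real deployments layer this kind of filtering with access controls, logging, and approved internal model endpoints rather than relying on pattern matching alone.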
Hudson said this was the way he had seen businesses approach AI’s data protection issues, as there is no simple fix.
“I think that’s the way, at least for the hundreds of organizations I’ve talked to. That’s the way it works because if you get into the business of saying no to people, they go around you. It’s too slippery.”