Almost half of office workers say they’re using AI tools that aren’t provided by their employer, with nearly a third keeping it a secret.
For 36%, the reason is that they feel it gives them a secret advantage, while three-in-ten worry their job may be cut. More than a quarter report suffering from AI-fueled imposter syndrome, admitting they don't want people to question their ability.
Findings from Ivanti's 2025 Technology at Work Report: Reshaping Flexible Work show that the use of AI at work is rising, with 42% of employees now using the technology in their daily workflow.
This, the study noted, marks a significant increase on the year prior, when just one-quarter said they used AI in their role.
IT professionals are even keener on AI, with three-quarters using it. But even though they’d be expected to be more aware of the security risks, 38% are still using unauthorized tools.
This growing trend of covert AI use is a serious cause for concern, Ivanti noted, and one that bosses need to begin cracking down on.
“Employees are using AI tools without their bosses’ knowledge to boost productivity. It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards,” said Brooke Johnson, Ivanti chief legal counsel and SVP of HR and security.
“Employees adopting this technology without proper guidelines or approval could be fueling threat actors, violating company contracts, and risking valuable company IP.”
Shadow AI could cause a security disaster
Ivanti warned that the use of unauthorized AI tools at work is putting many organizations at risk – and it isn’t the only study in the last year to emphasize the dangers.
Research from Veritas Technologies, for example, found that 38% of UK office workers said that they or a colleague had fed an LLM sensitive information such as customer financial data.
However, six-in-ten failed to realize that doing so could leak confidential information and breach data privacy regulations.
Meanwhile, analysis from BCS last year warned that staff using non-approved tools risk breaching data privacy rules, exposing themselves to security vulnerabilities, and even infringing intellectual property rights.
“To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications,” said Johnson.
“By fostering an open dialogue, employers can encourage transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively.”
A raft of major firms have already cracked down on the use of AI at work, most notably Apple, which implemented strict controls on the use of ChatGPT not long after it launched in late 2022.
Amazon and JP Morgan implemented similar policies, while Samsung took drastic action after an engineer accidentally leaked sensitive information by uploading code to the popular chatbot.
But it's not just a question of policies, Johnson noted: organizations also need to do more to monitor whether those policies are actually being followed.