Cyber professionals call for a ‘strategic pause’ on AI adoption as teams scramble to secure tools

More than a third of security leaders and practitioners admit that generative AI is moving faster than their teams can manage.
Almost half (48%) told penetration testing firm Cobalt that they’d like to have a ‘strategic pause’ to recalibrate their defenses against generative AI-driven threats – something they know they’re not likely to get.
More than seven in ten (72%) cited generative AI-related attacks as their top IT risk, but a third still aren’t conducting regular security assessments, including penetration testing, for their LLM deployments.
“Threat actors aren’t waiting around, and neither can security teams,” said Gunter Ollmann, CTO at Cobalt.
“Our research shows that while genAI is reshaping how we work, it’s also rewriting the rules of risk. The foundations of security must evolve in parallel, or we risk building tomorrow’s innovation on today’s outdated safeguards.”
Security leaders at C-suite and VP level are more concerned than practitioners about long-term generative AI threats such as adversarial attacks – an issue for 76%, compared with just 68% of security practitioners.
However, 45% of practitioners expressed concern about near-term operational risks such as inaccurate outputs, compared with only 36% of security leaders.
Security leaders are also more likely than practitioners to consider changing their team’s approach to cybersecurity defense in light of potential generative AI-driven attacks, at 52% versus 43%.
Top concerns among all survey respondents included sensitive information disclosure (46%), model poisoning or theft (42%), inaccurate data (40%), and training data leakage (37%).
Meanwhile, half of respondents said they wanted more transparency from software suppliers about how they detect and prevent vulnerabilities, signaling a growing trust gap in the AI supply chain, the researchers said.
Many organizations lack the in-house expertise to adequately assess, prioritize, and remediate complex LLM-specific vulnerabilities.
This can lead to an over-reliance on third-party model providers or tool vendors for fixes – some of which may not prioritize these security issues as quickly or effectively as they should, particularly if the vulnerability lies within the foundation model itself.
LLM analysis uncovers worrying flaws
Analysis based on data collected during Cobalt pentests showed that while 69% of serious findings across all test categories are resolved, only 21% of the high-severity vulnerabilities found in LLM pentests are fixed.
This is a concern, researchers said, given that 32% of LLM pentest findings are serious – and that 21% figure is the lowest resolution rate across all test types the company conducts.
The mean time to resolve (MTTR) for those serious LLM findings that do get fixed is a rapid 19 days, the shortest MTTR across all pentest types, but this is probably partly because organizations tend to prioritize quicker, and often simpler, fixes.
“Much like the rush to cloud adoption, genAI has exposed a fundamental gap between innovation and security readiness,” said Ollmann.
“Mature controls were not built for a world of LLMs. Security teams must shift from reactive audits to programmatic, proactive AI testing — and fast.”