RSAC Conference 2025 was a sobering reminder of the challenges facing cybersecurity professionals

RSAC Conference 2025 has come to an end, having once again acted as a leading platform for cybersecurity professionals to share the latest data, go into detail on new products and services, and debate some of the trickiest topics in their field.
Throughout the week-long event, attendees heard about some of the nastiest ransomware groups currently operating, new advantages for defenders, and emerging concerns that will shape the field in the coming years.
Generative AI has, as predicted, dominated the discussion at RSAC Conference 2025, with a range of talks covering the extent to which it can empower cybersecurity teams and threat actors alike.
In keynote sessions, attendees were told how generative AI security tools such as Microsoft’s Security Copilot agents and Google’s Gemini security offerings can help cybersecurity analysts cut down their workloads and detect threats before they become a problem.
Throughout, however, it’s been evident that even the firms promoting AI uptake are keeping an eye on the potential for malicious use of AI and unforeseen AI harms. In particular, experts noted that the adoption of autonomous AI agents could come with great risks if not done right, even as the technology is used to empower cybersecurity teams.
“This is going to come with a whole new class of risks that we’ve never seen before, that we have to make sure we actually mitigate ourselves against,” said Jeetu Patel, EVP and chief product officer at Cisco, in the event’s opening keynote.
It’s no surprise that the RSAC Conference 2025 audience would be approaching AI from a place of skepticism, or even downright cynicism. Cybersecurity professionals are suspicious of technological hype by their very nature and the first to question whether digital transformation projects will introduce new vulnerabilities to their environments.
This isn’t to say that the old stereotype of ‘the department of no’ is true. John ‘Four’ Flynn, VP of Security and Privacy at Google DeepMind, noted that AI developers such as his own are themselves closely monitoring potential safety flaws in AI models, and urged security teams in customer organizations to have a plan to monitor AI behavior post-deployment.
If anyone in your office is going to be asking how predictable, open, and abusable an AI model is, it’ll be your CISO. This sentiment is backed up by recent Exabeam research, which found a widening gap between executives and cybersecurity analysts when it comes to AI enthusiasm.
The disparity was on full display when respondents were asked how much AI has improved departmental productivity – 77% of executives said it had driven significant improvement versus just 22% of analysts – while also highlighting that while over half (53%) of executives thought AI would increase job security, under a fifth (19%) of analysts felt the same.
But this isn’t necessarily cause for despair. Throughout the event’s sessions, experts acknowledged that the cybersecurity community simply has to come to terms with AI, given the rate at which companies are adopting it.
“Start using AI,” said Daniel Rohrer, VP of Software Product Security, Architecture & Research at Nvidia, urging teams to get hands-on with everything from the simplest Copilot use cases to more complex deployments of AI agents.
Contributing to a session focused on the security challenges associated with AI, Rohrer added that sometimes getting the ball rolling can be as simple as pairing cybersecurity employees with their organization’s data scientists, so they can compare notes and ensure AI adoption is secure by design.
Talks at RSAC Conference 2025 appeared to deliver a tone of rugged optimism, a kind of ‘roll up your sleeves and seize tomorrow’ message that balanced very real AI anxieties with a sense that if the security community can get a handle on the technology now, they’ll have the upper hand on attackers for years to come.
Panelists stressed that achieving this will require rapid action, with Jade Leung, CTO at the UK AI Security Institute, warning that emerging AI threats are moving faster than some have anticipated.
Getting ahead will require collaboration as well as technical excellence, attendees were repeatedly reminded. That this is something cybersecurity professionals can get on board with is self-evident – indeed, RSAC Conference 2025 is itself a testament to the collaborative nature of the community.
A focus on the fundamentals
It’s easy to get swept up in AI hype and overly theoretical conversations about the future of cybersecurity. But RSAC Conference 2025 pulled off a good balance between these talks and more actionable, practical advice.
In keynote speeches by the likes of John Fokker, head of threat intelligence at Trellix, as well as cybersecurity stalwart Kevin Mandia, former CEO of Mandiant and founder of the cybersecurity VC firm Ballistic Ventures, attendees were brought back down to Earth.
Fokker grounded his keynote speech largely in his hands-on work tracking members of the Conti group, notorious for its destructive ransomware campaigns.
Taking a similar tack in discussion with author and former cybersecurity reporter Nicole Perlroth, Mandia focused largely on the threat posed by China-backed threat actors and the emerging attack methodology demonstrated by these state-sponsored groups.
After so many days of AI-focused discourse, it was refreshing to hear Mandia advocate for as low-tech a solution as you can get to manage these rising threats: good cyber hygiene.
Mandia admitted that, in spite of what he’s been saying for years, recent data shows many breaches remain preventable, first and foremost through proactive patch management. Though he added that identity management and AI will become increasingly important, this was a moment of welcome ‘eat your greens’ simplicity in an otherwise multifaceted week.
There’s a world-weariness that comes with security events, a mood that’s hardly surprising when you consider the pressure CISOs are under from constant attempts to breach their organizations. But this only goes to make those moments of real enthusiasm stand out even more – as has been the case for some of the talks on AI agents for security.
RSAC Conference 2026 will, inevitably, revisit many of the major themes from this year. While attendees will walk away more convinced than ever that attackers will try to take them on, they can also hold onto the fact that the entire community is working on these problems.