Security leaders are increasingly worried about AI-generated code – but feel they can’t prevent staff from using it


Almost all security leaders in the US, UK, Germany and France are worried that the use of AI-generated code within their organization could lead to a security incident, according to a new report.

The vast majority (92%) of security leaders expressed concern over the extent to which developers are already using AI-generated code within their companies, citing the questionable integrity of code produced with generative AI and a lack of oversight into when AI is being used.

Even as security leaders register their concern, data shows that AI-generated code is becoming commonplace. More than eight in ten (83%) organizations that took part in the survey are already using AI to generate code and 57% of respondents described using AI for coding assistance as standard practice.

The cybersecurity firm Venafi collected responses from 800 security decision-makers across these regions for its new report, Organizations Struggle to Secure AI-Generated and Open Source Code.

While 63% of respondents told Venafi that they had considered banning the use of AI in coding due to the security risks, almost three-quarters (72%) admitted they feel forced to permit AI code in order to keep pace with competitors.

Data from GitHub released in August suggests that AI coding tools are already saving developers time, with productivity and code quality improving at the majority of firms surveyed by the developer platform.

But security professionals are concerned that, with AI-generated code, this speed might come at the expense of quality, and they face a dilemma over how to control the potential risks of developers using AI tools to handle sensitive code.

Almost two-thirds of respondents (63%) stated that it’s already impossible for security teams to track or effectively police AI-powered developers. Shadow AI, the practice of employees using AI tools without authorization, can be hard to spot or stop without clear policies governing AI use. Only 47% of respondents reported having such policies in place.

As a result, security leaders feel they are losing control and that businesses are being put at risk. This is having a direct negative impact on their mental health: 59% of respondents told Venafi they lose sleep over the risks AI poses.

“The recent CrowdStrike outage shows the impact of how fast code goes from developer to worldwide meltdown,” said Kevin Bocek, chief innovation officer at Venafi.

“Code now can come from anywhere, including AI and foreign agents. There is only going to be more sources of code, not fewer. Authenticating code, applications and workloads based on its identity to ensure that it has not changed and is approved for use is our best shot today and tomorrow.”
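
One concrete way to read “has not changed and is approved for use” is an integrity check of a build artifact against an internally approved list of digests before it is allowed to run. The Python sketch below is a minimal, hypothetical illustration; the allowlist file name and helper function are invented for the example and do not describe Venafi’s or any other vendor’s product.

```python
# Hypothetical illustration: allow an artifact to deploy only if its SHA-256
# digest appears on an internally approved allowlist.
import hashlib
from pathlib import Path

# One hex digest per line; the file name is an assumption for this example.
APPROVED_DIGESTS = set(Path("approved_digests.txt").read_text().split())

def is_approved(artifact_path: str) -> bool:
    """Return True only if the artifact's digest matches an approved entry."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return digest in APPROVED_DIGESTS
```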

The risks of overreliance on AI and open source

Bocek described developers as “between a rock and a hard place” when it comes to deciding whether or not to use AI. He stated that while there’s a clear need to check code, those creating software won’t want to go back to a world without AI assistance.

“Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks – recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg.

“Anyone today with a [large language model (LLM)] can write code, opening an entirely new front. It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents, or someone in finance getting code from an LLM trained on who knows what.”

Security leaders’ main concerns center on fears that developers could become over-reliant on AI, leading to lower standards; that AI-written code won’t be properly quality checked; and that AI will use outdated open source libraries that haven’t been well maintained.

While 90% of security leaders reported trust in open source libraries, 75% said it’s impossible to verify the security of every line of open source code. Code signing was suggested as a potential solution to these open source supply chain concerns, which could have the knock-on effect of improving training data for AI.

“In a world where AI and open source are as powerful as they are unpredictable, code signing becomes a business’ foundational line of defense,” said Bocek.

“But for this protection to hold, the code signing process must be as strong as it is secure. It’s not just about blocking malicious code — organizations need to ensure that every line of code comes from a trusted source, validating digital signatures and guaranteeing that nothing has been tampered with since it was signed.”
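
The kind of check Bocek describes can be shown in a few lines. The sketch below verifies a detached RSA signature over a build artifact using Python’s cryptography library; it is a minimal example under assumed conventions (PEM-encoded publisher key, PKCS#1 v1.5 padding, SHA-256), not a description of Venafi’s tooling, and the file paths and function name are illustrative.

```python
# Minimal sketch of signature verification: accept an artifact only if it was
# signed by the expected publisher key and has not changed since signing.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_artifact(artifact_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Return True only if the detached signature matches the artifact."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        # Raises InvalidSignature if the artifact was tampered with or was
        # signed by a different key.
        public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

In a build or deployment pipeline, a failed check like this would block the artifact outright rather than merely log a warning.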

