Nearly half of developers using AI in their development work say their codebases are now largely AI-generated, new research shows.
A survey from Cloudsmith found that 42% of developers said their codebases are now mostly AI-generated, with respondents noting that AI has markedly improved their productivity and efficiency.
Yet despite the influx of AI-generated code, long-standing best practices are being overlooked, the study warned. Just over two-thirds (67%) of developers said they review code before deployment, meaning roughly a third do not, raising concerns over software security.
Glenn Weinstein, CEO at Cloudsmith, said the use of AI in software development does present opportunities for development teams, but warned against placing complete faith in AI.
“Software development teams are shipping faster, with more AI-generated code and AI agent-led updates,” he said.
“AI tools have had a huge impact on developer productivity, which is great. That said, with potentially less human scrutiny on generated code, it’s more important that leaders ensure the right automated controls are in place for the software supply chain.”
The study noted that a growing number of developers are not only becoming reliant on AI-generated code, but are also placing a greater degree of trust in code written by AI tools.
Around 20% said they trust AI-generated code “completely”, the study found.
Notably, some in the profession are taking a more considered approach to AI code generation. More than half (59%) said they apply additional scrutiny to AI-generated packages, for example, but a gap in enforcement is emerging at some enterprises.
Around 17% said they have no policies in place governing the use of AI in development processes or of AI-generated code. Meanwhile, only around one-third (34%) said they use tools that enforce policies specific to AI-generated packages, leaving the majority without automated guardrails and potentially open to threats.
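To give a rough sense of what such enforcement can look like in practice, here is a minimal sketch of a CI-style policy gate in Python. The file names (approved_packages.txt as an allowlist checked against a standard requirements.txt) are illustrative assumptions, and real enforcement tools apply far richer rules than a simple name check.

```python
import re
import sys
from pathlib import Path

def load_names(path: str) -> set[str]:
    """Read package names, one requirement per line, ignoring comments."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # Strip version specifiers, extras, and markers from the name.
            names.add(re.split(r"[=<>!~\[;\s]", line, maxsplit=1)[0].lower())
    return names

def main() -> int:
    approved = load_names("approved_packages.txt")  # hypothetical allowlist
    requested = load_names("requirements.txt")
    violations = sorted(requested - approved)
    for name in violations:
        print(f"POLICY VIOLATION: {name} is not on the approved list")
    return 1 if violations else 0  # non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```

The point of the non-zero exit code is that the check runs automatically on every build, so unapproved dependencies are caught whether or not a human happens to review the change.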
The rise of generative AI in software development has been mirrored by a significant rise in "AI-specific exploits", Cloudsmith noted. Among those highlighted in the study was "slopsquatting", whereby attackers register hallucinated package names suggested by coding assistants, so developers who install the suggested package unknowingly pull in malicious code.
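As an illustration of the kind of automated check that can blunt slopsquatting, the sketch below queries PyPI's public JSON metadata API to confirm that a suggested package name actually exists, and flags names that were only recently published. The script and its MAX_AGE_DAYS threshold are assumptions for demonstration, not a vetted defense.

```python
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # PyPI's public metadata API
MAX_AGE_DAYS = 90  # hypothetical threshold: very new packages get flagged

def vet_package(name: str) -> None:
    """Flag AI-suggested package names that are missing or suspiciously new."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name)) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Name not on PyPI: either a hallucination, or a slot an
            # attacker could register and fill with malicious code.
            print(f"BLOCK  {name}: not on PyPI, possible hallucinated name")
            return
        raise

    # The earliest upload time across all releases approximates package age.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"REVIEW {name}: registered but has no uploaded files")
        return

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < MAX_AGE_DAYS:
        print(f"REVIEW {name}: first published only {age_days} days ago")
    else:
        print(f"OK     {name}: on PyPI since {min(uploads).date()}")

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        vet_package(pkg)
```

Run as, for example, `python vet.py requests some-hallucinated-pkg`: an established package passes, a nonexistent name is blocked outright, and a freshly registered name is routed to human review rather than installed automatically.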
Developers and security practitioners alike also voiced concerns over their ability to spot potential exploits, with just 29% saying they feel "very confident" in their ability to detect vulnerabilities.
This is particularly risky when working with open source libraries, the study warned, from which AI tools are likely to draw their suggestions.
AI-generated code is in vogue
The use of AI-generated code has become a major talking point in the tech industry over the last year, with several leading companies embracing the trend to speed up development.
In November last year, Google CEO Sundar Pichai revealed that around a quarter of the tech giant’s internal source code was AI-generated, and that’s likely increased since then.
Speaking during an earnings call at the time, Pichai said Google was using AI across development teams both to speed up coding processes and to reduce manual toil for developers.
Notably, Pichai insisted that all AI-generated code was subject to robust safety checks, with engineers kept in the loop to review it.
Microsoft has followed suit. During an appearance at Meta's LlamaCon conference in April, CEO Satya Nadella told Mark Zuckerberg that up to 30% of the company's code was written with AI.
“I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,” Nadella told Zuckerberg.
Nadella said he expects the volume of AI-generated code at the company to increase steadily in the coming years.