Featured Article: AI-Generated Code Blamed for 1-in-5 Breaches

A new report has revealed that AI-written code is already responsible for a significant share of security incidents, with one in five organisations suffering a major breach linked directly to code produced by generative AI tools.

Vulnerabilities Found in AI Code

The finding comes from cybersecurity company Aikido Security’s State of AI in Security & Development 2026, which features the results of a wide-ranging survey of 450 developers, AppSec engineers and CISOs across Europe and the US.

According to the study, nearly a quarter of all production code (24 per cent) is now written by AI tools, rising to 29 per cent in the US and 21 per cent in Europe. However, it seems that adoption has come at a cost. For example, the report shows that almost seven in ten respondents said they had found vulnerabilities introduced by AI-generated code, while one in five reported serious incidents that caused material business impact. As Aikido’s researchers put it, “AI-generated code is already causing real-world damage.”

Worse In The US

According to the report, the US appears to be hit hardest. For example, 43 per cent of US organisations reported serious incidents linked to AI-generated code, compared with just 20 per cent in Europe. The report attributes the gap to stronger regulatory oversight and stricter testing practices in Europe, where companies tend to catch problems earlier. European respondents recorded more “near misses”, indicating that vulnerabilities were identified before they could cause harm.

AI Changing The Development Landscape

AI coding assistants such as GitHub Copilot, ChatGPT and other generative tools are now integral to the software pipeline, promising faster output and fewer repetitive tasks, but they also introduce a new layer of risk.

Aikido’s data highlights that productivity gains can be offset by increased complexity and slower remediation. For example, teams now spend an average of 6.1 hours per week triaging alerts from security tools, with most of that time wasted on false positives. In larger environments, the triage burden grows to nearly eight hours a week where teams rely on multiple tools.

Leads To Dangerous Shortcuts

It seems that this problem can lead to dangerous shortcuts. For example, two-thirds of respondents admitted bypassing or delaying security checks due to a kind of alert fatigue. Developers under pressure to deliver have started to “push through” security warnings, creating a cycle where quick fixes outweigh caution.

Natalia Konstantinova, Global Architecture Lead in AI at BP, highlights the issue, saying: “AI-generated code shouldn’t be fully trusted, since it can cause serious damage. This is a reminder to carefully double-check its outputs.”

Accountability Is Becoming A Flashpoint

It seems that as AI-generated code makes its way into production, one of the biggest challenges is determining who is responsible when things go wrong.

Aikido’s survey shows a clear divide. For example, 53 per cent of respondents said security teams would be blamed if AI code caused a breach, 45 per cent blamed the developer who wrote the code, and 42 per cent blamed whoever merged it into production. The result, according to UK insurance and pensions company Rothesay’s CISO Andy Boura, is “a lack of clarity among respondents over where accountability should sit for good risk management.”

In fact, half of developers said they expect to shoulder the blame personally if AI-generated code they produced led to an incident, suggesting a growing culture of uncertainty and mistrust between teams.

The blurred lines are also fuelling tension between developers and security leaders. Many security professionals worry that AI-assisted development is moving too fast for proper oversight, while developers argue that outdated review processes are slowing down innovation.

“Tool Sprawl” Is Making Things Worse

Perhaps surprisingly, Aikido’s research found that organisations with more security tools were actually experiencing more security incidents. For example, companies using six to nine different tools reported incidents 90 per cent of the time, compared with 64 per cent for those using just one or two.

It seems this “tool sprawl” is also linked to slower fixes. Teams with multiple vendor tools took almost eight days on average to remediate a critical vulnerability, compared with just over three days in smaller, more consolidated setups.

The problem, according to Aikido, is not the tools themselves but the overhead they create, i.e., duplicate alerts, inconsistent data and fractured workflows that slow response times.

Walid Mahmoud, DevSecOps Lead at the UK Cabinet Office, comments on the issue: “Giving developers the right security tool that works with existing tools and workflows allows teams to implement security best practices and improve their posture.”

Teams using integrated, all-in-one platforms built for both developers and security professionals were twice as likely to report zero incidents compared with those using tools aimed at one group only.

Regional Differences In Oversight

The study draws a clear contrast between European and American approaches. For example, European teams tend to rely more on human oversight, manual reviews and compliance-based testing frameworks, while US teams are quicker to automate processes and deploy AI-generated code at scale.

Aikido’s figures show that 58 per cent of US teams track AI-generated code line by line, compared with just 35 per cent in Europe. That difference, coupled with the higher level of automation in US pipelines, may explain why more AI-related vulnerabilities are being detected (and exploited) there.

As Aikido puts it, “Europe prevents, the US reacts.” The slower, more regulated approach across Europe appears to be reducing the number of major breaches, even if it creates extra workload for developers.

Independent Findings Support The Trend

It should be noted here that the security concerns raised by Aikido are consistent with other recent studies. For example, Veracode’s 2025 GenAI Code Security Report found that 45 per cent of AI-generated code samples failed basic security tests. Java was the worst affected, with a 72 per cent failure rate, followed by C# (45 per cent), JavaScript (43 per cent) and Python (38 per cent).
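
To make the category of flaw concrete, the sketch below shows a classic pattern that basic security tests are designed to catch, i.e., untrusted input assembled directly into an SQL query, alongside the parameterised alternative. It is a generic Python illustration of the kind of injection weakness such scanners flag, not a sample drawn from Veracode’s study, and the table and column names are invented for the example.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Insecure pattern often seen in generated code: untrusted input is
    # concatenated straight into the SQL string. A username such as
    # "x' OR '1'='1" changes the meaning of the query (SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the database driver keeps the data separate
    # from the SQL syntax, so the same input is treated as a plain value.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```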

The Veracode team concluded that while AI tools can generate functional code quickly, they often fail to account for secure design or contextual logic. Their analysis showed little improvement in security quality between model generations, even as syntax accuracy improved.

Policy researchers are also warning of deeper structural issues. For example, the Center for Security and Emerging Technology (CSET) at Georgetown University has outlined three categories of risk from AI-generated code, i.e., insecure outputs, vulnerabilities in the AI models themselves, and wider supply chain exposure.

Also, research from OX Security has pointed to what it calls the “army of juniors” effect: AI tools can produce vast amounts of syntactically correct code but often lack the architectural understanding of experienced developers, multiplying low-level errors at scale.

Industry Perspectives On A Path Forward

Despite these warnings, it seems that optimism remains widespread. For example, 96 per cent of Aikido’s respondents believe AI will eventually be able to produce secure, reliable code, with nearly half expecting that within three to five years.

However, only one in five think AI will achieve that without human oversight. The consensus is that people will remain essential to guide secure design, architecture and business logic.

AI Can Check AI

There also appears to be growing belief that AI should be used to check AI. For example, nine out of ten organisations expect AI-driven penetration testing to become mainstream within around five and a half years, using autonomous “agentic” systems to identify vulnerabilities faster than human testers could.

“The 79 per cent are the smart ones,” said Lisa Ventura, founder of the UK’s AI and Cyber Security Association, referring to the roughly four in five respondents who expect human oversight to remain essential. “AI isn’t about replacing human judgment, it’s about amplifying it.”

This sentiment echoes a wider industry move towards what security leaders call “augmented development”, i.e., human-centred workflows supported by automation, not replaced by it.

Why This Matters

For UK organisations, the implications are immediate. The report shows that AI-generated code is not a future risk but a current operational issue already affecting production environments.

As Kevin Curran, Professor of Cybersecurity at Ulster University, says: “This demonstrates the slim thread which at times holds systems together, and highlights the need to properly allocate resources to cybersecurity.”

Aikido’s findings also underline the importance of developer education and clear accountability. Matias Madou, CTO at Secure Code Warrior, wrote that “in the AI era, security starts with developers. They are the first line of defence for the code they write, and for the AI that writes alongside them.”

For businesses already navigating compliance regimes such as the UK NCSC’s Cyber Essentials or ISO 27001, this means treating AI-generated code as a separate risk class requiring its own testing and review procedures.
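
As one hedged illustration of what a separate testing and review procedure might look like in practice, the sketch below is a minimal pre-merge gate written in Python: if any commit in the merge range carries an “AI-Assisted: true” trailer, the gate refuses to pass unless a security scan artefact exists and reports no unresolved high-severity findings. The trailer and the security-scan.json artefact are conventions invented for this sketch, not requirements of Cyber Essentials, ISO 27001 or Aikido’s report.

```python
#!/usr/bin/env python3
"""Illustrative pre-merge gate: AI-assisted commits require a clean security scan.

The "AI-Assisted:" commit trailer and the security-scan.json artefact are
hypothetical conventions used only for this sketch.
"""
import json
import subprocess
import sys
from pathlib import Path


def commits_in_range(base: str, head: str) -> list[str]:
    """Return the commit hashes between base and head."""
    out = subprocess.run(
        ["git", "rev-list", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def is_ai_assisted(commit: str) -> bool:
    """True if the commit message carries an 'AI-Assisted: true' trailer."""
    message = subprocess.run(
        ["git", "show", "-s", "--format=%B", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(line.strip().lower() == "ai-assisted: true"
               for line in message.splitlines())


def main() -> int:
    base, head = sys.argv[1], sys.argv[2]  # e.g. origin/main HEAD
    ai_commits = [c for c in commits_in_range(base, head) if is_ai_assisted(c)]
    if not ai_commits:
        return 0  # no AI-assisted changes, nothing extra to enforce

    report = Path("security-scan.json")  # hypothetical scanner output
    if not report.exists():
        print("AI-assisted commits found but no security scan artefact present.")
        return 1

    findings = json.loads(report.read_text()).get("high_severity", [])
    if findings:
        print(f"Blocking merge: {len(findings)} high-severity findings unresolved.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In a CI pipeline the script would run before merge, for example as python ai_gate.py origin/main HEAD, with the scan artefact produced by whichever scanner the team already uses.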

Criticisms And Challenges

While Aikido’s report is one of the most comprehensive of its kind, it is not without its critics. For example, some security analysts argue that “one in five breaches” may overstate the influence of AI-generated code because correlation does not prove causation. Many breaches involve complex attack chains where AI code may only play a small role.

Others have questioned the representativeness of the sample. For example, the survey focused primarily on organisations already experimenting with AI in production, which may naturally skew toward higher exposure. Small or less digitally mature companies, where AI coding tools are still limited to pilot use, may experience fewer issues.

There are also some methodological challenges. For example, measuring what qualifies as “AI-generated” can be difficult, particularly when developers use AI assistants to autocomplete small code segments rather than entire functions. Attribution of vulnerabilities can therefore be subjective.

That said, even many of the sceptics agree that the report captures a growing and genuine concern. Independent findings from Veracode, OX Security and CSET all point in the same direction, i.e., that AI-generated code introduces new risks that traditional security pipelines were never designed to manage.

The challenge for developers and CISOs alike is, therefore, to close that gap before AI coding becomes the default, not the exception. As the technology matures, the balance between innovation speed and security assurance will define how safely businesses can harness AI’s potential without repeating the mistakes of early adoption.

What Does This Mean For Your Business?

The findings appear to point to an industry racing ahead faster than its safety systems can adapt. AI coding tools have clearly shifted from experimental to mainstream, yet governance and testing practices are still catching up. The evidence suggests that while automation can improve productivity, it cannot yet replicate the depth of human reasoning needed to identify design-level flaws or assess real-world attack paths. That gap between capability and control is where today’s vulnerabilities are being born.

For UK businesses, this raises practical questions about oversight and responsibility. Many already face pressure to adopt AI for competitive reasons, yet the report shows that without strong testing regimes and clear accountability, the risks can outweigh the benefits. In particular, financial services, healthcare and public sector organisations, which handle sensitive data and operate under strict compliance frameworks, will need to ensure that AI-generated code goes through the same, if not stricter, scrutiny as any other form of software.

Developers, too, are being asked to operate within new boundaries. The growing reliance on generative tools means the traditional model of code review and approval is no longer sufficient. UK companies may now need to invest in dedicated AI audit trails, tighter version tracking and security validation that can distinguish between human and machine-written code. The evidence from Aikido’s report also suggests that integrated platforms, where developer and security functions work together, can yield better results than fragmented tool stacks, making collaboration a critical priority.
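
As a further hedged sketch of the “AI audit trail” idea, the Python script below reuses the hypothetical “AI-Assisted: true” commit trailer from the earlier example and tallies, per file, how many recent commits were machine-assisted, so that heavily AI-touched files can be routed to stricter review. The convention is invented for illustration; it is not a feature of git, nor anything prescribed by Aikido’s report.

```python
#!/usr/bin/env python3
"""Illustrative AI audit trail: per-file counts of AI-assisted commits.

Relies on the same hypothetical "AI-Assisted: true" commit trailer as the
earlier sketch; nothing here is a standard git or vendor feature.
"""
import json
import subprocess
from collections import defaultdict


def recent_commits(limit: int = 200) -> list[str]:
    """Most recent commit hashes on the current branch."""
    out = subprocess.run(
        ["git", "rev-list", f"--max-count={limit}", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def commit_details(commit: str) -> tuple[bool, list[str]]:
    """Return (ai_assisted, files_touched) for one commit."""
    out = subprocess.run(
        ["git", "show", "--name-only", "--format=%B%x00", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    message, _, files_block = out.partition("\x00")
    ai_assisted = any(line.strip().lower() == "ai-assisted: true"
                      for line in message.splitlines())
    files = [f for f in files_block.splitlines() if f.strip()]
    return ai_assisted, files


def build_audit_trail() -> dict[str, dict[str, int]]:
    """Tally total and AI-assisted commit counts for every touched file."""
    trail: dict[str, dict[str, int]] = defaultdict(
        lambda: {"total": 0, "ai_assisted": 0}
    )
    for commit in recent_commits():
        ai_assisted, files = commit_details(commit)
        for path in files:
            trail[path]["total"] += 1
            if ai_assisted:
                trail[path]["ai_assisted"] += 1
    return dict(trail)


if __name__ == "__main__":
    # Files with a high AI-assisted share are candidates for stricter review.
    print(json.dumps(build_audit_trail(), indent=2))
```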

For other stakeholders, including regulators and insurers, the implications are equally clear. For example, regulators will need to consider whether existing standards, such as Cyber Essentials, adequately address AI-generated components. Insurers may need to start factoring the presence of AI-written code into risk assessments and premiums, especially if breach attribution becomes more traceable.

There is also a wider social and ethical dimension to consider here. For example, if AI-generated code becomes a leading cause of breaches, the question of accountability will soon reach the boardroom and, potentially, the courts. The current ambiguity over who is at fault, i.e., the developer, the CISO or the AI vendor, will not remain sustainable for long. Policymakers may be forced to define clearer lines of liability, particularly where generative AI is being deployed at scale in safety-critical systems.

The overall picture that emerges here is not one of panic but of adjustment. The technology is here to stay, and most industry leaders still believe it will eventually write secure, reliable code. The challenge lies in getting from here to there without compromising trust or resilience in the process. For now, it seems the safest path forward is not to reject AI in development, but to treat it with the same caution as any powerful, untested colleague: valuable, but never unsupervised.