The current impact of AI on cyber incidents for everyday businesses
In day-to-day breach response, it is rarely possible to definitively attribute the use of AI by threat actors. In practice, attacks continue to present through familiar vectors – most commonly phishing and/or the exploitation of weak access controls, particularly in relation to remote access and VPN configurations. It is unclear to what extent, if at all, the identification and exploitation of these vulnerabilities have been assisted by AI on a case-by-case basis.
This lack of clarity is not just a technical issue. It means that the involvement of AI in individual breaches will not, at the moment, tend to be a factor in insurance coverage decisions, regulatory assessments or disputes. These continue to turn on control failures rather than the sophistication of the attack.
Against this backdrop, headlines about the growing sophistication of AI-enabled cyber-attacks can sound alarmist, or relevant only at the level of nation-state infrastructure – not to everyday businesses and their insurers.
Recent legislative and regulatory trends have also shown a particular focus on infrastructure. On 12 November 2025, the Cyber Security and Resilience Bill (the CS&R Bill) was introduced to Parliament. The CS&R Bill builds on the Network and Information Systems Regulations 2018 by extending their scope to managed service providers and data centres, introducing mandatory incident reporting, and granting powers to impose more detailed security requirements through secondary legislation.
In parallel, the Government Cyber Action Plan aims to establish a new Government Cyber Unit within the Department for Science, Innovation and Technology and to set clearer minimum security standards across the public sector.
A focus on infrastructure and the public sector is understandable – if the level of cyber-attacks is increasing, a sensible starting point is to ensure protection of assets that have the highest risk of impact if breached. It is reasonable to prioritise concentration risk over frequency risk.
However, this focus leaves a potential structural gap. As currently framed, the CS&R Bill would not apply to large parts of the private sector. Yet recent events have already demonstrated that cyber-attacks on a wide range of businesses can generate systemic disruption.
Set against that uncertainty, breach response experience consistently shows: social engineering is becoming more convincing; vulnerabilities in VPN and firewall configurations are being exploited en masse; and potential supply chain impact across sectors is increasing. The proliferation of AI is only likely to reinforce and accelerate this general trend. Whether it is proven on a case-by-case basis or not, AI-driven exploitation increases the likelihood that such vulnerabilities are identified and acted upon quickly and at scale.
The practical effect is that, at present, for everyday businesses, AI is best understood as an accelerant of existing attack methods rather than a distinct category of cyber risk.
The result is that, now more than ever, there is a need for a strategic approach to cyber risk in all businesses – not just those that make up infrastructure. Areas of focus for those looking to evaluate cyber risk should include:
- Organisational cyber governance and operational resilience. This includes board-level engagement supported by contemporaneous records of decision-making, clear incident response frameworks, and tested escalation procedures.
- A focus on baseline controls – those identified by the AI Security Institute in recent commentary on AI-related cyber risks include timely patching, robust access management and comprehensive logging. Multi-factor authentication began as an industry recommendation and has progressively become an expectation.
- Alignment with government recommendations, including guidance from the National Cyber Security Centre and schemes such as Cyber Essentials.
- Assessing and understanding supply chain risk – particularly in relation to contractual allocation of cyber responsibilities, notification obligations and indemnity provisions.
- Record-keeping as to what was known, what was decided, and what was implemented to ensure cyber resilience before any incident occurs.
These are not headline-grabbing suggestions. They do not speak to the potential breakdown of infrastructure as malicious AI agents run wild. But our experience suggests that the near-term impact of AI for day-to-day businesses is unlikely to be a new category of cyber incident. It is more likely to be existing points of failure being exploited more quickly and more often. For now, covering these key, not-so-glamorous, steps seems to be the best way of staying ahead of increasing AI-related cyber risk.