Cyber_Bytes Issue 72

Published on 04 March 2025

Welcome to Cyber_Bytes, our regular round-up of key developments in cyber, tech and evolving risks.

RPC Cyber App: Breach Counsel at Your Fingertips

As cyber-attacks and follow-on litigation continue to be board-level issues for organisations worldwide, the RPC Cyber App provides a one-stop-shop resource for cyber breach assistance and pre-breach preparedness. Alongside information about RPC's cyber-related expertise, the app contains guidance on preventing common incidents and access to our ongoing cyber market insights.

RPC Cyber can be downloaded for free from the Apple App Store or Google Play Store.

Government publishes response to its Call for Views on Cyber Security of AI

On 15 May 2024, the Department for Science, Innovation and Technology (DSIT) published its Call for Views on 'Cyber Security of AI', which outlined a proposed 'two-part intervention' approach and 12 principles aimed at enhancing and maintaining cyber security standards for AI technology.

Following receipt of 123 responses to the Call for Views, the DSIT has now published a response paper which summarises respondents' key views and outlines the next steps. The salient points of the publication are that:

  • 80% of respondents supported the 'two-part intervention' approach, which involves first developing a voluntary Code of Practice (Code) and then using that Code as the basis for a global standard focused on baseline cyber security requirements for AI models and systems.

  • There was overwhelming support for the 12 principles outlined in the Code, which include "Securing your Infrastructure" (Principle 6) and "Conduct appropriate testing and evaluation" (Principle 9).

  • Respondents noted that more detail and guidance are needed to implement the Code, and that the existing market may not provide sufficient skills or capabilities to do so.

The DSIT states it has taken the feedback on board and used it to update the Code and create new implementation guidance. The DSIT will now take the Code and guidance to the European Telecommunications Standards Institute (ETSI) to develop a new global standard focused on baseline cyber security requirements, in line with the two-part approach set out above.

Click here to read the DSIT's full response.

Amendments to the Data (Use and Access) Bill and comments from the ICO

The Data (Use and Access) Bill (DUA), which was introduced in October 2024, has recently passed from the House of Lords (HoL) to the House of Commons (HoC). During the DUA's passage through the HoL, several key changes were made to the Bill, including:

  • An amendment to Article 25 of the UK GDPR: Article 25 currently requires controllers to implement technical and organisational measures to ensure that only necessary personal data is processed. The proposed amendment would require controllers handling children's personal data to take into account newly introduced "higher protection matters", which concern how best to protect and support children, when implementing Article 25 measures.

  • An amendment to the Privacy and Electronic Communications Regulations 2003 (PECR) which extends the "soft opt-in" exemption for text and email marketing communications to charities.

  • A requirement for AI developers and operators of web crawlers to provide, on request, transparency information demonstrating that UK copyright law is being complied with when training AI models.

  • An amendment to the Sexual Offences Act 2003 which would introduce a criminal offence for creating sexual deepfakes without consent or reasonable belief of consent.

  • A requirement for the ICO to introduce Codes of Practice relating to AI automated decision making.

  • A requirement for the ICO to regulate transparency for web crawler use.

The ICO has responded largely positively to these amendments, whilst noting it would like clarity on the policy intent behind the "higher protection matters" referred to in the first bullet point above. The ICO has stated it looks forward to discussing the changes which concern its new areas of responsibility with the government, so it can "properly assess and account for the implications".

Click here to see the latest version of the DUA and click here to read the ICO's response.

Lecturers' trade union obtains default judgment and injunction against (unknown) threat actors

In University College Union v Persons Unknown [2025] EWHC 192 (KB), the High Court has granted summary judgment and issued a final injunction against a group of unknown threat actors following a ransomware incident. The injunction prohibits the threat actors from publishing, disclosing or using the stolen data, and orders them to deliver up or delete the information and to provide a witness statement confirming compliance.

This judgment followed a ransomware attack which occurred in August 2024 and targeted University College Union (UCU), a lecturers' trade union. The incident saw the unknown threat actor group extract and publish sensitive information relating to UCU's employees and third parties on the deep and dark web. Shortly after the incident, UCU applied for an interim injunction, which was granted. As there was no engagement from the unknown persons, the Court has now issued a final injunction.

This decision provides a useful example of the process for obtaining injunctions against unknown parties, from interim relief through to a final injunction.

Click here to read more from ICLG News.

Google releases report on Adversarial Misuse of Generative AI

Google's threat intelligence group has recently released a report on misuse of its generative AI model (Gemini) by bad actors. Some of the key takeaways from the report are that:

  • Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities.

  • Advanced Persistent Threat (APT) groups associated with government-backed hacking activity used Gemini to support several phases of the attack lifecycle.

  • Information Operations actors, which attempt to influence online audiences in a deceptive, coordinated manner, used Gemini for: research; content generation, including developing personas and messaging; translation and localisation; and finding ways to increase reach.

  • Gemini's safety and security measures restricted content that would enhance adversarial capabilities.

Google says it is committed to maximising the positive benefits of AI for society while addressing the challenges, and that it will continue to be guided by its AI Principles to ensure robust security measures. Google also highlights its Secure AI Framework, which consists of six key elements aimed at keeping AI systems safe and secure.

Click here to read Google's full report.

Qualified one-way costs shifting (QOCS) applies to wrongful disclosure of private information claim

In Birley and another (personal representatives of the Estate of Ms Rosa Taylor) v Heritage Independent Living Ltd and others [2025] EWCA Civ 44, the Court of Appeal held that QOCS applied to a mental health injury claim arising from a data breach.

The QOCS rules apply to personal injury claims and prevent Defendants from enforcing their litigation costs against unsuccessful Claimants. Introduced as part of the 2013 Jackson Reforms, QOCS aims to remove the need for After-the-Event (ATE) insurance premiums and protects Claimants from adverse costs orders, which many previously argued deterred individuals from bringing injury claims. In Birley, QOCS applied even though the costs provisions for media claims, which allow for the recovery of ATE insurance premiums and success fees, were also applicable.

The judgment is significant as it clarifies that (i) QOCS applies to all personal injury claims regardless of how the injury was caused, and (ii) QOCS and the media claims provisions which allow recovery of ATE premiums and success fees can operate in tandem.

Click here to read more from Casemine.

Stay connected and subscribe to our latest insights and views 

Subscribe Here