<rss xmlns:a10="http://www.w3.org/2005/Atom" version="2.0" xmlns:authors="https://www.rpclegal.com/people/" xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Ai</title><link>https://www.rpclegal.com/rss/ai/</link><description>RPC Ai RSS feed</description><language>en</language><item><guid isPermaLink="false">{0D39939D-F909-425F-9FD5-B58B1128E2A2}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/generative-ai-addressing-copyright/</link><title>Generative AI – addressing copyright</title><description><![CDATA[<p>When it comes to the interaction of AI and IP rights, bar a flurry of activity surrounding the inevitable outcome by the courts in the Thaler, Dabus case (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/ip/the-uksc-rules-that-ai-cannot-be-an-inventor/" target="_blank">here</a>) and the Court of Appeal's ruling on the potential for exclusion from patentability of artificial neural networks in the Emotional Perception case, most attention has been focused on copyright issues.  The main potentially thorny issues have all been extensively covered by the mainstream media. </p>
<p>As a quick recap, the issues are whether:</p>
<ul>
    <li>the way foundation models (FM) are trained using works from the internet infringes the copyright in the works of content creators such as authors, artists and software developers</li>
    <li>the outputs of FM infringe the copyright of content creators </li>
    <li>AI generated works are protectable</li>
</ul>
<p><strong>The problem with training data</strong></p>
<p>Copyright is a right that in the UK and EU subsists automatically when certain requirements are met. Copyright infringers must be found to have copied the whole of the copyright work, or part of it where that part is regarded as ‘substantial’. Both proof of copying from a copyright work and similarity are required to prove infringement.</p>
<p>Content creators such as news providers, authors, visual content agencies and other creative professionals allege that their work is being unlawfully used to train AI models. Some use of this material is expressly authorised, for example, Associated Press has previously announced that OpenAI has taken a licence of part of its text archive. However, the main thrust of the allegations by content creators is that millions of texts, parts of texts and other literary material and images have been scraped from publicly available websites without consent.  This scraped content used as an input to train and develop AI models is alleged to infringe their copyright and often their database rights. </p>
<p>The case of <em>Getty Images (US) Inc v Stability AI Ltd</em>, which went to trial in June 2025, is the most prominent case making these kinds of allegations in the UK (there is also a corresponding US action). Setting aside the arguments on territorial extent raised in that case (i.e. whether the training and development of Stable Diffusion took place within the UK or in another jurisdiction), the allegations of copyright and database right infringement relevant here were that Stability AI:</p>
<ul>
    <li>has downloaded and stored Getty Image's copyright works (necessary for encoding the content and other steps in the training process) on servers or computers in the UK during the development and training of Stable Diffusion</li>
    <li>infringed the communication to the public right by making Stable Diffusion available in the UK, where Stable Diffusion provides the means using text and/or image prompts to generate synthetic images that reproduce the whole or a substantial part of the copyright works.</li>
</ul>
<p>These claims of copyright and database right infringement formed a sizable chunk of the proceedings. They were withdrawn during the trial which ultimately ended up focusing on trade mark infringement. However, the copyright and database right allegations provide information on how these types of claims can be argued. In its claims, Getty alleged that Stable Diffusion was trained using subsets of the LAION-5B dataset, a dataset comprising 5.85 billion CLIP-filtered (Contrastive Language-Image Pre-training) image-text pairs, created by scraping links to photographs and videos, together with associated captions, from the web, including from Pinterest, WordPress-hosted blogs, SmugMug, Blogspot, Flickr, Wikimedia, Tumblr and the Getty Images websites. The LAION-5B dataset comprises around 5 billion links. The LAION subsets together comprise approximately 3 billion image-text pairs from the LAION-5B dataset. At the time of filing its original claim, Getty had identified around 12 million links in the LAION subsets to content on the Getty Images websites.</p>
<p>The training and use of FM has resulted in intense debate on the infringement questions and the adequacy of legislation and/or guidance on licensing. Other similar ongoing legal actions (all US based) include:</p>
<ul>
    <li>the New York Times action against OpenAI and Microsoft in the US, for unlawful use of journalistic (including behind paywall) content to train LLMs </li>
    <li>a class action filed against OpenAI by the Authors Guild (now consolidated with two other actions) and some big-name authors including George RR Martin, John Grisham, and Jodi Picoult, alleging that the training of ChatGPT infringed the copyright in the authors’ works of fiction</li>
    <li>in September 2025, AI company Anthropic agreed to pay $1.5bn to settle a US copyright class action over its use of pirated texts to train its AI model, Claude. The action was brought by authors who claimed that Anthropic had downloaded 465,000 books and other texts from “pirated websites." The settlement is awaiting court approval.</li>
</ul>
<p>One of the issues for publishers and content creators is that they are not being rewarded for the use of their content to train AI models and that use of LLMs such as ChatGPT disrupts the business model of consumers who search online via a search engine for content, no longer being directed to publications on their websites where the traffic attracts revenue made through digital advertising. This is because a search for digital content on many AI systems results in a direct response that stays within the LLM or image generation platform even though that response may be drawing from the same content that would have been revealed in search results in the search engine example. </p>
<p>In 2023, OpenAI provided written evidence to a UK committee inquiry into large language models including an explanation of its position on the use of copyright protected works in LLM training data. It explained that its LLMs, including the models that power ChatGPT, are developed using three primary sources of training data: (1) information that is publicly accessible on the internet, (2) information licensed from third parties (such as Associated Press), and (3) information from users or their human trainers. OpenAI acknowledged because "copyright today covers virtually every sort of human expression – including blog posts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials". OpenAI stressed that it was for creators to exclude their content from AI training and that it has provided a way to disallow OpenAI's “GPTBot” web crawler access to a site, as well as an opt-out process for creators who want to exclude their images from future DALL∙E training datasets. It also mentioned its partnerships with publishers like Associated Press. </p>
<p>In January 2024, in what might be interpreted as the beginning of a shift by AI providers, OpenAI's CEO Sam Altman at the World Economic Forum, Davos said that OpenAI was open to deal with publishers and that there's a need for "new economic models" between publishers and generative AI models. Licensing deals now appear to be becoming more prevalent. Examples include collaborations between OpenAI and each of: Condé Nast, Guardian Media Group and the Financial Times.  </p>
<p>When licensing negotiations break down there is a risk of legal action being taken, such as that reportedly being taken by Mumsnet against OpenAI. In its complaint against OpenAI, Mumsnet claims "scraping without permission is an explicit breach of our terms of use, which clearly state that no part of the site may be distributed, scraped or copied for any purpose without our express approval." As one of the few cases involving allegations of unlawful website scraping by AI developers in the UK, the outcome will be important in developing the law in this area – if the case reaches trial.  </p>
<p><strong>Training data issue resolution – UK government </strong></p>
<p><strong></strong>Since our March 2023 <a rel="noopener noreferrer" href="https://www.rpc.co.uk/perspectives/ip/generative-ai-and-intellectual-property-rights-the-uk-governments-position/" target="_blank">Generative AI and intellectual property rights</a> piece covering the UK's current position and reforms relating to a proposed commercial text and data mining exception, there has been no significant legal milestone reached on text and data mining in the UK:</p>
<ul>
    <li>In January 2024, the previous government's Culture, Media and Sport Committee confirmed that the government is no longer proceeding with its original proposal for a broad copyright exception for TDM. </li>
    <li>A voluntary code of practice (promised by the Intellectual Property Office "by summer 2023") to provide guidance to support AI firms in accessing copyright protected works as an input to their models and to provide protections (e.g. watermarking) on generated output,  did not materialise. </li>
    <li>In the February 2024 <a rel="noopener noreferrer" href="https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response" target="_blank">response</a> to its consultation on the 2023 AI whitepaper, the previous government acknowledged that the stalemate between AI companies and rights holders on the voluntary code of practice led the IPO to return the task of producing the code to the Department for Science Innovation and Technology (DSIT). DSIT and DCMS then reengaged with the AI and rights holder sectors without resolution of this complex global issue. </li>
    <li>In December 2024, the government launched a consultation proposing as a "preferred option" a new commercial data mining exception to support use at scale of a wide range of content by AI developers where rights have not been reserved. The exception would provide a mechanism for right holders to ‘opt-out’ (individually or collectively) and supporting measures on transparency to ensure developers are transparent about the works their models are trained on. Rights reservation would be by accessible and, where possible, standardised machine-readable formats (examples include "robots.txt," TDMRep, the International Standard Content Code (ISCC), and C2PA). The consultation closed on 25 February 2025 and the government is reviewing more than 11,500 responses. </li>
    <li>In January 2025, the House of Lords introduced a series of detailed amendments to the Data (Use and Access) Bill, during the bill's Report stage.  During the debate, the amendments were described by Baroness Kidron as designed to set out how a copyright regime would work: "<em>Amendment 61 would ensure that all operators of web crawlers must comply with UK law if they are marketed in the UK. Amendments 62 and 63 would require operators to be transparent about their identity and purpose, and allow creatives to understand if their content had been stolen. Amendment 64 would give enforcement powers to the ICO and allow for a private right of action by copyright holders. Amendment 44A would require the ICO to report on its enforcement record. Finally, Amendment 65 would require the Secretary of State to review technical solutions that might support a strong copyright regime.</em>" [Hansard]</li>
    <li>The Data (Use and Access) Bill was subject to a considerable amount of Parliamentary ping pong between the Lords and the Commons with the revised amendments each time being stripped out by the government. The Data (Use and Access) Act 2025, as enacted, does not set out a copyright and AI regime, however Part 7 of the Act requires the government to publish an economic impact assessment considering each of the four policy options described in the Copyright and AI consultation (s 135). The assessment must consider the impact on both copyright owners and AI system developers and users.  The government must also prepare and publish a report on the use of copyright works in the development of AI systems.  The report must include proposals on technical measures and standards that may be used to control the use of copyright works to develop AI systems and the accessing of copyright works for that purpose eg by web crawlers (s 136).</li>
</ul>
<p>Since March 2023, for large developers, collaboration through structured voluntary licensing agreements has continued to grow steadily and it has been reported that the government, as part of its plan, is looking to provide support to licensing markets for all sizes of developers.     </p>
<p><strong>In the EU</strong></p>
<p>The EU AI Act entered into force on 1 August 2024 and is largely applicable by August 2026. The text provides for general-purpose AI (GPAI) systems such as ChatGPT, and the GPAI models they are based on (such as OpenAI's GPT-4), to have to adhere to transparency requirements.  These include drawing up technical documentation explaining how the model has been trained, how it performs, how it should be used and its energy use, complying with EU copyright law (in particular to obtain authorisation from or enable content owners to opt out from the text and data mining of their content as provided for under the EU DSM Copyright Directive) and disseminating "sufficiently detailed" summaries about the content used for training GPAI including its provenance and curation methods. A voluntary code of practice designed to help GPAI model providers who sign up to demonstrate compliance with their obligations under the EU AI Act, was published and approved in the summer of 2025. The code consists of three chapters: Transparency and Copyright, both addressing all providers of general-purpose AI models, and Safety and Security, relevant only to a limited number of providers of the most advanced models.</p>
<p>Notably, on the question of GPAI model providers identifying and respecting opt out rights, this will be done under the EU AI Act using methods including "state of the art" technologies and: "<em>Any provider placing a general-purpose AI model on the Union market should comply with this obligation, <strong>regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place</strong>. This is necessary to ensure a level playing field among providers of general-purpose AI models where no provider should be able to gain a competitive advantage in the EU market by applying lower copyright standards than those provided in the Union.</em>" (EU AI Act, Recital 106)</p>
<p>The detailed summaries should be comprehensive enough to allow rights holders to be able to exercise and enforce their rights, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used. The EU AI office, responsible for the implementation and enforcement of the EU AI Act, will provide a template. </p>
<p>The EU AI Act obligations relating to providers of GPAI models' compliance with EU copyright laws are applicable from August 2025 (for new models, and August 2027 for GPAI models placed on the market before 2 August 2025). However, the exception for text and data mining provided for under the EU DSM Copyright Directive that allows content owners to opt out from bulk scraping of their online content is proving challenging to apply in practice. There is currently no standard protocol to enable machine readable "opt out" or to expressly reserve rights. </p>
<p>To assist with this issue, the European Commission is currently working on developing a licensing market for the training data used to train AI systems like ChatGPT.  Some copyright owners such as Sony Music, the Society of Authors and the Creators’ Rights Alliance are pre-empting the EU AI Act (while providing a warning within the UK) by publicly reserving their rights in relation to text and data mining via a statement on their website and/or in a letter to various companies including AI developers.</p>
<p><strong>The output of FMs – works created by users</strong></p>
<p>As well as the possibility of training data related copyright infringements explained above, the outputs of AI FM models such as ChatGPT or Midjourney generated as the result of user prompts may also provide grounds for copyright infringement of third party original works. </p>
<p>For example, if you are the author of an artwork and find a markedly similar copy has been generated by a user of a FM model without your permission, you will have to make your case on copyright infringement. In showing infringement, as a first step, proof of copying of features from the protected work is required.  Then the question is whether what has been taken constitutes all or a substantial part of the copyright work. A challenge with user generated works will be to show that the output work was derived from the original copyright protected work (did the AI provider introduce it in its training data, was it introduced during the fine tuning process or did a user provide it as one of its prompts). The EU AI Act somewhat provides for this (see above) by allowing the copyright owner to see if their work is contained in a particular data set. The UK has also indicated that transparency requirements are likely to be coming.</p>
<p>In this scenario the users of FM (and/or AI providers) face potential liability for copyright infringement.  These claims may be low value, and challenging to prove for rights holders so this might be a low risk, but it nevertheless produces risk for AI users and providers. Consequently, a number of key players (Microsoft, Google and OpenAI) now provide offers to indemnify certain (mainly enterprise) users if they are subsequently sued for copyright infringement. Microsoft's Customer Copyright Commitment states that if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, they will defend the customer and pay the amount of any adverse judgments or settlements that result from the legal proceedings, as long as the customer has used the guardrails and content filters built into their products. OpenAI's "Copyright Shield" promises to step in and defend their customers, and pay the costs incurred, if they face claims of copyright infringement. This applies to generally available features of ChatGPT Enterprise and their developer platform. Note: some of these indemnities may include carve-outs and liability caps. </p>
<p><strong>Protection for the outputs of AI FM models</strong></p>
<p>Most public facing generative AI models are accessed via a platform or website and are therefore subject to website terms and conditions. ChatGPT states that: "<em>Ownership of Content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.</em>" </p>
<p>What is actually being assigned is an important consideration for businesses and individuals. For example, there seems to be high use of AI FM in the advertising sector. If you, as a user, have produced marketing materials with the assistance of a FM you are likely to want to prevent their unauthorised use by third parties as a normal part of your business' brand/content protection strategy.  This would not normally be problematic if they are created without AI FM assistance – then the copyright likely belongs to the company concerned as the employer of the employee author. However, most jurisdictions, including the UK, require that copyright protection only applies to works created by human authors and if the work is solely computer generated there may be a subsistence issue. This is because authorship and ownership of copyright is tied into the concept of "originality", that is, protection is only extended to works categorised as "original literary, dramatic, musical or artistic works". The work may of course be attributed to the developer of the FM in circumstances where the user's role is confined to a single simple prompt and the FM has been finely tuned to produce marketing materials – in this situation there are likely to be terms that assign the developer's rights in works to the end user. </p>
<p>In this scenario, the section of the Copyright Designs and Patents Act 1988 (CDPA 1988) that grants protection to computer-generated works (CGWs) is often raised. Section 9(3) states that the author in the case of CGW is the person by whom "the arrangements necessary for the creation of the work are undertaken". The problem with this section relates to the date of the Act: 1988. What the legislators may have had in mind at this time is something like the use of computers as digital aids in cartography. Now, however, this section is being applied to GenAI models. <br />
<br />
However, since 1988, there have been some developments when it comes to "originality". The test for originality has changed and now to be an original work, works must be "the author's own intellectual creation" whereby an author has been "able to express their creative abilities in the production of the work by making free and creative choices so as to stamp the work created with their personal touch…" That definition is not very CGW/AI friendly. Where works are created by entering prompts into a GenAI system (i.e. using it merely as a tool) there would be room to apply the "author's own intellectual creation" originality test. However, literary, dramatic, musical or artistic CGW are more problematic under this originality test if a work has no human author. Therefore, in order to claim authorship and ownership, squeezing out the human element may be the best approach until clarification is provided from the UK government or the courts. The position is not clear cut though and if you are creating content for a client, the Ts & Cs relied on historically for human authored work may not be effective in transferring absolute ownership. </p>
<p>In November 2023, a Chinese court did find that an AI generated image, created using Stable Diffusion, satisfied the requirements of "originality" and was capable of copyright protection.  The Beijing Internet Court found that the image had been created (using AI as a tool) in a way that reflected the ingenuity and original intellectual investment of human beings. In February 2023 in a US case concerning authorship of the images contained within Kristina Kashtanova's work: Zarya of the Dawn, the US Copyright Office took a different approach. The images were developed using the generative AI tool Midjourney. By its own description Midjourney does not interpret prompts as specific instructions to create a particular expressive result (Midjourney does not understand grammar, sentence structure, or words like humans) it instead converts words and phrases “into smaller pieces, called tokens, that can be compared to its training data and then uses them to generate an image. The US copyright office decided that the images claimed were not original works of authorship protected by copyright because they were produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author (the designer modifying the images produced by the AI model using subsequent prompts and inputs was not sufficient to fulfil the requirement for human creativity). They were therefore removed from the U.S. Copyright Office register as not copyrightable. Because of the significant distance between what a user may direct Midjourney to create and the visual material Midjourney actually produces, the U.S. Copyright Office found that Midjourney users are deemed to lack sufficient control over generated images to be treated as the “master mind” behind them.</p>
<p>In January 2025, the US Copyright Office registered "A Single Piece of American Cheese", an image created entirely with AI generated material via a technique called “inpainting” (the process of selectively modifying or regenerating parts of an image while maintaining consistency with the surrounding elements). The work was initially rejected but was later registered on the basis of active “selection, coordination, and arrangement of material generated by artificial intelligence” into a unified composition. In part the decision seems to reflect the amount and quality of evidence of creative decision making in creating the image. In its application, the registrant highlighted: the multi-stage process, iterative refinement and creative decision making elements in creating the work.   <br />
<br />
<strong>How might these issues impact those developing and interacting with FM?</strong></p>
<p>This is a complex area and tricky to navigate in a commercial setting given that the UK and many other jurisdictions are failing to reach a position and provide guidance.  However, it is worth keeping up to date on and in mind the following live issues: </p>
<ul>
    <li>the risk surrounding the use of data sets  </li>
    <li>that there may be a need to disclose the contents of data sets under the EU AI Act and the UK framework</li>
    <li>who owns FM outputs? Is an AI output as protectable as a human created work? </li>
</ul>
<div> </div>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></description><pubDate>Mon, 22 Sep 2025 14:48:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Ciara Cullen, Joshy Thomas, Rory Graham</authors:names><content:encoded><![CDATA[<p>When it comes to the interaction of AI and IP rights, bar a flurry of activity surrounding the inevitable outcome by the courts in the Thaler, Dabus case (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/ip/the-uksc-rules-that-ai-cannot-be-an-inventor/" target="_blank">here</a>) and the Court of Appeal's ruling on the potential for exclusion from patentability of artificial neural networks in the Emotional Perception case, most attention has been focused on copyright issues.  The main potentially thorny issues have all been extensively covered by the mainstream media. </p>
<p>As a quick recap, the issues are whether:</p>
<ul>
    <li>the way foundation models (FM) are trained using works from the internet infringes the copyright in the works of content creators such as authors, artists and software developers</li>
    <li>the outputs of FM infringe the copyright of content creators </li>
    <li>AI generated works are protectable</li>
</ul>
<p><strong>The problem with training data</strong></p>
<p>Copyright is a right that in the UK and EU subsists automatically when certain requirements are met. Copyright infringers must be found to have copied the whole of the copyright work, or part of it where that part is regarded as ‘substantial’. Both proof of copying from a copyright work and similarity are required to prove infringement.</p>
<p>Content creators such as news providers, authors, visual content agencies and other creative professionals allege that their work is being unlawfully used to train AI models. Some use of this material is expressly authorised, for example, Associated Press has previously announced that OpenAI has taken a licence of part of its text archive. However, the main thrust of the allegations by content creators is that millions of texts, parts of texts and other literary material and images have been scraped from publicly available websites without consent.  This scraped content used as an input to train and develop AI models is alleged to infringe their copyright and often their database rights. </p>
<p>The case of <em>Getty Images (US) Inc v Stability AI Ltd</em>, which went to trial in June 2025, is the most prominent case making these kinds of allegations in the UK (there is also a corresponding US action). Setting aside the arguments on territorial extent raised in that case (i.e. whether the training and development of Stable Diffusion took place within the UK or in another jurisdiction), the allegations of copyright and database right infringement relevant here were that Stability AI:</p>
<ul>
    <li>has downloaded and stored Getty Image's copyright works (necessary for encoding the content and other steps in the training process) on servers or computers in the UK during the development and training of Stable Diffusion</li>
    <li>infringed the communication to the public right by making Stable Diffusion available in the UK, where Stable Diffusion provides the means using text and/or image prompts to generate synthetic images that reproduce the whole or a substantial part of the copyright works.</li>
</ul>
<p>These claims of copyright and database right infringement formed a sizable chunk of the proceedings. They were withdrawn during the trial which ultimately ended up focusing on trade mark infringement. However, the copyright and database right allegations provide information on how these types of claims can be argued. In its claims, Getty alleged that Stable Diffusion was trained using subsets of the LAION-5B dataset, a dataset comprising 5.85 billion CLIP-filtered (Contrastive Language-Image Pre-training) image-text pairs, created by scraping links to photographs and videos, together with associated captions, from the web, including from Pinterest, WordPress-hosted blogs, SmugMug, Blogspot, Flickr, Wikimedia, Tumblr and the Getty Images websites. The LAION-5B dataset comprises around 5 billion links. The LAION subsets together comprise approximately 3 billion image-text pairs from the LAION-5B dataset. At the time of filing its original claim, Getty had identified around 12 million links in the LAION subsets to content on the Getty Images websites.</p>
<p>The training and use of FM has resulted in intense debate on the infringement questions and the adequacy of legislation and/or guidance on licensing. Other similar ongoing legal actions (all US based) include:</p>
<ul>
    <li>the New York Times action against OpenAI and Microsoft in the US, for unlawful use of journalistic (including behind paywall) content to train LLMs </li>
    <li>a class action filed against OpenAI by the Authors Guild (now consolidated with two other actions) and some big-name authors including George RR Martin, John Grisham, and Jodi Picoult, alleging that the training of ChatGPT infringed the copyright in the authors’ works of fiction</li>
    <li>in September 2025, AI company Anthropic agreed to pay $1.5bn to settle a US copyright class action over its use of pirated texts to train its AI model, Claude. The action was brought by authors who claimed that Anthropic had downloaded 465,000 books and other texts from “pirated websites." The settlement is awaiting court approval.</li>
</ul>
<p>One of the issues for publishers and content creators is that they are not being rewarded for the use of their content to train AI models and that use of LLMs such as ChatGPT disrupts the business model of consumers who search online via a search engine for content, no longer being directed to publications on their websites where the traffic attracts revenue made through digital advertising. This is because a search for digital content on many AI systems results in a direct response that stays within the LLM or image generation platform even though that response may be drawing from the same content that would have been revealed in search results in the search engine example. </p>
<p>In 2023, OpenAI provided written evidence to a UK committee inquiry into large language models including an explanation of its position on the use of copyright protected works in LLM training data. It explained that its LLMs, including the models that power ChatGPT, are developed using three primary sources of training data: (1) information that is publicly accessible on the internet, (2) information licensed from third parties (such as Associated Press), and (3) information from users or their human trainers. OpenAI acknowledged because "copyright today covers virtually every sort of human expression – including blog posts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials". OpenAI stressed that it was for creators to exclude their content from AI training and that it has provided a way to disallow OpenAI's “GPTBot” web crawler access to a site, as well as an opt-out process for creators who want to exclude their images from future DALL∙E training datasets. It also mentioned its partnerships with publishers like Associated Press. </p>
<p>In January 2024, in what might be interpreted as the beginning of a shift by AI providers, OpenAI's CEO Sam Altman at the World Economic Forum, Davos said that OpenAI was open to deal with publishers and that there's a need for "new economic models" between publishers and generative AI models. Licensing deals now appear to be becoming more prevalent. Examples include collaborations between OpenAI and each of: Condé Nast, Guardian Media Group and the Financial Times.  </p>
<p>When licensing negotiations break down there is a risk of legal action being taken, such as that reportedly being taken by Mumsnet against OpenAI. In its complaint against OpenAI, Mumsnet claims "scraping without permission is an explicit breach of our terms of use, which clearly state that no part of the site may be distributed, scraped or copied for any purpose without our express approval." As one of the few cases involving allegations of unlawful website scraping by AI developers in the UK, the outcome will be important in developing the law in this area – if the case reaches trial.  </p>
<p><strong>Training data issue resolution – UK government </strong></p>
<p><strong></strong>Since our March 2023 <a rel="noopener noreferrer" href="https://www.rpc.co.uk/perspectives/ip/generative-ai-and-intellectual-property-rights-the-uk-governments-position/" target="_blank">Generative AI and intellectual property rights</a> piece covering the UK's current position and reforms relating to a proposed commercial text and data mining exception, there has been no significant legal milestone reached on text and data mining in the UK:</p>
<ul>
    <li>In January 2024, the previous government's Culture, Media and Sport Committee confirmed that the government is no longer proceeding with its original proposal for a broad copyright exception for TDM. </li>
    <li>A voluntary code of practice (promised by the Intellectual Property Office "by summer 2023") to provide guidance to support AI firms in accessing copyright protected works as an input to their models and to provide protections (e.g. watermarking) on generated output,  did not materialise. </li>
    <li>In the February 2024 <a rel="noopener noreferrer" href="https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response" target="_blank">response</a> to its consultation on the 2023 AI whitepaper, the previous government acknowledged that the stalemate between AI companies and rights holders on the voluntary code of practice led the IPO to return the task of producing the code to the Department for Science Innovation and Technology (DSIT). DSIT and DCMS then reengaged with the AI and rights holder sectors without resolution of this complex global issue. </li>
    <li>In December 2024, the government launched a consultation proposing as a "preferred option" a new commercial data mining exception to support use at scale of a wide range of content by AI developers where rights have not been reserved. The exception would provide a mechanism for right holders to ‘opt-out’ (individually or collectively) and supporting measures on transparency to ensure developers are transparent about the works their models are trained on. Rights reservation would be by accessible and, where possible, standardised machine-readable formats (examples include "robots.txt," TDMRep, the International Standard Content Code (ISCC), and C2PA). The consultation closed on 25 February 2025 and the government is reviewing more than 11,500 responses. </li>
    <li>In January 2025, the House of Lords introduced a series of detailed amendments to the Data (Use and Access) Bill, during the bill's Report stage.  During the debate, the amendments were described by Baroness Kidron as designed to set out how a copyright regime would work: "<em>Amendment 61 would ensure that all operators of web crawlers must comply with UK law if they are marketed in the UK. Amendments 62 and 63 would require operators to be transparent about their identity and purpose, and allow creatives to understand if their content had been stolen. Amendment 64 would give enforcement powers to the ICO and allow for a private right of action by copyright holders. Amendment 44A would require the ICO to report on its enforcement record. Finally, Amendment 65 would require the Secretary of State to review technical solutions that might support a strong copyright regime.</em>" [Hansard]</li>
    <li>The Data (Use and Access) Bill was subject to a considerable amount of Parliamentary ping pong between the Lords and the Commons with the revised amendments each time being stripped out by the government. The Data (Use and Access) Act 2025, as enacted, does not set out a copyright and AI regime, however Part 7 of the Act requires the government to publish an economic impact assessment considering each of the four policy options described in the Copyright and AI consultation (s 135). The assessment must consider the impact on both copyright owners and AI system developers and users.  The government must also prepare and publish a report on the use of copyright works in the development of AI systems.  The report must include proposals on technical measures and standards that may be used to control the use of copyright works to develop AI systems and the accessing of copyright works for that purpose eg by web crawlers (s 136).</li>
</ul>
<p>Since March 2023, for large developers, collaboration through structured voluntary licensing agreements has continued to grow steadily and it has been reported that the government, as part of its plan, is looking to provide support to licensing markets for all sizes of developers.     </p>
<p><strong>In the EU</strong></p>
<p>The EU AI Act entered into force on 1 August 2024 and is largely applicable by August 2026. The text provides for general-purpose AI (GPAI) systems such as ChatGPT, and the GPAI models they are based on (such as OpenAI's GPT-4), to have to adhere to transparency requirements.  These include drawing up technical documentation explaining how the model has been trained, how it performs, how it should be used and its energy use, complying with EU copyright law (in particular to obtain authorisation from or enable content owners to opt out from the text and data mining of their content as provided for under the EU DSM Copyright Directive) and disseminating "sufficiently detailed" summaries about the content used for training GPAI including its provenance and curation methods. A voluntary code of practice designed to help GPAI model providers who sign up to demonstrate compliance with their obligations under the EU AI Act, was published and approved in the summer of 2025. The code consists of three chapters: Transparency and Copyright, both addressing all providers of general-purpose AI models, and Safety and Security, relevant only to a limited number of providers of the most advanced models.</p>
<p>Notably, on the question of GPAI model providers identifying and respecting opt out rights, this will be done under the EU AI Act using methods including "state of the art" technologies and: "<em>Any provider placing a general-purpose AI model on the Union market should comply with this obligation, <strong>regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place</strong>. This is necessary to ensure a level playing field among providers of general-purpose AI models where no provider should be able to gain a competitive advantage in the EU market by applying lower copyright standards than those provided in the Union.</em>" (EU AI Act, Recital 106)</p>
<p>The detailed summaries should be comprehensive enough to allow rights holders to be able to exercise and enforce their rights, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used. The EU AI office, responsible for the implementation and enforcement of the EU AI Act, will provide a template. </p>
<p>The EU AI Act obligations relating to providers of GPAI models' compliance with EU copyright laws are applicable from August 2025 (for new models, and August 2027 for GPAI models placed on the market before 2 August 2025). However, the exception for text and data mining provided for under the EU DSM Copyright Directive that allows content owners to opt out from bulk scraping of their online content is proving challenging to apply in practice. There is currently no standard protocol to enable machine readable "opt out" or to expressly reserve rights. </p>
<p>To assist with this issue, the European Commission is currently working on developing a licensing market for the training data used to train AI systems like ChatGPT.  Some copyright owners such as Sony Music, the Society of Authors and the Creators’ Rights Alliance are pre-empting the EU AI Act (while providing a warning within the UK) by publicly reserving their rights in relation to text and data mining via a statement on their website and/or in a letter to various companies including AI developers.</p>
<p><strong>The output of FMs – works created by users</strong></p>
<p>As well as the possibility of training data related copyright infringements explained above, the outputs of AI FM models such as ChatGPT or Midjourney generated as the result of user prompts may also provide grounds for copyright infringement of third party original works. </p>
<p>For example, if you are the author of an artwork and find a markedly similar copy has been generated by a user of a FM model without your permission, you will have to make your case on copyright infringement. In showing infringement, as a first step, proof of copying of features from the protected work is required.  Then the question is whether what has been taken constitutes all or a substantial part of the copyright work. A challenge with user generated works will be to show that the output work was derived from the original copyright protected work (did the AI provider introduce it in its training data, was it introduced during the fine tuning process or did a user provide it as one of its prompts). The EU AI Act somewhat provides for this (see above) by allowing the copyright owner to see if their work is contained in a particular data set. The UK has also indicated that transparency requirements are likely to be coming.</p>
<p>In this scenario the users of FM (and/or AI providers) face potential liability for copyright infringement.  These claims may be low value, and challenging to prove for rights holders so this might be a low risk, but it nevertheless produces risk for AI users and providers. Consequently, a number of key players (Microsoft, Google and OpenAI) now provide offers to indemnify certain (mainly enterprise) users if they are subsequently sued for copyright infringement. Microsoft's Customer Copyright Commitment states that if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, they will defend the customer and pay the amount of any adverse judgments or settlements that result from the legal proceedings, as long as the customer has used the guardrails and content filters built into their products. OpenAI's "Copyright Shield" promises to step in and defend their customers, and pay the costs incurred, if they face claims of copyright infringement. This applies to generally available features of ChatGPT Enterprise and their developer platform. Note: some of these indemnities may include carve-outs and liability caps. </p>
<p><strong>Protection for the outputs of AI FM models</strong></p>
<p>Most public facing generative AI models are accessed via a platform or website and are therefore subject to website terms and conditions. ChatGPT states that: "<em>Ownership of Content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.</em>" </p>
<p>What is actually being assigned is an important consideration for businesses and individuals. For example, there seems to be high use of AI FM in the advertising sector. If you, as a user, have produced marketing materials with the assistance of a FM you are likely to want to prevent their unauthorised use by third parties as a normal part of your business' brand/content protection strategy.  This would not normally be problematic if they are created without AI FM assistance – then the copyright likely belongs to the company concerned as the employer of the employee author. However, most jurisdictions, including the UK, require that copyright protection only applies to works created by human authors and if the work is solely computer generated there may be a subsistence issue. This is because authorship and ownership of copyright is tied into the concept of "originality", that is, protection is only extended to works categorised as "original literary, dramatic, musical or artistic works". The work may of course be attributed to the developer of the FM in circumstances where the user's role is confined to a single simple prompt and the FM has been finely tuned to produce marketing materials – in this situation there are likely to be terms that assign the developer's rights in works to the end user. </p>
<p>In this scenario, the section of the Copyright Designs and Patents Act 1988 (CDPA 1988) that grants protection to computer-generated works (CGWs) is often raised. Section 9(3) states that the author in the case of CGW is the person by whom "the arrangements necessary for the creation of the work are undertaken". The problem with this section relates to the date of the Act: 1988. What the legislators may have had in mind at this time is something like the use of computers as digital aids in cartography. Now, however, this section is being applied to GenAI models. <br />
<br />
However, since 1988, there have been some developments when it comes to "originality". The test for originality has changed and now to be an original work, works must be "the author's own intellectual creation" whereby an author has been "able to express their creative abilities in the production of the work by making free and creative choices so as to stamp the work created with their personal touch…" That definition is not very CGW/AI friendly. Where works are created by entering prompts into a GenAI system (i.e. using it merely as a tool) there would be room to apply the "author's own intellectual creation" originality test. However, literary, dramatic, musical or artistic CGW are more problematic under this originality test if a work has no human author. Therefore, in order to claim authorship and ownership, squeezing out the human element may be the best approach until clarification is provided from the UK government or the courts. The position is not clear cut though and if you are creating content for a client, the Ts & Cs relied on historically for human authored work may not be effective in transferring absolute ownership. </p>
<p>In November 2023, a Chinese court did find that an AI generated image, created using Stable Diffusion, satisfied the requirements of "originality" and was capable of copyright protection.  The Beijing Internet Court found that the image had been created (using AI as a tool) in a way that reflected the ingenuity and original intellectual investment of human beings. In February 2023 in a US case concerning authorship of the images contained within Kristina Kashtanova's work: Zarya of the Dawn, the US Copyright Office took a different approach. The images were developed using the generative AI tool Midjourney. By its own description Midjourney does not interpret prompts as specific instructions to create a particular expressive result (Midjourney does not understand grammar, sentence structure, or words like humans) it instead converts words and phrases “into smaller pieces, called tokens, that can be compared to its training data and then uses them to generate an image. The US copyright office decided that the images claimed were not original works of authorship protected by copyright because they were produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author (the designer modifying the images produced by the AI model using subsequent prompts and inputs was not sufficient to fulfil the requirement for human creativity). They were therefore removed from the U.S. Copyright Office register as not copyrightable. Because of the significant distance between what a user may direct Midjourney to create and the visual material Midjourney actually produces, the U.S. Copyright Office found that Midjourney users are deemed to lack sufficient control over generated images to be treated as the “master mind” behind them.</p>
<p>In January 2025, the US Copyright Office registered "A Single Piece of American Cheese", an image created entirely with AI generated material via a technique called “inpainting” (the process of selectively modifying or regenerating parts of an image while maintaining consistency with the surrounding elements). The work was initially rejected but was later registered on the basis of active “selection, coordination, and arrangement of material generated by artificial intelligence” into a unified composition. In part the decision seems to reflect the amount and quality of evidence of creative decision making in creating the image. In its application, the registrant highlighted: the multi-stage process, iterative refinement and creative decision making elements in creating the work.   <br />
<br />
<strong>How might these issues impact those developing and interacting with FM?</strong></p>
<p>This is a complex area and tricky to navigate in a commercial setting given that the UK and many other jurisdictions are failing to reach a position and provide guidance.  However, it is worth keeping up to date on and in mind the following live issues: </p>
<ul>
    <li>the risk surrounding the use of data sets  </li>
    <li>that there may be a need to disclose the contents of data sets under the EU AI Act and the UK framework</li>
    <li>who owns FM outputs? Is an AI output as protectable as a human created work? </li>
</ul>
<div> </div>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{7883D715-9DC8-4A15-B137-324F13DC42D8}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/what-is-a-foundation-model/</link><title>What is a foundation model?</title><description><![CDATA[<p><strong>Foundation Models (FM), also often known as general purpose AI, are a type of AI technology that are trained on large datasets and are capable of carrying out a wide range of "general" tasks and operations. <sup>1</sup></strong></p>
<p>FMs can help businesses and individuals to improve communication, analyse data and automate tasks. They underpin the natural language processing chatbot ChatGPT, image generation tools such as Midjourney and the use of generative AI in productivity software.</p>
<p>The type of data a FM is exposed to during training will determine its 'type'.</p>
<ul>
    <li>Large language models (LLMs) are FMs which are trained on text data.</li>
    <li>Image generation models are FMs trained on image data (in addition to text data).</li>
    <li>Multi-modal FMs are FMs trained using several different data sources.</li>
</ul>
<p><strong>Developing and training FMs</strong></p>
<p>There are a number of steps required to develop, train and deploy a FM.</p>
<p><strong>Pre-training</strong></p>
<p>In the pre-training stage, the training data is collated from different sources and usually examined to allow for the extraction of harmful or irrelevant data. The data is commonly taken from publicly available sources through a process of web crawling, using open datasets and/or using proprietary data. </p>
<p>The datasets are then tokenised, which involves dividing the data into billions of small 'tokens'. In an LLM, each token may represent a word or parts of a word, whereas in image generation models, a token may represent smaller components of an image (such as a pixel). Through a process called 'self-attention', the model can weigh up the importance of each token and determine the probable relationships between them such as between the words cat, kitty and feline.</p>
<p>The model learns how to produce accurate outputs during the training process by using the parameters it is exposed to from the datasets to adjust its calculations. Sometimes referred to as "weights", a parameter is a connection chosen by the language model and learned during training. </p>
<p>FMs apply learned patterns from one task to another (transfer learning). </p>
<p><strong>Fine-tuning</strong></p>
<p>Fine-tuning is an optional next phase in which a pre-trained model receives additional capabilities or improvements using specific datasets. The main types are:</p>
<ul>
    <li>Alignment: fine-tuning a model to align its behaviour with the expectations or preferences of a human user for example to enable it to create music that matches certain moods. In order to prevent biased, false or harmful outputs, a machine learning technique known as reinforcement learning from human feedback (RLHF) is used to fine tune AI models by incorporating human preferences into their decision-making process. It involves humans rating or ranking the model's outputs, which is then used to train the model to produce responses that align more closely with human expectations or desired behaviours. RLHF trains the model to make decisions that maximise "rewards". The rewards function relies on humans feeding back which responses they prefer, to distinguish between wanted and unwanted behaviour. The response-rating preferences build a reward model that automatically estimates how high a human would score any given prompt response. This reward model is then applied to a (language) model to allow it to internally evaluate a series of responses and then select the response most likely to result in the greatest reward, optimising human preferences.  RLHF can be provided by paid contractors or directly from users.  Alignment is also used to teach the model to ‘speak like a machine’ so as not to mislead users. Examples of human-machine conversations from existing chatbots can be collated and used to fine-tune a pre-trained model to add this capability. </li>
    <li>Domain or task specific: fine-tuning a model to a specific domain or task using smaller, specialised datasets. For instance, a dataset containing legal documents could improve a model's ability to prepare legal documents or provide advice.</li>
    <li>Synthetic data: fine-tuning a model using artificially generated data, such as data from simulations, real data which has been artificially extended or new datasets created from existing AI models. While developers benefit from the lower cost of acquiring synthetic data at large scales (compared to human data), there is the potential risk of 'model collapse', where defective data from existing FM models pollute the generated synthetic data. </li>
</ul>
<p><strong>Inference</strong></p>
<p>At the inference stage of the development process, the user feeds new inputs into the model, which then uses its parameters to create a prediction. An inference refers to the process of models making predictions based on the new data received. It provides a test of how well the model can apply information learned during training to make a prediction.</p>
<p>For example, a customer using a customer service chatbot powered by an LLM such as ChatGPT inputs: "I need help with returning a damaged product." The chatbot then analyses the input, identifies key elements (e.g. "returning" and "damaged product"), and uses its trained parameters to predict the most appropriate response. It replies: "I’m sorry to hear that. Could you please provide your order number so I can assist you with the return process?" This demonstrates how the model applies its training to understand the context and generate a relevant, human-like response.</p>
<p><strong>Open vs closed source models</strong></p>
<p>FM developers may choose to develop and release a FM in either open or closed source. </p>
<p>An open-source FM model <span>such as Meta's LLaMA </span>can be shared widely and is free to use, subject to a licence (although the licence may prohibit commercial use). A licensee may be provided with the original FM code, model architecture, training data and potentially even the weights and biases, allowing them to mirror the training process and/or to fine-tune the FM without needing to go through the pre-training process.</p>
<p>A closed-source FM model <span>such as ChatGPT</span> is developed privately. Access to the model is usually restricted and controlled by those within the company, for example. Rather than releasing externally, these models are likely to be used for the company’s own initiatives and operations. </p>
<p><strong>Evaluation methods – testing the performance of FMs</strong></p>
<p>FM developers typically evaluate their own FMs to analyse their capabilities or to identify falsities in outputs. Different evaluation methods include:</p>
<ul>
    <li>Evaluating against static datasets of input-output pairs, which assess performance against a wide-range of criteria such as accuracy, multitask ability and robustness.</li>
    <li>Model based evaluation, where one or more other models are used to evaluate the FM. </li>
    <li>Using human raters who are asked or paid to carry out model specific evaluation tasks. This is considered to be the gold-standard for evaluation, but it can make comparison exercises across models and papers more difficult due to the tailored nature of the tasks and the different evaluation methods adopted by raters.</li>
    <li>The process of red teaming involves experts using deliberately misleading questions to identify faults. Red teaming is important because it can help to identify vulnerabilities or biases in the model.</li>
</ul>
<p><strong>How do businesses access FMs in downstream markets? </strong></p>
<p>There are already many areas where FMs are, or will likely become, incorporated into downstream markets. Downstream businesses can access FMs by:</p>
<p>Creating and developing a FM in-house to support the business' needs and objectives. Whilst this option offers businesses full control over an FM, the technical, costly and time-consuming development process means it is unfeasible for many businesses.</p>
<p>Collaborating with an established third-party FM provider to develop an existing FM. This may allow the business to fine-tune the FM with its own data and, in turn, take ownership over the fine-tuned FM. This would be a cheaper option but still requires businesses to invest money, time and expertise for the FM's development. </p>
<p>Purchasing application programming interface (API) access to a FM and FM deployment tools owned by a third party. This option is often much cheaper and faster to implement than developing a FM in-house. However, businesses will not have the opportunity to tailor the FM to business needs and will be reliant on a third-party product.</p>
<p>Offering a third-party FM plug-in to enhance services and extend functionality. For example, a business may opt to provide a plug-in which allows users to use FM based service such as ChatGPT. This is a very accessible option for businesses seeking to reap the benefits of incorporating an FM into its products and services without the cost, expertise and time required by other options.</p>
<p><em> </em></p>
<p><em><sup>1</sup> This summary contains public sector information licensed under the <a href="https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/">Open Government Licence v3.0</a>, specifically the Competition & Markets Authority's <a href="https://assets.publishing.service.gov.uk/media/65081d3aa41cc300145612c0/Full_report_.pdf">AI Foundation Models Initial Report</a> dated 18 September 2023.</em></p>
<p><em> </em></p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></description><pubDate>Wed, 11 Jun 2025 08:35:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Joshy Thomas, Rory Graham</authors:names><content:encoded><![CDATA[<p><strong>Foundation Models (FM), also often known as general purpose AI, are a type of AI technology that are trained on large datasets and are capable of carrying out a wide range of "general" tasks and operations. <sup>1</sup></strong></p>
<p>FMs can help businesses and individuals to improve communication, analyse data and automate tasks. They underpin the natural language processing chatbot ChatGPT, image generation tools such as Midjourney and the use of generative AI in productivity software.</p>
<p>The type of data a FM is exposed to during training will determine its 'type'.</p>
<ul>
    <li>Large language models (LLMs) are FMs which are trained on text data.</li>
    <li>Image generation models are FMs trained on image data (in addition to text data).</li>
    <li>Multi-modal FMs are FMs trained using several different data sources.</li>
</ul>
<p><strong>Developing and training FMs</strong></p>
<p>There are a number of steps required to develop, train and deploy a FM.</p>
<p><strong>Pre-training</strong></p>
<p>In the pre-training stage, the training data is collated from different sources and usually examined to allow for the extraction of harmful or irrelevant data. The data is commonly taken from publicly available sources through a process of web crawling, using open datasets and/or using proprietary data. </p>
<p>The datasets are then tokenised, which involves dividing the data into billions of small 'tokens'. In an LLM, each token may represent a word or parts of a word, whereas in image generation models, a token may represent smaller components of an image (such as a pixel). Through a process called 'self-attention', the model can weigh up the importance of each token and determine the probable relationships between them such as between the words cat, kitty and feline.</p>
<p>The model learns how to produce accurate outputs during the training process by using the parameters it is exposed to from the datasets to adjust its calculations. Sometimes referred to as "weights", a parameter is a connection chosen by the language model and learned during training. </p>
<p>FMs apply learned patterns from one task to another (transfer learning). </p>
<p><strong>Fine-tuning</strong></p>
<p>Fine-tuning is an optional next phase in which a pre-trained model receives additional capabilities or improvements using specific datasets. The main types are:</p>
<ul>
    <li>Alignment: fine-tuning a model to align its behaviour with the expectations or preferences of a human user for example to enable it to create music that matches certain moods. In order to prevent biased, false or harmful outputs, a machine learning technique known as reinforcement learning from human feedback (RLHF) is used to fine tune AI models by incorporating human preferences into their decision-making process. It involves humans rating or ranking the model's outputs, which is then used to train the model to produce responses that align more closely with human expectations or desired behaviours. RLHF trains the model to make decisions that maximise "rewards". The rewards function relies on humans feeding back which responses they prefer, to distinguish between wanted and unwanted behaviour. The response-rating preferences build a reward model that automatically estimates how high a human would score any given prompt response. This reward model is then applied to a (language) model to allow it to internally evaluate a series of responses and then select the response most likely to result in the greatest reward, optimising human preferences.  RLHF can be provided by paid contractors or directly from users.  Alignment is also used to teach the model to ‘speak like a machine’ so as not to mislead users. Examples of human-machine conversations from existing chatbots can be collated and used to fine-tune a pre-trained model to add this capability. </li>
    <li>Domain or task specific: fine-tuning a model to a specific domain or task using smaller, specialised datasets. For instance, a dataset containing legal documents could improve a model's ability to prepare legal documents or provide advice.</li>
    <li>Synthetic data: fine-tuning a model using artificially generated data, such as data from simulations, real data which has been artificially extended or new datasets created from existing AI models. While developers benefit from the lower cost of acquiring synthetic data at large scales (compared to human data), there is the potential risk of 'model collapse', where defective data from existing FM models pollute the generated synthetic data. </li>
</ul>
<p><strong>Inference</strong></p>
<p>At the inference stage of the development process, the user feeds new inputs into the model, which then uses its parameters to create a prediction. An inference refers to the process of models making predictions based on the new data received. It provides a test of how well the model can apply information learned during training to make a prediction.</p>
<p>For example, a customer using a customer service chatbot powered by an LLM such as ChatGPT inputs: "I need help with returning a damaged product." The chatbot then analyses the input, identifies key elements (e.g. "returning" and "damaged product"), and uses its trained parameters to predict the most appropriate response. It replies: "I’m sorry to hear that. Could you please provide your order number so I can assist you with the return process?" This demonstrates how the model applies its training to understand the context and generate a relevant, human-like response.</p>
<p><strong>Open vs closed source models</strong></p>
<p>FM developers may choose to develop and release a FM in either open or closed source. </p>
<p>An open-source FM model <span>such as Meta's LLaMA </span>can be shared widely and is free to use, subject to a licence (although the licence may prohibit commercial use). A licensee may be provided with the original FM code, model architecture, training data and potentially even the weights and biases, allowing them to mirror the training process and/or to fine-tune the FM without needing to go through the pre-training process.</p>
<p>A closed-source FM model <span>such as ChatGPT</span> is developed privately. Access to the model is usually restricted and controlled by those within the company, for example. Rather than releasing externally, these models are likely to be used for the company’s own initiatives and operations. </p>
<p><strong>Evaluation methods – testing the performance of FMs</strong></p>
<p>FM developers typically evaluate their own FMs to analyse their capabilities or to identify falsities in outputs. Different evaluation methods include:</p>
<ul>
    <li>Evaluating against static datasets of input-output pairs, which assess performance against a wide-range of criteria such as accuracy, multitask ability and robustness.</li>
    <li>Model based evaluation, where one or more other models are used to evaluate the FM. </li>
    <li>Using human raters who are asked or paid to carry out model specific evaluation tasks. This is considered to be the gold-standard for evaluation, but it can make comparison exercises across models and papers more difficult due to the tailored nature of the tasks and the different evaluation methods adopted by raters.</li>
    <li>The process of red teaming involves experts using deliberately misleading questions to identify faults. Red teaming is important because it can help to identify vulnerabilities or biases in the model.</li>
</ul>
<p><strong>How do businesses access FMs in downstream markets? </strong></p>
<p>There are already many areas where FMs are, or will likely become, incorporated into downstream markets. Downstream businesses can access FMs by:</p>
<p>Creating and developing a FM in-house to support the business' needs and objectives. Whilst this option offers businesses full control over an FM, the technical, costly and time-consuming development process means it is unfeasible for many businesses.</p>
<p>Collaborating with an established third-party FM provider to develop an existing FM. This may allow the business to fine-tune the FM with its own data and, in turn, take ownership over the fine-tuned FM. This would be a cheaper option but still requires businesses to invest money, time and expertise for the FM's development. </p>
<p>Purchasing application programming interface (API) access to a FM and FM deployment tools owned by a third party. This option is often much cheaper and faster to implement than developing a FM in-house. However, businesses will not have the opportunity to tailor the FM to business needs and will be reliant on a third-party product.</p>
<p>Offering a third-party FM plug-in to enhance services and extend functionality. For example, a business may opt to provide a plug-in which allows users to use FM based service such as ChatGPT. This is a very accessible option for businesses seeking to reap the benefits of incorporating an FM into its products and services without the cost, expertise and time required by other options.</p>
<p><em> </em></p>
<p><em><sup>1</sup> This summary contains public sector information licensed under the <a href="https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/">Open Government Licence v3.0</a>, specifically the Competition & Markets Authority's <a href="https://assets.publishing.service.gov.uk/media/65081d3aa41cc300145612c0/Full_report_.pdf">AI Foundation Models Initial Report</a> dated 18 September 2023.</em></p>
<p><em> </em></p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{B064D7F5-16B4-4B04-AC28-444AA2C1B2BE}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/</link><title>Part 1 - UK AI regulation</title><description><![CDATA[<p><em>This is Part 1 of 'Regulation of AI </em></p>
<p>The UK government has consistently said that it would adopt a pro-innovation and business-friendly approach to regulating AI. There is currently no AI-specific legislation in the UK. The government has been preparing its AI Bill but no draft has been released as yet. The AI Bill was intended to target the "most advanced AI models" and make existing voluntary commitments between companies and the government legally binding but these plans have changed. The government now plans to introduce a comprehensive AI bill in the next parliamentary session that will address safety and copyright issues.</p>
<p>In the absence of the AI Bill, guidance can be found in the government's White Paper published on 29 March 2023 and updated in its response to the White Paper in February 2024. Key elements of the White Paper are:</p>
<ul>
    <li>Five values-focused cross-sectoral principles for regulators to interpret and apply within their respective domains, intended to promote responsible AI use (see <a href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/">The Ethics of AI – the Digital Dilemma</a> for more information about the principles)</li>
    <li>No new AI regulator – instead existing regulators, using context-specific approaches will guide AI development</li>
    <li>Central support and monitoring via a steering committee to coordinate regulators</li>
</ul>
<p>Four key regulators are leading the way on implementing the AI principles under the umbrella of the Digital Regulation Cooperation Forum (DRCF): the Information Commissioner's Office (ICO), Ofcom, the Competition and Markets Authority (CMA) and the Financial Conduct Authority. The DRCF had set up the AI and Digital Hub, via a pilot, to advise on AI regulatory compliance in a coordinated way. Its initial term terminated in April 2025.</p>
<p>These four regulators also have their own approach to AI regulation – see table below.</p>
<table style="width: 692.667px; height: 422px;">
    <tbody>
        <tr>
            <td><strong><span style="text-decoration: underline;"> ICO</span></strong></td>
            <td><span style="text-decoration: underline;"><strong>CMA</strong></span></td>
            <td><span style="text-decoration: underline;"><strong>FCA</strong></span></td>
            <td><strong><span style="text-decoration: underline;">Ofcom</span></strong></td>
        </tr>
        <tr>
            <td>
            <ul>
                <li style="text-align: left;"> Published a 4-part consultation series on generative AI and data protection, which it responded to in December 2024</li>
                <li style="text-align: left;"><span>In January 2025, announced that it will publish a single set of rules for those developing or using AI<br />
                </span></li>
            </ul>
            </td>
            <td style="text-align: left;">
            <ul>
                <li>Launched an initial review into AI models in May 2023</li>
                <li>In the second stage of its review in April 2024, published an update paper and update report on AI models<br />
                <br />
                <br />
                </li>
            </ul>
            </td>
            <td style="text-align: left;">
            <ul>
                <li>Launched the AI Lab to allow the FCA, firms and wider stakeholders to discuss AI</li>
                <li>With the Bank of England, published their third survey on AI and machine learning<br />
                <br />
                <br />
                <br />
                </li>
            </ul>
            </td>
            <td>
            <ul>
                <li style="text-align: left;">Will be implementing and enforcing the Online Safety Act as it applies to generative AI tools<br />
                <br />
                <br />
                <br />
                <br />
                <br />
                <br />
                <br />
                <br />
                </li>
            </ul>
            </td>
        </tr>
    </tbody>
</table>
<div> </div>
<p>Further policy, tools and guidance for organisations will come from bodies such as the: (i) AI Security Institute; (ii) the AI Policy Directorate; and (iii) the Responsible Technology Adoption Unit.  </p>
<p>In January 2025, the government published its AI Opportunities Action Plan to ramp up AI adoption across the UK. Key initiatives include new AI Growth Zones to build more AI infrastructure, increasing the public compute capacity 20x, and creating a new National Data Library to harness the value in public data and support AI development.</p>
<p> <span>Lastly, a Private Members' Artificial Intelligence (Regulation) <a href="https://bills.parliament.uk/publications/53068/documents/4030">Bill</a> was introduced to the House of Lords in March 2025. The Bill aims to create an AI Authority that would collaborate with relevant regulators to construct regulatory sandboxes for AI. </span></p>]]></description><pubDate>Wed, 11 Jun 2025 08:32:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Ricky Cella</authors:names><content:encoded><![CDATA[<p><em>This is Part 1 of 'Regulation of AI </em></p>
<p>The UK government has consistently said that it would adopt a pro-innovation and business-friendly approach to regulating AI. There is currently no AI-specific legislation in the UK. The government has been preparing its AI Bill but no draft has been released as yet. The AI Bill was intended to target the "most advanced AI models" and make existing voluntary commitments between companies and the government legally binding but these plans have changed. The government now plans to introduce a comprehensive AI bill in the next parliamentary session that will address safety and copyright issues.</p>
<p>In the absence of the AI Bill, guidance can be found in the government's White Paper published on 29 March 2023 and updated in its response to the White Paper in February 2024. Key elements of the White Paper are:</p>
<ul>
    <li>Five values-focused cross-sectoral principles for regulators to interpret and apply within their respective domains, intended to promote responsible AI use (see <a href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/">The Ethics of AI – the Digital Dilemma</a> for more information about the principles)</li>
    <li>No new AI regulator – instead existing regulators, using context-specific approaches will guide AI development</li>
    <li>Central support and monitoring via a steering committee to coordinate regulators</li>
</ul>
<p>Four key regulators are leading the way on implementing the AI principles under the umbrella of the Digital Regulation Cooperation Forum (DRCF): the Information Commissioner's Office (ICO), Ofcom, the Competition and Markets Authority (CMA) and the Financial Conduct Authority. The DRCF had set up the AI and Digital Hub, via a pilot, to advise on AI regulatory compliance in a coordinated way. Its initial term terminated in April 2025.</p>
<p>These four regulators also have their own approach to AI regulation – see table below.</p>
<table style="width: 692.667px; height: 422px;">
    <tbody>
        <tr>
            <td><strong><span style="text-decoration: underline;"> ICO</span></strong></td>
            <td><span style="text-decoration: underline;"><strong>CMA</strong></span></td>
            <td><span style="text-decoration: underline;"><strong>FCA</strong></span></td>
            <td><strong><span style="text-decoration: underline;">Ofcom</span></strong></td>
        </tr>
        <tr>
            <td>
            <ul>
                <li style="text-align: left;"> Published a 4-part consultation series on generative AI and data protection, which it responded to in December 2024</li>
                <li style="text-align: left;"><span>In January 2025, announced that it will publish a single set of rules for those developing or using AI<br />
                </span></li>
            </ul>
            </td>
            <td style="text-align: left;">
            <ul>
                <li>Launched an initial review into AI models in May 2023</li>
                <li>In the second stage of its review in April 2024, published an update paper and update report on AI models<br />
                <br />
                <br />
                </li>
            </ul>
            </td>
            <td style="text-align: left;">
            <ul>
                <li>Launched the AI Lab to allow the FCA, firms and wider stakeholders to discuss AI</li>
                <li>With the Bank of England, published their third survey on AI and machine learning<br />
                <br />
                <br />
                <br />
                </li>
            </ul>
            </td>
            <td>
            <ul>
                <li style="text-align: left;">Will be implementing and enforcing the Online Safety Act as it applies to generative AI tools<br />
                <br />
                <br />
                <br />
                <br />
                <br />
                <br />
                <br />
                <br />
                </li>
            </ul>
            </td>
        </tr>
    </tbody>
</table>
<div> </div>
<p>Further policy, tools and guidance for organisations will come from bodies such as the: (i) AI Security Institute; (ii) the AI Policy Directorate; and (iii) the Responsible Technology Adoption Unit.  </p>
<p>In January 2025, the government published its AI Opportunities Action Plan to ramp up AI adoption across the UK. Key initiatives include new AI Growth Zones to build more AI infrastructure, increasing the public compute capacity 20x, and creating a new National Data Library to harness the value in public data and support AI development.</p>
<p> <span>Lastly, a Private Members' Artificial Intelligence (Regulation) <a href="https://bills.parliament.uk/publications/53068/documents/4030">Bill</a> was introduced to the House of Lords in March 2025. The Bill aims to create an AI Authority that would collaborate with relevant regulators to construct regulatory sandboxes for AI. </span></p>]]></content:encoded></item><item><guid isPermaLink="false">{857DAF83-B504-473E-8163-B9993A68EC82}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-role-of-ai-in-disputes/</link><title>The Role of AI in Disputes</title><description><![CDATA[<p><strong>A machine that understands language</strong></p>
<p>The key characteristic of LLMs that is relevant to dispute resolution in any context is their natural language processing capacity, i.e. their ability to "understand" words and concepts. This is the result of the initial training process that LLMs have gone through, where – in highly simplified terms – the models were trained to create a map of human language. This enables them to understand words, sentences and paragraphs and also draft the same. Fundamentally, this is a characteristic that does not require any further training, rather it works "out of the box", although further finetuning is of course possible.</p>
<p><strong>What LLMs can already do for disputes lawyers</strong></p>
<p>This significant technological advance means that dispute resolution lawyers will be able to make use of technology that is able to independently carry <span>out document-related tasks such as summarising documents, answering questions on a document set, categorising documents based on which issue they relate to, extracting names or figures from a document, preparing chronologies, etc. LLMs are also able to assist with drafting any kind of text with suitable prompting. LLMs are not limited to language – products that work with sound and are able to provide meeting summaries (not transcripts) of calls are already in use. LLMs can equally work with images, both still and live footage, if needed.</span></p>
<p><span> </span>The advantages are obvious as time-intensive and tedious tasks are plentiful in the litigation process. Especially in relation to disclosure, the heavy lifting can now be taken on by LLMs. They can assist with summarising calls, selecting documents for bundles, drafting timelines or interrogating sets of documents for specific issues. Commercially available AI tools can now conduct an entirely automated first-tier disclosure review: acquiring, pe-processing and transforming raw document data from pleadings, agreeing keywords and automatically reviewing an entire document set for first-tier relevance. Human legal professionals then step in to verify the first-tier results and conduct the second-tier review.</p>
<p>Drafting is another area where LLMs will be able to assist by providing first drafts of correspondence, questions for witnesses, or other documents. Generally, immense productivity gains can be expected from using this kind of technology, freeing up human time. Separately, LLMs that have been connected to legal research databases are coming onto the market now, which promise to fast-track complex legal research tasks.</p>
<p><strong>Real risks, and real mitigation strategies</strong></p>
<p><span>The risks in relation to this technology are very real, but possibly over-emphasised in a traditionally risk-averse profession. </span>Much has been made of the risk of hallucinations, i.e. the propensity of LLMs to occasionally provide answers that do not correspond to reality – they are making things up, or more accurately, providing an output based on a prediction that happens to not match reality. Risk mitigation strategies may well be sufficient to address this problem. The main ones are to force an LLM to cite documentary sources for every answer it gives, or to use so-called grounding techniques to check the output against a source of truth. Issues of bias in the training data will remain a problem, although guardrails attempt to mitigate this issue. In terms of security and data protection, special <span>enterprise versions of LLMs promise a secure environment for client data where nothing is used to train the underlying LLM.</span></p>
<p><span></span><strong>What is now and what lies ahead</strong></p>
<p><span>The advent of LLMs should be seen as a pivotal moment for dispute resolution lawyers. This is not a prediction of the far future – rather, everything discussed in this article is technology that is available right now. It is also likely that even more complex tasks will be performed by LLMs at a later point in time, although it is currently difficult to predict what this will look like exactly. LLMs are constantly being improved and developed at breakneck speed, and lawyers should take note that major new developments tend to appear in this space as a matter of weeks or months, rather than years.</span></p>]]></description><pubDate>Tue, 10 Jun 2025 15:32:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Daniel Hemming, Ricky Cella, Joshy Thomas</authors:names><enclosure url="https://www.rpclegal.com/-/media/rpc/redesign-images/thinking-tiles/wide/disputes-3---thinking-tile-wide.jpg?rev=953aab10ec3f421d9052d7e4eddf914f&amp;hash=5E6EBA95561892AF46235BD62E35CC9B" type="image/jpeg" medium="image" /><content:encoded><![CDATA[<p><strong>A machine that understands language</strong></p>
<p>The key characteristic of LLMs that is relevant to dispute resolution in any context is their natural language processing capacity, i.e. their ability to "understand" words and concepts. This is the result of the initial training process that LLMs have gone through, where – in highly simplified terms – the models were trained to create a map of human language. This enables them to understand words, sentences and paragraphs and also draft the same. Fundamentally, this is a characteristic that does not require any further training, rather it works "out of the box", although further finetuning is of course possible.</p>
<p><strong>What LLMs can already do for disputes lawyers</strong></p>
<p>This significant technological advance means that dispute resolution lawyers will be able to make use of technology that is able to independently carry <span>out document-related tasks such as summarising documents, answering questions on a document set, categorising documents based on which issue they relate to, extracting names or figures from a document, preparing chronologies, etc. LLMs are also able to assist with drafting any kind of text with suitable prompting. LLMs are not limited to language – products that work with sound and are able to provide meeting summaries (not transcripts) of calls are already in use. LLMs can equally work with images, both still and live footage, if needed.</span></p>
<p><span> </span>The advantages are obvious as time-intensive and tedious tasks are plentiful in the litigation process. Especially in relation to disclosure, the heavy lifting can now be taken on by LLMs. They can assist with summarising calls, selecting documents for bundles, drafting timelines or interrogating sets of documents for specific issues. Commercially available AI tools can now conduct an entirely automated first-tier disclosure review: acquiring, pe-processing and transforming raw document data from pleadings, agreeing keywords and automatically reviewing an entire document set for first-tier relevance. Human legal professionals then step in to verify the first-tier results and conduct the second-tier review.</p>
<p>Drafting is another area where LLMs will be able to assist by providing first drafts of correspondence, questions for witnesses, or other documents. Generally, immense productivity gains can be expected from using this kind of technology, freeing up human time. Separately, LLMs that have been connected to legal research databases are coming onto the market now, which promise to fast-track complex legal research tasks.</p>
<p><strong>Real risks, and real mitigation strategies</strong></p>
<p><span>The risks in relation to this technology are very real, but possibly over-emphasised in a traditionally risk-averse profession. </span>Much has been made of the risk of hallucinations, i.e. the propensity of LLMs to occasionally provide answers that do not correspond to reality – they are making things up, or more accurately, providing an output based on a prediction that happens to not match reality. Risk mitigation strategies may well be sufficient to address this problem. The main ones are to force an LLM to cite documentary sources for every answer it gives, or to use so-called grounding techniques to check the output against a source of truth. Issues of bias in the training data will remain a problem, although guardrails attempt to mitigate this issue. In terms of security and data protection, special <span>enterprise versions of LLMs promise a secure environment for client data where nothing is used to train the underlying LLM.</span></p>
<p><span></span><strong>What is now and what lies ahead</strong></p>
<p><span>The advent of LLMs should be seen as a pivotal moment for dispute resolution lawyers. This is not a prediction of the far future – rather, everything discussed in this article is technology that is available right now. It is also likely that even more complex tasks will be performed by LLMs at a later point in time, although it is currently difficult to predict what this will look like exactly. LLMs are constantly being improved and developed at breakneck speed, and lawyers should take note that major new developments tend to appear in this space as a matter of weeks or months, rather than years.</span></p>]]></content:encoded></item><item><guid isPermaLink="false">{4A4253D6-44C3-46C7-BE27-1D0EAFB30E7B}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/ai-and-privacy-10-questions-to-ask/</link><title>AI and Privacy – 10 Questions to Ask</title><description><![CDATA[<p>UK data protection law applies to AI as it would any other technology. </p>
<p>UK data protection law applies to AI as it would any other technology. Companies therefore need to ensure that they meet GDPR standards when processing personal data in the context of any AI used in their business. Although the over-arching legal framework is the same, the nature of AI technology means that businesses are presented with new privacy-related risks that need to be addressed. We set out in this section 10 key questions to ask yourself at the outset when developing or deploying AI solutions in your business. Additional general considerations for off-the-shelf AI solutions are set out <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/ai-as-a-service-key-issues/" target="_blank">here</a>.  <br />
<br />
<strong>1.<span> </span>What will your AI system do?</strong></p>
<p><strong></strong>It is a basic question – but you need to understand the purpose of your AI system to understand why and how you will be processing personal data in the context of the system. The ICO considers AI-related processing to be high risk and therefore a data protection impact assessment (DPIA) should be carried out to assess the privacy-related risks posed by your system and your proposed mitigations. </p>
<p>Where you are exploring the use of AI or engaging in pilot projects, the purpose of your AI system may not be apparent or may change over time. Similarly, your purposes for processing personal data may change over the lifecycle of the AI system, for example, training the system vs using the system to make decisions. At the same time, you need to consider the lawful basis for processing data in your AI system, at each stage of the AI system, and for each category of personal data processed by your AI system. This, too, can change as your AI system evolves such that you may need a different lawful basis or, if you were relying on the legitimate interests lawful basis, you may now need to perform a new legitimate interests assessment. Updating your DPIA regularly is a good way to capture changes and assess the impact of such changes on your compliance. </p>
<p>You should also consider whether the data processing carried out by your system is necessary and proportionate, or if there are potentially more privacy-preserving alternatives. Ultimately, your system should incorporate data protection by design and by default.<br />
<br />
<strong>2.<span> </span>What are your roles and responsibilities?</strong></p>
<p><strong></strong>Many companies are working collaboratively with tech providers to explore AI's potential for their business. This, and the inherent complexity of AI systems, mean that the usual analysis of whether parties are acting as controller or processor becomes a lot more challenging. You should be clear about who is responsible for making decisions about different aspects of the AI model (including if any are made jointly) and at which stage of the AI lifecycle (e.g. pre-training vs further training and configuration). This will then drive your obligations under the law and the contractual framework you should put in place to document the rights and obligations of the parties e.g. data processing agreements and data sharing agreements. <br />
<br />
<strong>3.<span> </span>What will be used as input data?</strong></p>
<p><strong></strong>AI systems need vast amounts of data for training purposes. You will need to consider the various data sets that will be input into your AI system (as part of training, configuration or regular use) and if you are compliant in respect of each of them. How much personal data and special category data exists in those data sets? Are you able to anonymise data before it is ingested into the system (thereby taking it outside the scope of the GDPR) or will this significantly impact the accuracy of the model? Are you able to use synthetic data sets to train the system instead as these present lower privacy risk? Bear in mind, however, that lower quality data will have an impact on the accuracy of the AI system. Whilst AI systems need not be 100% accurate, they should not be statistically flawed. Poor data may also impact fairness – discussed further below.   <br />
<br />
<strong>4.<span> </span>What will be the output data?</strong></p>
<p><strong></strong>Consider the types of data that will be generated by your AI system, and your GDPR obligations in respect of each of them. The power of AI systems to analyse and find patterns across massive data sets also means that you may be processing personal data without expecting to do so. For example, where you use AI systems to make inferences about individuals or groups, and the inference relates to an identifiable person, this may be personal data or even special category data depending on the circumstances.   <br />
<br />
<strong>5.<span> </span>What are the data flows?</strong></p>
<p><strong></strong>AI models require extensive processing power and are therefore typically hosted by the AI solution provider. Consider how personal data moves through your AI system. Is it sent from your systems to the provider, and if so is it commingled with other customers' data, or are you able to retain it in your own 'walled garden' instance of the AI model? Is it transferred out of the UK? If so you will need to ensure that a transfer mechanism under the GDPR has been put in place and a transfer risk assessment has been carried out. Note that the risks of any transfer are heightened because of the volumes of data processed by the AI system. Consider also how long you retain data and if this aligns with your data retention policies. <br />
<br />
<strong>6.<span> </span>How do you ensure your AI system is fair?</strong></p>
<p><strong></strong>Your AI systems (and any decisions made by them) must be fair and must not produce outcomes which are discriminatory or biased against individuals. Bias can occur at multiple points in the AI lifecycle and may not be apparent. For example, data sets that reflect historical biases or lack data for certain categories of data subjects may result in AI models being inadvertently trained to perpetuate bias. Therefore, you must plan to mitigate the risk of bias from the outset, for example, assessing the quality and neutrality of data inputs, engaging with a broad range of stakeholders to identify bias, and mapping out the potential effects of AI decisions on minority groups.<br />
<br />
<strong>7.<span> </span>Will you be carrying out automated decision-making?</strong></p>
<p><strong></strong>Article 22 of the GDPR restricts automated decision-making that produces legal or similarly significant effects for data subjects without any meaningful human involvement. If your AI system is likely to do this, you will need to assess the decision being made and how human involvement should be incorporated into the process for it to have meaningful effect. EU case law (that is influential on the UK regulator) also shows that "decision" may be interpreted broadly and can encompass even interim acts that play a determining role in the final decision. You should also ensure that any employees involved in the decision understand the importance of their review and that it is not merely a 'tick box' exercise. <br />
<br />
<strong>8.<span> </span>How do you keep your AI system secure? </strong></p>
<p><strong></strong>The sheer volumes of data used by any AI system exponentially increases the risk of a data breach. Any existing technical and organisational measures you implement to keep your systems secure must be updated to protect against novel security risks faced by AI systems (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/" target="_blank">here</a> for examples of these). You will need to adopt a more holistic approach to systems security as your AI solution will no doubt be integrated with various internal and third party systems. As industry standards around AI security are still being developed, ensure you remain current with the prevailing best practice from time to time.  <br />
<br />
<strong>9.<span> </span>What must you tell data subjects?</strong></p>
<p><strong></strong>You will need to be transparent with data subjects as to how their personal data is processed by your AI systems. This may be difficult to do, as AI decision-making is sophisticated and frequently opaque. For this reason, you must deliberately design your AI system to be explainable and understandable by your data subjects and you should describe how the decisions made by your AI system are fair and avoid bias. Consider producing an explainability statement alongside your privacy policy to set out this information. <br />
<br />
<strong>10. How do you ensure data subject rights?</strong></p>
<p>Your AI system must be designed to accommodate the data subject rights enshrined in the GDPR throughout the project lifecycle, for example, the rights to access, rectification, and erasure. This may be challenging depending on the volume of data processed, and if data sets are modified or commingled for training purposes. However, you would still need to take reasonable steps to comply with the data subject's request. Similarly, if there are challenges in embedding data subjects' right to withdraw consent (due to the time and effort needed to 'untrain' models) then you must consider whether consent is a feasible lawful basis to rely on in the first place. </p>
<p><span>In addition to asking yourself these 10 questions, ensure you follow guidance produced by your relevant data protection regulators. In the UK, the Information Commissioner has produced various sources of guidance on developing, deploying and using AI, including the "<a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/">AI and data protection risk toolkit</a>", guidance on <a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/">explaining decisions made with AI</a> and the ICO's response to its generative AI consultation. The ICO has also promised a 'single set of rules' on AI and data protection which would include details on areas that could not be covered in the consultation. In respect of the EU GDPR, the European Data Protection Board has produced an <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en">opinion on AI models</a> and a <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/support-pool-experts-projects/ai-privacy-risks-mitigations-large_en">risk management methodology</a> for managing privacy risks. </span></p>
<p>
</p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></description><pubDate>Tue, 10 Jun 2025 15:24:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Jon Bartley</authors:names><content:encoded><![CDATA[<p>UK data protection law applies to AI as it would any other technology. </p>
<p>UK data protection law applies to AI as it would any other technology. Companies therefore need to ensure that they meet GDPR standards when processing personal data in the context of any AI used in their business. Although the over-arching legal framework is the same, the nature of AI technology means that businesses are presented with new privacy-related risks that need to be addressed. We set out in this section 10 key questions to ask yourself at the outset when developing or deploying AI solutions in your business. Additional general considerations for off-the-shelf AI solutions are set out <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/ai-as-a-service-key-issues/" target="_blank">here</a>.  <br />
<br />
<strong>1.<span> </span>What will your AI system do?</strong></p>
<p><strong></strong>It is a basic question – but you need to understand the purpose of your AI system to understand why and how you will be processing personal data in the context of the system. The ICO considers AI-related processing to be high risk and therefore a data protection impact assessment (DPIA) should be carried out to assess the privacy-related risks posed by your system and your proposed mitigations. </p>
<p>Where you are exploring the use of AI or engaging in pilot projects, the purpose of your AI system may not be apparent or may change over time. Similarly, your purposes for processing personal data may change over the lifecycle of the AI system, for example, training the system vs using the system to make decisions. At the same time, you need to consider the lawful basis for processing data in your AI system, at each stage of the AI system, and for each category of personal data processed by your AI system. This, too, can change as your AI system evolves such that you may need a different lawful basis or, if you were relying on the legitimate interests lawful basis, you may now need to perform a new legitimate interests assessment. Updating your DPIA regularly is a good way to capture changes and assess the impact of such changes on your compliance. </p>
<p>You should also consider whether the data processing carried out by your system is necessary and proportionate, or if there are potentially more privacy-preserving alternatives. Ultimately, your system should incorporate data protection by design and by default.<br />
<br />
<strong>2.<span> </span>What are your roles and responsibilities?</strong></p>
<p><strong></strong>Many companies are working collaboratively with tech providers to explore AI's potential for their business. This, and the inherent complexity of AI systems, mean that the usual analysis of whether parties are acting as controller or processor becomes a lot more challenging. You should be clear about who is responsible for making decisions about different aspects of the AI model (including if any are made jointly) and at which stage of the AI lifecycle (e.g. pre-training vs further training and configuration). This will then drive your obligations under the law and the contractual framework you should put in place to document the rights and obligations of the parties e.g. data processing agreements and data sharing agreements. <br />
<br />
<strong>3.<span> </span>What will be used as input data?</strong></p>
<p><strong></strong>AI systems need vast amounts of data for training purposes. You will need to consider the various data sets that will be input into your AI system (as part of training, configuration or regular use) and if you are compliant in respect of each of them. How much personal data and special category data exists in those data sets? Are you able to anonymise data before it is ingested into the system (thereby taking it outside the scope of the GDPR) or will this significantly impact the accuracy of the model? Are you able to use synthetic data sets to train the system instead as these present lower privacy risk? Bear in mind, however, that lower quality data will have an impact on the accuracy of the AI system. Whilst AI systems need not be 100% accurate, they should not be statistically flawed. Poor data may also impact fairness – discussed further below.   <br />
<br />
<strong>4.<span> </span>What will be the output data?</strong></p>
<p><strong></strong>Consider the types of data that will be generated by your AI system, and your GDPR obligations in respect of each of them. The power of AI systems to analyse and find patterns across massive data sets also means that you may be processing personal data without expecting to do so. For example, where you use AI systems to make inferences about individuals or groups, and the inference relates to an identifiable person, this may be personal data or even special category data depending on the circumstances.   <br />
<br />
<strong>5.<span> </span>What are the data flows?</strong></p>
<p><strong></strong>AI models require extensive processing power and are therefore typically hosted by the AI solution provider. Consider how personal data moves through your AI system. Is it sent from your systems to the provider, and if so is it commingled with other customers' data, or are you able to retain it in your own 'walled garden' instance of the AI model? Is it transferred out of the UK? If so you will need to ensure that a transfer mechanism under the GDPR has been put in place and a transfer risk assessment has been carried out. Note that the risks of any transfer are heightened because of the volumes of data processed by the AI system. Consider also how long you retain data and if this aligns with your data retention policies. <br />
<br />
<strong>6.<span> </span>How do you ensure your AI system is fair?</strong></p>
<p><strong></strong>Your AI systems (and any decisions made by them) must be fair and must not produce outcomes which are discriminatory or biased against individuals. Bias can occur at multiple points in the AI lifecycle and may not be apparent. For example, data sets that reflect historical biases or lack data for certain categories of data subjects may result in AI models being inadvertently trained to perpetuate bias. Therefore, you must plan to mitigate the risk of bias from the outset, for example, assessing the quality and neutrality of data inputs, engaging with a broad range of stakeholders to identify bias, and mapping out the potential effects of AI decisions on minority groups.<br />
<br />
<strong>7.<span> </span>Will you be carrying out automated decision-making?</strong></p>
<p><strong></strong>Article 22 of the GDPR restricts automated decision-making that produces legal or similarly significant effects for data subjects without any meaningful human involvement. If your AI system is likely to do this, you will need to assess the decision being made and how human involvement should be incorporated into the process for it to have meaningful effect. EU case law (that is influential on the UK regulator) also shows that "decision" may be interpreted broadly and can encompass even interim acts that play a determining role in the final decision. You should also ensure that any employees involved in the decision understand the importance of their review and that it is not merely a 'tick box' exercise. <br />
<br />
<strong>8.<span> </span>How do you keep your AI system secure? </strong></p>
<p><strong></strong>The sheer volumes of data used by any AI system exponentially increases the risk of a data breach. Any existing technical and organisational measures you implement to keep your systems secure must be updated to protect against novel security risks faced by AI systems (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/" target="_blank">here</a> for examples of these). You will need to adopt a more holistic approach to systems security as your AI solution will no doubt be integrated with various internal and third party systems. As industry standards around AI security are still being developed, ensure you remain current with the prevailing best practice from time to time.  <br />
<br />
<strong>9.<span> </span>What must you tell data subjects?</strong></p>
<p><strong></strong>You will need to be transparent with data subjects as to how their personal data is processed by your AI systems. This may be difficult to do, as AI decision-making is sophisticated and frequently opaque. For this reason, you must deliberately design your AI system to be explainable and understandable by your data subjects and you should describe how the decisions made by your AI system are fair and avoid bias. Consider producing an explainability statement alongside your privacy policy to set out this information. <br />
<br />
<strong>10. How do you ensure data subject rights?</strong></p>
<p>Your AI system must be designed to accommodate the data subject rights enshrined in the GDPR throughout the project lifecycle, for example, the rights to access, rectification, and erasure. This may be challenging depending on the volume of data processed, and if data sets are modified or commingled for training purposes. However, you would still need to take reasonable steps to comply with the data subject's request. Similarly, if there are challenges in embedding data subjects' right to withdraw consent (due to the time and effort needed to 'untrain' models) then you must consider whether consent is a feasible lawful basis to rely on in the first place. </p>
<p><span>In addition to asking yourself these 10 questions, ensure you follow guidance produced by your relevant data protection regulators. In the UK, the Information Commissioner has produced various sources of guidance on developing, deploying and using AI, including the "<a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/">AI and data protection risk toolkit</a>", guidance on <a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/">explaining decisions made with AI</a> and the ICO's response to its generative AI consultation. The ICO has also promised a 'single set of rules' on AI and data protection which would include details on areas that could not be covered in the consultation. In respect of the EU GDPR, the European Data Protection Board has produced an <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en">opinion on AI models</a> and a <a href="https://www.edpb.europa.eu/our-work-tools/our-documents/support-pool-experts-projects/ai-privacy-risks-mitigations-large_en">risk management methodology</a> for managing privacy risks. </span></p>
<p>
</p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{F9AEF83C-0578-4D3F-86E7-6F41E5876C8E}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/</link><title>The Ethics of AI - The Digital Dilemma</title><description><![CDATA[<p>The possible benefits of AI and the ways in which it may transform a wide range of sectors is already evident.  However, along with more powerful AI comes new or heightened risks.  Given these risks, ethical principles are necessary to guide the development and deployment of AI systems in a manner that maximises their benefits while minimising harm. </p>
<p>Ethical principles tailored to the responsible design and use of AI systems have been put forward by a number of different organisations across the globe.  Whilst there is not a single agreed version of these ethical principles, they tend to follow similar themes and highlight a number of potential harms which should be considered in the development and use of AI tools. </p>
<p><strong>The "starting five" principles</strong></p>
<p>The UK government's AI White Paper published on 29 March 2023 lists five values-focused cross-sectoral principles for regulators to interpret and apply within their respective domains, intended to promote the ethical use of AI: </p>
<ul style="list-style-type: disc;">
    <li><span>safety, security and robustness</span></li>
    <li><span>appropriate transparency and explainability</span></li>
    <li><span>fairness</span></li>
    <li><span>accountability and governance</span></li>
    <li><span>contestability and redress. </span></li>
</ul>
<p>The government confirmed the same principles in their response to the consultation to the AI White Paper which was published on 6 February 2024 (Consultation Response). </p>
<p>These principles are pivotal to the government's approach to regulating AI in the UK.  They provide a framework which the regulators must apply, with the intention of allowing them to do so in a proportionate and agile manner given the level of risk which is determined by where and how AI is used in a particular context. See Part 1 - <a href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/">AI Regulation in the UK</a> for information about the UK's regulatory approach.  </p>
<p>We will explain each of the key principles in turn, noting the unavoidable interplay between the various principles and the possible harms that they seek to address.</p>
<p><strong>Safety, security and robustness</strong>.   AI systems must be safe, and should be designed and deployed, and operate throughout the lifecycle, in a manner that minimises the risk of harm to individuals and society.  These harms could be deliberate or accidental, and may affect individuals, groups, organisations or even nations, and may take various forms, such physical, psychological, social, or economic harms.        </p>
<p>AI systems must <span style="color: #0b0c0c;">be technically secure, and </span>be protected against unauthorised access, manipulation or attacks that could lead to harmful outcomes.  Security measures are crucial to protect against malicious use, such as spreading misinformation, stealing personal data or disrupting critical infrastructure.  <span style="color: #0b0c0c;">Developers should consider the security threats that could apply at each stage of the AI life cycle and embed resilience to these threats into their systems.</span></p>
<p>As for robustness, AI systems should be reliable and perform consistently and as intended under a wide range of conditions. They should be able to handle unexpected situations or inputs without failing or producing erroneous outcomes.  This includes being resilient to changes in their operating environment and being able to recover from errors or failures. </p>
<p><strong>Transparency and explainability</strong>. The principle of transparency refers to the need for information about an AI system to be communicated to relevant stakeholders.  This means that an appropriate level of information about the processes, decisions and operations of an AI system should be accessible and comprehensible, both to the developers and engineers who designed the system but also to the end users and other stakeholders who may be affected by its use.  E<span style="color: #0b0c0c;">xplainability refers to the ability to interpret and understand the decision-making processes of an AI system.  Otherwise, </span>the inability to see how deep-learning systems make decisions creates what is known as the 'black box problem'. Opacity in decision-making is problematic in several ways, including difficulties in diagnosing and fixing issues and its potential to reflect or amplify societal or dataset biases without the business deploying the AI knowing. In practice, however, explainability may be easier said than done in some cases, as the logic and decision-making in AI systems cannot always be meaningfully explained in a way that can be understood by humans.  This could involve simplifying complex models, using visualisations or providing simplified rules that approximate the model's decision making-process.</p>
<p style="margin-left: 40px;">Transparency and explainability should be proportionate to the risks that an AI system may present, but in any event is necessary to afford regulators and stakeholders sufficient visibility of, and information about, an AI system and its inputs and outputs to give effect to the other ethical principles (for example, for regulators to identify accountability and for individuals who may have been affected by an AI decision to challenge the decision and seek redress if necessary). </p>
<p><strong>Fairness</strong>.  Fairness includes identifying and mitigating biases to prevent discriminatory outcomes caused by AI systems, and ensuring the use of AI systems does not undermine the legal rights of individuals or organisations.  In order to do so, fairness needs to be considered in every aspect of AI. This would include:</p>
<ol style="list-style-type: disc;">
    <li><span>data fairness – AI systems can inadvertently learn and in turn perpetuate or amplify societal biases through biased training data or algorithmic design, and so only fair and equitable datasets should be used, or training examples should be re-weighted if required</span></li>
    <li>design fairness – using reasonable features, processes, and analytical structures in the AI architecture</li>
    <li>outcome fairness - preventing the system from having any discriminatory impact</li>
    <li>implementation fairness - implementing the system in an unbiased way.</li>
</ol>
<p>Ensuring fairness in AI also involves adhering to legal standards and ethical norms.  <span style="color: #0b0c0c;">Fairness is a concept which underpins many areas of law and regulation, such equality and human rights, data protection, consumer and competition law, and many sector-specific regulatory requirements (for instance the consumer duty and other consumer protections in the financial services sector).</span> This means that AI systems should comply with anti-discrimination laws and ethical guidelines to promote justice and prevent harm.</p>
<p style="margin: 15pt 0cm;"><strong>Accountability and governance</strong>. It is crucial to have clear lines of responsibility for the decisions made by an AI system, in order to be able to identify and hold responsible the relevant parties for the decisions and any harm or unintended consequences that arise as a result of the use of AI systems.  This is essential <span style="color: #0b0c0c;">for creating business certainty (such as allocating liability in an AI supply chain) while also ensuring regulatory compliance.  </span> </p>
<p style="margin: 15pt 0cm;">Appropriate governance frameworks should be in place to oversee the supply and use of AI systems, incorporating ethical guidelines and standards for AI development and usage.  <span style="color: #0b0c0c;">Assurance techniques such as impact assessments may assist in identifying risks early in the development life cycle, which can in turn be mitigated through appropriate safeguards and governance mechanisms.  </span></p>
<p style="margin: 15pt 0cm;"><span style="color: #0b0c0c;">Once in use, r</span>egular auditing of AI systems to ensure they operate as intended and adhere to the required standards and in compliance with the ever-changing regulations and standards can also be useful.  Engaging with a wide range of stakeholders (experts but also those potentially impacted by AI systems) can also help to shape robust ethical AI governance, identify and avoid potential ethical issues, and spot opportunities to improve ethical standards and practices in AI. Retaining a "human in the loop", by using AI in such a way that it does not replace human judgement and decision making, but rather augments it, is also vital.  </p>
<p style="margin: 15pt 0cm;">Ethical AI governance should very much be seen as an ongoing process; as AI technologies and their societal impacts evolve, governance frameworks should also adapt.  </p>
<p><strong>Contestability and redress</strong>.<strong>  </strong>This principle of contestability refers to the ability of users and affected parties to challenge and seek rectification for decisions made by AI systems that impact them.  This is particularly important when these decisions have significant consequences on people's lives.  For AI decisions to be contestable, the systems need to be transparent about how decisions are made.  Mechanisms through which challenges can be made also need to be provided. This could include user interfaces for feedback, human oversight where decisions can be reviewed and clear processes for escalating concerns.<br />
<br />
Redress involves correcting wrong decisions and where necessary providing avenues for affected individuals to seek compensation or other remedies in cases where the individual believes they have been unfairly treated by an AI system or it otherwise causes harm.  Beyond addressing individual grievances, redress also involves taking feedback and challenges to improve the AI system.  This could mean retraining models with more diverse data, adjusting algorithms to eliminate biases, or refining decision-making processes.  Effective redress mechanisms are usually supported by robust policy and legal frameworks that define the rights of individuals and the obligations of AI developers and deployers.  These frameworks can provide guidelines for the types of redress available and the procedures for seeking it.</p>
<p>The UK’s non-statutory approach to date has meant that new rights or new routes to redress have not been implemented and it's unclear whether the proposed AI Bill will include any. See <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/" target="_blank">Part 1 - AI Regulation in the UK</a> for more details. </p>
<p><strong>Moving forward</strong></p>
<p> <span>Implementing these ethical principles is complex and multifaceted, particularly in the face of a regulatory regime that is still taking shape; it requires ongoing effort, multidisciplinary collaboration and continuous evaluation and adaptation of AI systems as technology and societal norms evolve.</span></p>
<p>
</p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></description><pubDate>Tue, 10 Jun 2025 15:19:00 +0100</pubDate><category>Artificial intelligence</category><authors:names></authors:names><content:encoded><![CDATA[<p>The possible benefits of AI and the ways in which it may transform a wide range of sectors is already evident.  However, along with more powerful AI comes new or heightened risks.  Given these risks, ethical principles are necessary to guide the development and deployment of AI systems in a manner that maximises their benefits while minimising harm. </p>
<p>Ethical principles tailored to the responsible design and use of AI systems have been put forward by a number of different organisations across the globe.  Whilst there is not a single agreed version of these ethical principles, they tend to follow similar themes and highlight a number of potential harms which should be considered in the development and use of AI tools. </p>
<p><strong>The "starting five" principles</strong></p>
<p>The UK government's AI White Paper published on 29 March 2023 lists five values-focused cross-sectoral principles for regulators to interpret and apply within their respective domains, intended to promote the ethical use of AI: </p>
<ul style="list-style-type: disc;">
    <li><span>safety, security and robustness</span></li>
    <li><span>appropriate transparency and explainability</span></li>
    <li><span>fairness</span></li>
    <li><span>accountability and governance</span></li>
    <li><span>contestability and redress. </span></li>
</ul>
<p>The government confirmed the same principles in their response to the consultation to the AI White Paper which was published on 6 February 2024 (Consultation Response). </p>
<p>These principles are pivotal to the government's approach to regulating AI in the UK.  They provide a framework which the regulators must apply, with the intention of allowing them to do so in a proportionate and agile manner given the level of risk which is determined by where and how AI is used in a particular context. See Part 1 - <a href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/">AI Regulation in the UK</a> for information about the UK's regulatory approach.  </p>
<p>We will explain each of the key principles in turn, noting the unavoidable interplay between the various principles and the possible harms that they seek to address.</p>
<p><strong>Safety, security and robustness</strong>.   AI systems must be safe, and should be designed and deployed, and operate throughout the lifecycle, in a manner that minimises the risk of harm to individuals and society.  These harms could be deliberate or accidental, and may affect individuals, groups, organisations or even nations, and may take various forms, such physical, psychological, social, or economic harms.        </p>
<p>AI systems must <span style="color: #0b0c0c;">be technically secure, and </span>be protected against unauthorised access, manipulation or attacks that could lead to harmful outcomes.  Security measures are crucial to protect against malicious use, such as spreading misinformation, stealing personal data or disrupting critical infrastructure.  <span style="color: #0b0c0c;">Developers should consider the security threats that could apply at each stage of the AI life cycle and embed resilience to these threats into their systems.</span></p>
<p>As for robustness, AI systems should be reliable and perform consistently and as intended under a wide range of conditions. They should be able to handle unexpected situations or inputs without failing or producing erroneous outcomes.  This includes being resilient to changes in their operating environment and being able to recover from errors or failures. </p>
<p><strong>Transparency and explainability</strong>. The principle of transparency refers to the need for information about an AI system to be communicated to relevant stakeholders.  This means that an appropriate level of information about the processes, decisions and operations of an AI system should be accessible and comprehensible, both to the developers and engineers who designed the system but also to the end users and other stakeholders who may be affected by its use.  E<span style="color: #0b0c0c;">xplainability refers to the ability to interpret and understand the decision-making processes of an AI system.  Otherwise, </span>the inability to see how deep-learning systems make decisions creates what is known as the 'black box problem'. Opacity in decision-making is problematic in several ways, including difficulties in diagnosing and fixing issues and its potential to reflect or amplify societal or dataset biases without the business deploying the AI knowing. In practice, however, explainability may be easier said than done in some cases, as the logic and decision-making in AI systems cannot always be meaningfully explained in a way that can be understood by humans.  This could involve simplifying complex models, using visualisations or providing simplified rules that approximate the model's decision making-process.</p>
<p style="margin-left: 40px;">Transparency and explainability should be proportionate to the risks that an AI system may present, but in any event is necessary to afford regulators and stakeholders sufficient visibility of, and information about, an AI system and its inputs and outputs to give effect to the other ethical principles (for example, for regulators to identify accountability and for individuals who may have been affected by an AI decision to challenge the decision and seek redress if necessary). </p>
<p><strong>Fairness</strong>.  Fairness includes identifying and mitigating biases to prevent discriminatory outcomes caused by AI systems, and ensuring the use of AI systems does not undermine the legal rights of individuals or organisations.  In order to do so, fairness needs to be considered in every aspect of AI. This would include:</p>
<ol style="list-style-type: disc;">
    <li><span>data fairness – AI systems can inadvertently learn and in turn perpetuate or amplify societal biases through biased training data or algorithmic design, and so only fair and equitable datasets should be used, or training examples should be re-weighted if required</span></li>
    <li>design fairness – using reasonable features, processes, and analytical structures in the AI architecture</li>
    <li>outcome fairness - preventing the system from having any discriminatory impact</li>
    <li>implementation fairness - implementing the system in an unbiased way.</li>
</ol>
<p>Ensuring fairness in AI also involves adhering to legal standards and ethical norms.  <span style="color: #0b0c0c;">Fairness is a concept which underpins many areas of law and regulation, such equality and human rights, data protection, consumer and competition law, and many sector-specific regulatory requirements (for instance the consumer duty and other consumer protections in the financial services sector).</span> This means that AI systems should comply with anti-discrimination laws and ethical guidelines to promote justice and prevent harm.</p>
<p style="margin: 15pt 0cm;"><strong>Accountability and governance</strong>. It is crucial to have clear lines of responsibility for the decisions made by an AI system, in order to be able to identify and hold responsible the relevant parties for the decisions and any harm or unintended consequences that arise as a result of the use of AI systems.  This is essential <span style="color: #0b0c0c;">for creating business certainty (such as allocating liability in an AI supply chain) while also ensuring regulatory compliance.  </span> </p>
<p style="margin: 15pt 0cm;">Appropriate governance frameworks should be in place to oversee the supply and use of AI systems, incorporating ethical guidelines and standards for AI development and usage.  <span style="color: #0b0c0c;">Assurance techniques such as impact assessments may assist in identifying risks early in the development life cycle, which can in turn be mitigated through appropriate safeguards and governance mechanisms.  </span></p>
<p style="margin: 15pt 0cm;"><span style="color: #0b0c0c;">Once in use, r</span>egular auditing of AI systems to ensure they operate as intended and adhere to the required standards and in compliance with the ever-changing regulations and standards can also be useful.  Engaging with a wide range of stakeholders (experts but also those potentially impacted by AI systems) can also help to shape robust ethical AI governance, identify and avoid potential ethical issues, and spot opportunities to improve ethical standards and practices in AI. Retaining a "human in the loop", by using AI in such a way that it does not replace human judgement and decision making, but rather augments it, is also vital.  </p>
<p style="margin: 15pt 0cm;">Ethical AI governance should very much be seen as an ongoing process; as AI technologies and their societal impacts evolve, governance frameworks should also adapt.  </p>
<p><strong>Contestability and redress</strong>.<strong>  </strong>This principle of contestability refers to the ability of users and affected parties to challenge and seek rectification for decisions made by AI systems that impact them.  This is particularly important when these decisions have significant consequences on people's lives.  For AI decisions to be contestable, the systems need to be transparent about how decisions are made.  Mechanisms through which challenges can be made also need to be provided. This could include user interfaces for feedback, human oversight where decisions can be reviewed and clear processes for escalating concerns.<br />
<br />
Redress involves correcting wrong decisions and where necessary providing avenues for affected individuals to seek compensation or other remedies in cases where the individual believes they have been unfairly treated by an AI system or it otherwise causes harm.  Beyond addressing individual grievances, redress also involves taking feedback and challenges to improve the AI system.  This could mean retraining models with more diverse data, adjusting algorithms to eliminate biases, or refining decision-making processes.  Effective redress mechanisms are usually supported by robust policy and legal frameworks that define the rights of individuals and the obligations of AI developers and deployers.  These frameworks can provide guidelines for the types of redress available and the procedures for seeking it.</p>
<p>The UK’s non-statutory approach to date has meant that new rights or new routes to redress have not been implemented and it's unclear whether the proposed AI Bill will include any. See <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/" target="_blank">Part 1 - AI Regulation in the UK</a> for more details. </p>
<p><strong>Moving forward</strong></p>
<p> <span>Implementing these ethical principles is complex and multifaceted, particularly in the face of a regulatory regime that is still taking shape; it requires ongoing effort, multidisciplinary collaboration and continuous evaluation and adaptation of AI systems as technology and societal norms evolve.</span></p>
<p>
</p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{F48AFDF9-64FA-4005-8FE9-B3D7E65145E3}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-6-practical-considerations/</link><title>Part 6 – Practical Considerations</title><description><![CDATA[<p style="margin-bottom: 0cm;"><em><span>This is Part 6 of 'Regulation of AI'</span></em></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>AI providers have been focussing on their forthcoming AI obligations and on governance for some time, but it is now prudent for the majority of organisations to assess how their use of AI will come within the scope of regulation in key territories, become familiar with each regime, and devise a means to keep up with anticipated changes. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>A plan for action that will be applicable for most businesses includes:</span></p>
<ul style="margin-top: 0cm; list-style-type: disc;">
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>building in compliance costs - the approach to AI regulation across jurisdictions currently appears so varied that organisations need to factor the costs of compliance into their strategy for the jurisdictions that they plan to provide or deploy AI in; </span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>implementing AI governance including systems and procedures for data retention and record keeping;</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>assessing existing product and service lines and removing or adjusting products or services that use AI in a way that is prohibited or high risk, especially in the EU;</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>identifying trusted advisors from the "noise" of what is being offered externally; and</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>building internal AI expertise including by providing training to allow individuals to perform their roles and/or use the AI system in a way that is consistent with related policies and procedures - see <a href="https://www.rpclegal.com/thinking/tech/six-steps-to-ai-literacy/"><span style="color: blue;">here</span></a> for our recommendations on training your staff on AI. </span></li>
</ul>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>In addition, AI providers should establish written policies, procedures, and instructions for various aspects of the AI system (including oversight of the system) and produce documentation explaining the technicalities of their AI model and its output. They should assess and document the likelihood and impact of any risks associated with the AI system, including in relation to privacy and security.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>Where appropriate businesses might consider using voluntary commitments in their relevant industry sector.  In December 2023, in the US, 28 healthcare companies agreed to voluntary commitments on the use and purchase of safe, secure and trustworthy AI. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>Lastly, as discussed in <a href="https://www.rpclegal.com/ai-guide/"><span style="color: blue;">Part 5 – AI regulation globally</span></a>, ISO/IEC standards (such as ISO 23894 or ISO/IEC 4200 1:2023) can be used as tools to support the safety, security and resilience of AI systems and solutions.</span></p>]]></description><pubDate>Tue, 10 Jun 2025 12:20:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Ricky Cella</authors:names><content:encoded><![CDATA[<p style="margin-bottom: 0cm;"><em><span>This is Part 6 of 'Regulation of AI'</span></em></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>AI providers have been focussing on their forthcoming AI obligations and on governance for some time, but it is now prudent for the majority of organisations to assess how their use of AI will come within the scope of regulation in key territories, become familiar with each regime, and devise a means to keep up with anticipated changes. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>A plan for action that will be applicable for most businesses includes:</span></p>
<ul style="margin-top: 0cm; list-style-type: disc;">
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>building in compliance costs - the approach to AI regulation across jurisdictions currently appears so varied that organisations need to factor the costs of compliance into their strategy for the jurisdictions that they plan to provide or deploy AI in; </span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>implementing AI governance including systems and procedures for data retention and record keeping;</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>assessing existing product and service lines and removing or adjusting products or services that use AI in a way that is prohibited or high risk, especially in the EU;</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>identifying trusted advisors from the "noise" of what is being offered externally; and</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>building internal AI expertise including by providing training to allow individuals to perform their roles and/or use the AI system in a way that is consistent with related policies and procedures - see <a href="https://www.rpclegal.com/thinking/tech/six-steps-to-ai-literacy/"><span style="color: blue;">here</span></a> for our recommendations on training your staff on AI. </span></li>
</ul>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>In addition, AI providers should establish written policies, procedures, and instructions for various aspects of the AI system (including oversight of the system) and produce documentation explaining the technicalities of their AI model and its output. They should assess and document the likelihood and impact of any risks associated with the AI system, including in relation to privacy and security.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>Where appropriate businesses might consider using voluntary commitments in their relevant industry sector.  In December 2023, in the US, 28 healthcare companies agreed to voluntary commitments on the use and purchase of safe, secure and trustworthy AI. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>Lastly, as discussed in <a href="https://www.rpclegal.com/ai-guide/"><span style="color: blue;">Part 5 – AI regulation globally</span></a>, ISO/IEC standards (such as ISO 23894 or ISO/IEC 4200 1:2023) can be used as tools to support the safety, security and resilience of AI systems and solutions.</span></p>]]></content:encoded></item><item><guid isPermaLink="false">{5A0FF5DB-9108-43D7-BF7F-E743DCACC484}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/</link><title>Part 5 – AI Regulation Globally</title><description><![CDATA[<div>
<p style="margin-bottom: 0cm;"><span>This is Part 5 of 'Regulation of AI'</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>There have been various initiatives for countries around the world to cooperate on AI regulation, including knowledge sharing and securing commitments from tech providers.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><strong><span>International agreements</span></strong></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>Currently, there is only one legally-binding international treaty – the Council of Europe's convention on AI. This treaty, signed by the US, EU and UK on 5 September 2024, creates a common framework for AI systems with three over-arching safeguards:</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<ul style="margin-top: 0cm; list-style-type: disc;">
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>protecting democracy by ensuring countries take steps to prevent public institutions and processes being undermined</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect its citizens from potential harms and ensure it is used safely</span></li>
</ul>
<p style="margin-bottom: 0cm;"><strong><span> </span></strong></p>
<p style="margin-bottom: 0cm;"><span>On 30 October 2023 the G7 published its international guiding principles on AI, in addition to a voluntary code of conduct for AI developers. The G7 principles are a non-exhaustive list of guiding principles aimed at promoting safe, secure and trustworthy AI and are intended to build on the OECD's AI Principles, adopted back in May 2019.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><strong><span>Global summits</span></strong></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>There have been three global AI summits. In November 2023, the UK Government hosted the first – titled the AI Safety Summit. The summit brought together representatives from governments, AI companies, research experts and civil society groups from across the globe, with the stated aims of considering the risk of AI and discussing how they can be mitigated through internationally co-ordinated action. One output from the UK's AI Safety Summit was the Bletchley Declaration focused on international collaboration on identifying AI safety risks and creating risk-based policies to address such risks. Another output was an agreement between senior government representatives from leading AI nations and major AI developers and organisations (including Meta, Google DeepMind and OpenAI) to a plan for safety testing of frontier AI models. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>The second global AI summit was held in Seoul in May 2024. The institutes of 10 countries and the EU signed the Seoul Declaration with commitments to cooperate more between themselves and via organisations such as the UN, G7, G20 and OECD, while sixteen AI firms made voluntary safety commitments. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>In February 2025, the AI Action Summit was held in Paris. Over 1000 participants from over 100 countries attended the summit which focused on the key themes of inclusive and environmentally-sustainable AI. The Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet was signed by 60 countries but not the US or UK.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><strong><span>Standards</span></strong></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>AI-related standards have been published by the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IE). In its response to the white paper, the UK government mentions specifically the importance of engaging with global standards development organisations such as the ISO and IEC. The most prominent AI ISO/IEC standards are:</span></p>
<ol style="margin-top: 0cm;">
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC TR 24028:2020 that analyses the factors that can impact the trustworthiness of systems providing or using AI </span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC TR 24368:2022 on the ethical and societal concerns surrounding AI</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC 23894:2023 which offers strategic guidance to organisations across all sectors for managing risks connected to the development and use of AI. It also provides guidance on how organisations can integrate risk management into their AI-driven activities and business functions</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC 42001:2023 which specifies requirements for establishing, implementing, maintaining, and continually improving AI management systems within organisations</span></li>
</ol>
<p style="margin: 0cm 0cm 0cm 36pt;"><span> </span></p>
<p style="margin-top: 0cm; margin-bottom: 0cm;"><span>In addition, relevant standards have also been published by: (i) the British Standards Institution including PD CEN/CLC TR 18145:2025 which provides guidance on sustainable AI technologies; and (ii) the Institute of Electrical and Electronic Engineers including IEEE 3119-2025 on Procurement of AI and Automated Decision Systems. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
</div>]]></description><pubDate>Tue, 10 Jun 2025 11:46:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Ricky Cella</authors:names><content:encoded><![CDATA[<div>
<p style="margin-bottom: 0cm;"><span>This is Part 5 of 'Regulation of AI'</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>There have been various initiatives for countries around the world to cooperate on AI regulation, including knowledge sharing and securing commitments from tech providers.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><strong><span>International agreements</span></strong></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>Currently, there is only one legally-binding international treaty – the Council of Europe's convention on AI. This treaty, signed by the US, EU and UK on 5 September 2024, creates a common framework for AI systems with three over-arching safeguards:</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<ul style="margin-top: 0cm; list-style-type: disc;">
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>protecting democracy by ensuring countries take steps to prevent public institutions and processes being undermined</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect its citizens from potential harms and ensure it is used safely</span></li>
</ul>
<p style="margin-bottom: 0cm;"><strong><span> </span></strong></p>
<p style="margin-bottom: 0cm;"><span>On 30 October 2023 the G7 published its international guiding principles on AI, in addition to a voluntary code of conduct for AI developers. The G7 principles are a non-exhaustive list of guiding principles aimed at promoting safe, secure and trustworthy AI and are intended to build on the OECD's AI Principles, adopted back in May 2019.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><strong><span>Global summits</span></strong></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>There have been three global AI summits. In November 2023, the UK Government hosted the first – titled the AI Safety Summit. The summit brought together representatives from governments, AI companies, research experts and civil society groups from across the globe, with the stated aims of considering the risk of AI and discussing how they can be mitigated through internationally co-ordinated action. One output from the UK's AI Safety Summit was the Bletchley Declaration focused on international collaboration on identifying AI safety risks and creating risk-based policies to address such risks. Another output was an agreement between senior government representatives from leading AI nations and major AI developers and organisations (including Meta, Google DeepMind and OpenAI) to a plan for safety testing of frontier AI models. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>The second global AI summit was held in Seoul in May 2024. The institutes of 10 countries and the EU signed the Seoul Declaration with commitments to cooperate more between themselves and via organisations such as the UN, G7, G20 and OECD, while sixteen AI firms made voluntary safety commitments. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>In February 2025, the AI Action Summit was held in Paris. Over 1000 participants from over 100 countries attended the summit which focused on the key themes of inclusive and environmentally-sustainable AI. The Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet was signed by 60 countries but not the US or UK.</span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><strong><span>Standards</span></strong></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span>AI-related standards have been published by the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IE). In its response to the white paper, the UK government mentions specifically the importance of engaging with global standards development organisations such as the ISO and IEC. The most prominent AI ISO/IEC standards are:</span></p>
<ol style="margin-top: 0cm;">
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC TR 24028:2020 that analyses the factors that can impact the trustworthiness of systems providing or using AI </span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC TR 24368:2022 on the ethical and societal concerns surrounding AI</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC 23894:2023 which offers strategic guidance to organisations across all sectors for managing risks connected to the development and use of AI. It also provides guidance on how organisations can integrate risk management into their AI-driven activities and business functions</span></li>
    <li style="margin-top: 0cm; margin-bottom: 0cm;"><span>ISO/IEC 42001:2023 which specifies requirements for establishing, implementing, maintaining, and continually improving AI management systems within organisations</span></li>
</ol>
<p style="margin: 0cm 0cm 0cm 36pt;"><span> </span></p>
<p style="margin-top: 0cm; margin-bottom: 0cm;"><span>In addition, relevant standards have also been published by: (i) the British Standards Institution including PD CEN/CLC TR 18145:2025 which provides guidance on sustainable AI technologies; and (ii) the Institute of Electrical and Electronic Engineers including IEEE 3119-2025 on Procurement of AI and Automated Decision Systems. </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
<p style="margin-bottom: 0cm;"><span> </span></p>
</div>]]></content:encoded></item><item><guid isPermaLink="false">{09874A0F-4596-455F-A9E1-1F7F02303AF8}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/ai-as-a-service-key-issues/</link><title>AI-as-a-service – key issues</title><description><![CDATA[<p>Artificial Intelligence-as-a-Service (AIaaS), in the same vein as Software-as-a-Service and Infrastructure-as-a-Service, refers to cloud-based tools that allow businesses to gain access to an AI model hosted by a third party provider. As developing many AI tools from scratch is prohibitively expensive, the majority of AI solutions procured by businesses will involve some element of "as-a-Service" (i.e. using models built and hosted by third parties) although the extent of development and configuration overlaid on that will differ.  We have considered the commercial issues involved when procuring such AI solutions in <a href="/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/">Procuring AI – Commercial Considerations Checklist</a>. In this section we consider the issues that arise when procuring off-the-shelf AI tools typically provided by third parties on a one-to-many subscription-based model.</p>
<p><strong>Types of AIaaS</strong></p>
<p>The most common types of AIaaS are:</p>
<ul>
    <li><strong>Chatbots</strong>. Prior to the advancements in AI, chatbots provided answers to a predefined set of questions. However, chatbots are now powered by LLMs and, using natural language processing, can closely mimic actual human speech and deal with a wider array of issues.</li>
    <li><strong>Application Programming Interface (APIs)</strong>. APIs create a communication pathway between the AI model and an organisation's internal systems.  For example, a computer vision API could give existing software access to the ability to process and analyse images.</li>
    <li><strong>Machine Learning (ML)</strong>. Access to ML frameworks allows businesses to use ML to analyse big data, identify trends and make predictions. For example, an on-demand video provider may use ML to serve a consumer with film and tv recommendations based on their previous viewing habits.</li>
</ul>
<p><strong>Benefits of AIaaS</strong></p>
<p>There are several clear benefits to using AIaaS. Firstly, pre-trained, off-the-shelf solutions are far more affordable than any solution that requires significant development work. A team of data scientists and engineers to build and maintain the solution are not required, nor is the purchase of extensive processing power to run a model in-house. Secondly, AIaaS avoids the need to invest significant time and effort to train any model, including training by people fine-tuning the solution. A basic service could theoretically be used 'out of the box' with minimal configuration. Lastly, AIaaS solutions (like most cloud-based solutions) are flexible, scalable, and designed to be integrated with standard enterprise IT systems. Customers have the option to turn features on or off and to scale up or down according to usage. </p>
<p><strong>Key issues when contracting for AIaaS</strong></p>
<p>Many of the contracting issues that apply to SaaS services generally will also apply in respect of AIaaS. However, several take on a new dimension due to the nature of AI and the current market practice around AIaaS.</p>
<ul>
    <li><strong>Implementation</strong>. Many tools will be advertised as 'out of the box'. That said, consider if any additional training or configuration work is required for the tool to operate as required. Much will also depend on the system architecture that the AIaaS will be used with. AIaaS, being relatively new tech, may require work to be integrated with legacy IT systems. </li>
    <li><strong>Usage obligations</strong>. In a normal SaaS context, customers would need to comply with obligations around usage (e.g. acceptable use codes) and businesses could mitigate this risk internally through employee policies and training.  AIaaS providers also require customers to comply with acceptable use codes however some go further than others.  Many AIaaS terms require businesses to ensure that they do not <em>generate</em> prohibited content using the service. This is a harder risk to mitigate - customers could set safety filters and policies around user prompts but it's impossible to predict what exactly would be generated. Many AIaaS tools also contain technical usage restrictions as generative AI is extremely resource intensive. If usage exceeds those limits the provider can block access to the service.   </li>
    <li><strong>Service performance</strong>. Does the provider offer clear assurances as to minimum standards for the service? Some AIaaS products are currently being sold in "preview" mode so do not include fixed service levels. Or providers may afford assurances for some performance standards (e.g. availability) but not others which may be less certain to them (e.g. response time). Consider therefore if this is sufficient for the intended use case – perhaps it's prudent to only experiment with the service rather than use it for any critical business purpose.  </li>
    <li><strong>Liability</strong>. SaaS products are typically provided 'as is' with limited warranties and liability on the provider. AIaaS is no different, and although the market practice here is still unclear it seems so far that generally speaking, the warranties and liability caps proffered by providers give customers less protection than in many SaaS arrangements. It has also been well publicised that large AIaaS providers currently provide customers with an indemnity against third party IP rights infringement. However, many of these are capped and only apply to their in-house models. Early adopters might be able to negotiate better bespoke terms with providers.</li>
    <li><strong>Pricing</strong>. SaaS is typically priced as pay-as-you-use which is convenient but poses the risk of unforeseen costs if an organisation does not have robust internal governance as to usage. This risk is exacerbated for AIaaS because of how new and untested the tech might be for a business. For example, there are various pricing models for AI chatbots but most charge per prompt.  If chatbots are new to a business, it may be difficult to accurately estimate how many prompts would be needed. Accidental overage may also trigger penalties. Similarly, an unexpected need for the provider to render additional professional services (e.g. to deploy or configure the service) may arise.  </li>
    <li><strong>Regulatory compliance</strong>. There is typically a lack of transparency with SaaS - the underlying infrastructure and processes are not generally made available to the customer. However, a key principle in AI regulation (including under data protection laws) is explainability and transparency i.e. understanding how the model works and relaying this to end users (see also <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/" target="_blank">AI and Ethics – the Digital Dilemma</a>). Ensure sufficient documentation is given by the provider to be able to comply with regulatory obligations.</li>
    <li><strong>Security</strong>. Cyber security is a known issue for SaaS. However, the sheer volumes of data sent to AIaaS and potentially stored offshore significantly increase the security risk. AI has also resulted in new types of security attacks (see <a href="/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/">Procuring AI – Commercial Considerations Checklist</a>) and threat actors will take advantage of businesses' delay to implement measures that keep up with these developments. Complying with the most up-to-date security standards and frameworks is one way to lower these risks.</li>
</ul>
<p> </p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></description><pubDate>Tue, 10 Jun 2025 11:38:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Paul Joukador, Tom James</authors:names><content:encoded><![CDATA[<p>Artificial Intelligence-as-a-Service (AIaaS), in the same vein as Software-as-a-Service and Infrastructure-as-a-Service, refers to cloud-based tools that allow businesses to gain access to an AI model hosted by a third party provider. As developing many AI tools from scratch is prohibitively expensive, the majority of AI solutions procured by businesses will involve some element of "as-a-Service" (i.e. using models built and hosted by third parties) although the extent of development and configuration overlaid on that will differ.  We have considered the commercial issues involved when procuring such AI solutions in <a href="/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/">Procuring AI – Commercial Considerations Checklist</a>. In this section we consider the issues that arise when procuring off-the-shelf AI tools typically provided by third parties on a one-to-many subscription-based model.</p>
<p><strong>Types of AIaaS</strong></p>
<p>The most common types of AIaaS are:</p>
<ul>
    <li><strong>Chatbots</strong>. Prior to the advancements in AI, chatbots provided answers to a predefined set of questions. However, chatbots are now powered by LLMs and, using natural language processing, can closely mimic actual human speech and deal with a wider array of issues.</li>
    <li><strong>Application Programming Interface (APIs)</strong>. APIs create a communication pathway between the AI model and an organisation's internal systems.  For example, a computer vision API could give existing software access to the ability to process and analyse images.</li>
    <li><strong>Machine Learning (ML)</strong>. Access to ML frameworks allows businesses to use ML to analyse big data, identify trends and make predictions. For example, an on-demand video provider may use ML to serve a consumer with film and tv recommendations based on their previous viewing habits.</li>
</ul>
<p><strong>Benefits of AIaaS</strong></p>
<p>There are several clear benefits to using AIaaS. Firstly, pre-trained, off-the-shelf solutions are far more affordable than any solution that requires significant development work. A team of data scientists and engineers to build and maintain the solution are not required, nor is the purchase of extensive processing power to run a model in-house. Secondly, AIaaS avoids the need to invest significant time and effort to train any model, including training by people fine-tuning the solution. A basic service could theoretically be used 'out of the box' with minimal configuration. Lastly, AIaaS solutions (like most cloud-based solutions) are flexible, scalable, and designed to be integrated with standard enterprise IT systems. Customers have the option to turn features on or off and to scale up or down according to usage. </p>
<p><strong>Key issues when contracting for AIaaS</strong></p>
<p>Many of the contracting issues that apply to SaaS services generally will also apply in respect of AIaaS. However, several take on a new dimension due to the nature of AI and the current market practice around AIaaS.</p>
<ul>
    <li><strong>Implementation</strong>. Many tools will be advertised as 'out of the box'. That said, consider if any additional training or configuration work is required for the tool to operate as required. Much will also depend on the system architecture that the AIaaS will be used with. AIaaS, being relatively new tech, may require work to be integrated with legacy IT systems. </li>
    <li><strong>Usage obligations</strong>. In a normal SaaS context, customers would need to comply with obligations around usage (e.g. acceptable use codes) and businesses could mitigate this risk internally through employee policies and training.  AIaaS providers also require customers to comply with acceptable use codes however some go further than others.  Many AIaaS terms require businesses to ensure that they do not <em>generate</em> prohibited content using the service. This is a harder risk to mitigate - customers could set safety filters and policies around user prompts but it's impossible to predict what exactly would be generated. Many AIaaS tools also contain technical usage restrictions as generative AI is extremely resource intensive. If usage exceeds those limits the provider can block access to the service.   </li>
    <li><strong>Service performance</strong>. Does the provider offer clear assurances as to minimum standards for the service? Some AIaaS products are currently being sold in "preview" mode so do not include fixed service levels. Or providers may afford assurances for some performance standards (e.g. availability) but not others which may be less certain to them (e.g. response time). Consider therefore if this is sufficient for the intended use case – perhaps it's prudent to only experiment with the service rather than use it for any critical business purpose.  </li>
    <li><strong>Liability</strong>. SaaS products are typically provided 'as is' with limited warranties and liability on the provider. AIaaS is no different, and although the market practice here is still unclear it seems so far that generally speaking, the warranties and liability caps proffered by providers give customers less protection than in many SaaS arrangements. It has also been well publicised that large AIaaS providers currently provide customers with an indemnity against third party IP rights infringement. However, many of these are capped and only apply to their in-house models. Early adopters might be able to negotiate better bespoke terms with providers.</li>
    <li><strong>Pricing</strong>. SaaS is typically priced as pay-as-you-use which is convenient but poses the risk of unforeseen costs if an organisation does not have robust internal governance as to usage. This risk is exacerbated for AIaaS because of how new and untested the tech might be for a business. For example, there are various pricing models for AI chatbots but most charge per prompt.  If chatbots are new to a business, it may be difficult to accurately estimate how many prompts would be needed. Accidental overage may also trigger penalties. Similarly, an unexpected need for the provider to render additional professional services (e.g. to deploy or configure the service) may arise.  </li>
    <li><strong>Regulatory compliance</strong>. There is typically a lack of transparency with SaaS - the underlying infrastructure and processes are not generally made available to the customer. However, a key principle in AI regulation (including under data protection laws) is explainability and transparency i.e. understanding how the model works and relaying this to end users (see also <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/" target="_blank">AI and Ethics – the Digital Dilemma</a>). Ensure sufficient documentation is given by the provider to be able to comply with regulatory obligations.</li>
    <li><strong>Security</strong>. Cyber security is a known issue for SaaS. However, the sheer volumes of data sent to AIaaS and potentially stored offshore significantly increase the security risk. AI has also resulted in new types of security attacks (see <a href="/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/">Procuring AI – Commercial Considerations Checklist</a>) and threat actors will take advantage of businesses' delay to implement measures that keep up with these developments. Complying with the most up-to-date security standards and frameworks is one way to lower these risks.</li>
</ul>
<p> </p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{F3D8121E-D0D1-4D8F-BD6D-EB1A54C1C98F}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/procuring-ai-commercial-considerations-checklist/</link><title>Procuring AI – commercial considerations checklist</title><description><![CDATA[<p>Many companies will no doubt be considering using AI within their business to take advantage of the massive opportunities for increased productivity and cost efficiencies promised. In this section, we set out the key issues a company will need to consider before procuring an AI-powered solution from a provider, assuming this is a relatively complex solution which requires customisation and deployment by the provider. Please see <a href="/thinking/artificial-intelligence/ai-guide/ai-as-a-service-key-issues/">AI-as-a-Service – Key Issues</a> if you are using simple off-the-shelf AI solutions. </p>
<p><strong>1.<span> </span>Create and comply with company AI use policies</strong></p>
<p>It is advisable to develop clear policies and training around the development and use of AI across the business. For example, do you have strong data governance processes around the access, storage, and transfer of the data sets used to ensure data quality, integrity and accuracy? And do you have a robust use case testing procedure to ensure that AI is only used where appropriate, given the potential harms and risks associated with its use?  Of course, policies only work to the extent you monitor compliance with them regularly.</p>
<p><strong>2.<span> </span>Define your strategy and budget</strong></p>
<p>A focused strategy is particularly important when procuring an AI solution. AI is impressive but many AI-powered solutions have not been tested on a large scale. Any AI solution you procure will likely need significant on-boarding and fine-tuning for it to work as planned. </p>
<p>For this reason, consider how exactly you intend to integrate the AI solution in your business and where there are likely to be opportunities to maximise benefits in the short term, and how the AI can be leveraged in the longer term. Be clear on your available budget for this project and factor in sufficient time, money and human resource for the procurement, deployment and on-going training and maintenance of the solution. If the AI solution works as planned, is the anticipated return on investment acceptable?  </p>
<p><strong>3.<span> </span>Scope your requirements</strong></p>
<p>Unlike a standard IT procurement, you may not be able to define waterfall-style specifications for your intended solution. Many AI solutions will need a period of iterative experimentation, training and tweaking before the functionality is properly developed. Furthermore, you would not want to blinker yourself into working off narrow requirements when there might be greater opportunities available. Instead, develop problem statements, challenges, opportunities and use cases that you intend the AI solution to address. </p>
<p>You should also assess the relevant data sets that you intend to use for the project. Where is the data sourced from? Do you have a licence to use the data for the project or might you be breaching confidentiality restrictions? Do the data sets include personal data (see <a href="/thinking/artificial-intelligence/ai-guide/ai-and-privacy-10-questions-to-ask/">AI and Privacy – 10 Questions to Ask</a> for further guidance)? Are there any limitations (e.g. quality) to this data that need to be addressed first?  On what basis will you share data with the provider for use on the AI system (if any)? Can you use synthetic or anonymised data for training purposes to avoid issues with the data or to fill data gaps?</p>
<p>Conduct an initial impact assessment to determine the key risks of the solution on your business and whether these can be mitigated. Will the AI be used with other software, and if so, do you have the appropriate rights and licences to do so from the relevant third party software suppliers? Is the use and/or commercial parameters of the other software still appropriate when being used with the AI given, amongst other things, automated processing? Will your insurance cover apply where AI is used?</p>
<p><strong>4.<span> </span>Upskill your teams</strong></p>
<p>Any successful AI project requires that customer and provider teams work cooperatively to develop the solution. You will need to create your own multidisciplinary team of experts to advise you through the procurement lifecycle including legal and commercial experts, technologists, data and systems engineers, and ethics advisors. Consider what training you might need to get these teams up to speed with AI developments and considerations – see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/tech/six-steps-to-ai-literacy/" target="_blank">here</a> for more on AI literacy. A diverse team is recommended to minimise blind spots and unintended bias. </p>
<p><strong>5.<span> </span>Choose a provider</strong></p>
<p>It is critical to do your homework on the provider and the AI product in question and not simply be swayed by the sales pitch. Over the next months and years we will see AI solutions and providers consolidate, and some others fail, so it's important not to back the wrong horse. Review any potential provider's project team to ensure they are diverse, multidisciplinary, and have the right skills and qualifications. Alternatively, is the best way to source the AI solution through a combination of providers and, if so, have you considered the integration risk between these various systems?</p>
<p><strong>6.<span> </span>Key contract issues</strong></p>
<p>Consider the following key issues when contracting with your chosen provider. </p>
<p><strong>Contract structure</strong>. Consider structuring the contract in phases to allow for discovery and development. A 3-6 month pilot phase (longer or shorter depending on the complexity of the project) may be appropriate. Ensure that the overall contract is aimed towards flexibility, the ability to change, and iterative product development.   </p>
<p><strong>Use of data</strong>. The contract should specify the customer data sets that the provider will have access to, and what the provider can do with such data. Do you need the provider to remedy any limitations with the data before using it? Will the solution have access to the internet or will it be a "walled garden"? Will your data be used to train the model generally for the benefit of the provider's other customers? Will you benefit from any new fine-tuning or user feedback data the provider applies to its model generally? </p>
<p><strong>Training the model</strong>. How will the parties train the AI solution? Consider a governance framework that sets out the parties' responsibilities for each aspect of the training. How long will the solution need to be trained before it can go live? Will the provider continue to train the solution on a regular basis post go-live?</p>
<p><strong>Intellectual Property</strong>. General purpose LLMs will have been trained using data obtained from web scraping, the IP implications of which are still being debated (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/generative-ai-addressing-copyright/" target="_blank">Generative AI - Addressing Copyright</a>). The provider may also carry out further training based on owned or licensed data sets. Seek warranty and indemnity protection that your use of the solution and any output does not infringe third party IP rights. Consider also if you need to own the IP in any output and if the provider should be required to assign the IP in such material to you. Note, however, that the legal position on copyright in AI-generated works is still unclear (see [<a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/generative-ai-addressing-copyright/" target="_blank">Generative AI - Addressing Copyright</a>]).   </p>
<p><strong>Testing and acceptance</strong>. Prior to go live, the provider should be required to test the AI solution's functionality within the customer's IT environment. Seek to include specific metrics to determine when an AI system is ready to go live. Consider what tools are available to confirm that, under the bonnet, the AI is also working as intended. This might be through audit rights or verification tools offered by your provider or the right to have an independent review or by using other third party tools to make that assessment. In any case, repeated testing and validation will need to be an ongoing process that continues through implementation.</p>
<p><strong>Performance of the solution</strong>. Consider the service standards and service levels you require of the AI solution and endeavour to make these objective and quantifiable. Providers should be required to comply with internationally-recognised standards on AI systems, for example, ISO/IEC 42001 that provides a certifiable AI management system framework focused on strong AI governance (see also <a href="/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/">Part 5 - AI Regulation Globally</a> for more on international standards).  The contract should also include an agreed process for the provider to investigate and fix errors and hallucinations. Output should be tested for discrimination and unfairness, and to ensure that the tool and outputs comply with ethical principles – see <a href="/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/">The Ethics of AI – The Digital Dilemma</a>. </p>
<p>Over time, service levels and standards should increase as the model improves. Similarly, any concept of "Good Industry Practice" (and an obligation on the provider to comply with it) will evolve as the industry adapts to the new technology.  Benchmarking by third parties will be important to periodically confirm that the AI solution remains comparable to other models. Considering the breakneck speed at which AI is developing, the provider should be required to continually improve its solution and ensure it is state of the art. Consider how new updates will be applied to your solution.   </p>
<p><strong>Collaboration and governance</strong>. The success of the project will depend on whether the parties are able to work openly and collaboratively whilst understanding their respective roles and responsibilities. AI solutions learn from humans (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/what-is-a-foundational-model/" target="_blank">here</a> for more on reinforced learning from human feedback) so there should be a process for the provider to gather feedback from the end users so that the solution improves. Regular governance meetings (more frequent during the development phase) are also crucial to ensuring that the project stays on track and that issues are dealt with early and swiftly. The provider should also be required to implement and maintain appropriate policies and processes to ensure that the AI system operates responsibly, ethically and in compliance with the law. </p>
<p><strong>Explainability and transparency</strong>. You must be able to explain how your AI system works as you will need to demonstrate that AI is used responsibly and appropriately (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/" target="_blank">AI and Ethics – The Digital Dilemma</a> for more on explainability). For an AI system to be explainable, you will need the provider to provide you with detailed information on:</p>
<ul>
    <li>how the AI works</li>
    <li>how it has been trained and the sources of training data </li>
    <li>how it was tested</li>
    <li>how it has been designed to be fair</li>
    <li>the logic behind the output of the AI. </li>
</ul>
<p>Records should be produced of how the AI system was developed including the parameters that represent the model's learnt 'knowledge' used to produce the intended output. The solution should also be designed to log information regarding how decisions are made, so that these can be verified and explained if necessary. This information will allow you to meet your transparency obligations under law (e.g. to your data subjects and see here for more on data protection considerations). </p>
<p><strong>Risk allocation</strong>. At this stage, the market is still getting to grips with AI and has yet to develop established positions on risk allocation. AI providers are building their customer base and working to make their offering more attractive. For this reason, customers who contract early may be in a stronger position when negotiating liability clauses under their contracts. In any case, risk allocation will depend on the commercials and each party's role and responsibilities in relation to the project. For example, a customer is unlikely to get blanket indemnities from the supplier for third party IP infringement claims if a significant portion of the training is carried out by the customer using the customer's own data.  </p>
<p><strong>Security</strong>. Aside from the standard security issues that would arise in any tech procurement (e.g. data encryption, user controls etc), there are certain security threats which are novel to AI systems. These include prompt injection (bad actors instructing the model to perform actions you don't intend), prompt stealing (accessing a user's prompt), model inversion attacks (attempting to recover the dataset used to train a model), membership inference attacks (attempting to determine if certain information was used to train a model), and data poisoning (tampering with data sets). The provider must ensure that its solution is appropriately secure against known threats and have robust processes to address future threats. At the very least, solutions should comply with generally-accepted standards on cyber security, for example, the National Cyber Security Centre's Guidelines for Secure AI System Development – see (see also <a href="/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/">Part 5 – AI Regulation globally</a> for more on standards). There are also specialist escrow providers with whom you may store input data, algorithms, and AI applications to ensure that the solution remains available should the provider unexpectedly cease operations. </p>
<p><strong>Compliance</strong>. Consider who bears the risk and the cost of compliance with AI regulations as they change from time to time. New regulations may result in significant changes needing to be made to the solution, potentially down to the underlying infrastructure level. A regulatory change control procedure should also be included to set out a process by which the parties may agree and implement required changes and allocate the costs of the same.  </p>
<p><strong>Exit</strong>. Although not particular to AI solutions, vendor lock-in is a risk that is heightened when procuring AI due to its complexity. You minimise this risk if you and your potential replacement providers are able to understand how the solution works. The incumbent provider should be required to train your personnel and ensure knowledge transfer over the lifecycle of the project. At the outset, consider the interoperability of the solution you procure with other suppliers' models, software, and systems. Consider also the data you will need to migrate the solution to a replacement provider upon exit from the contract.</p>
<p> </p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></description><pubDate>Tue, 10 Jun 2025 11:37:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Paul Joukador</authors:names><content:encoded><![CDATA[<p>Many companies will no doubt be considering using AI within their business to take advantage of the massive opportunities for increased productivity and cost efficiencies promised. In this section, we set out the key issues a company will need to consider before procuring an AI-powered solution from a provider, assuming this is a relatively complex solution which requires customisation and deployment by the provider. Please see <a href="/thinking/artificial-intelligence/ai-guide/ai-as-a-service-key-issues/">AI-as-a-Service – Key Issues</a> if you are using simple off-the-shelf AI solutions. </p>
<p><strong>1.<span> </span>Create and comply with company AI use policies</strong></p>
<p>It is advisable to develop clear policies and training around the development and use of AI across the business. For example, do you have strong data governance processes around the access, storage, and transfer of the data sets used to ensure data quality, integrity and accuracy? And do you have a robust use case testing procedure to ensure that AI is only used where appropriate, given the potential harms and risks associated with its use?  Of course, policies only work to the extent you monitor compliance with them regularly.</p>
<p><strong>2.<span> </span>Define your strategy and budget</strong></p>
<p>A focused strategy is particularly important when procuring an AI solution. AI is impressive but many AI-powered solutions have not been tested on a large scale. Any AI solution you procure will likely need significant on-boarding and fine-tuning for it to work as planned. </p>
<p>For this reason, consider how exactly you intend to integrate the AI solution in your business and where there are likely to be opportunities to maximise benefits in the short term, and how the AI can be leveraged in the longer term. Be clear on your available budget for this project and factor in sufficient time, money and human resource for the procurement, deployment and on-going training and maintenance of the solution. If the AI solution works as planned, is the anticipated return on investment acceptable?  </p>
<p><strong>3.<span> </span>Scope your requirements</strong></p>
<p>Unlike a standard IT procurement, you may not be able to define waterfall-style specifications for your intended solution. Many AI solutions will need a period of iterative experimentation, training and tweaking before the functionality is properly developed. Furthermore, you would not want to blinker yourself into working off narrow requirements when there might be greater opportunities available. Instead, develop problem statements, challenges, opportunities and use cases that you intend the AI solution to address. </p>
<p>You should also assess the relevant data sets that you intend to use for the project. Where is the data sourced from? Do you have a licence to use the data for the project or might you be breaching confidentiality restrictions? Do the data sets include personal data (see <a href="/thinking/artificial-intelligence/ai-guide/ai-and-privacy-10-questions-to-ask/">AI and Privacy – 10 Questions to Ask</a> for further guidance)? Are there any limitations (e.g. quality) to this data that need to be addressed first?  On what basis will you share data with the provider for use on the AI system (if any)? Can you use synthetic or anonymised data for training purposes to avoid issues with the data or to fill data gaps?</p>
<p>Conduct an initial impact assessment to determine the key risks of the solution on your business and whether these can be mitigated. Will the AI be used with other software, and if so, do you have the appropriate rights and licences to do so from the relevant third party software suppliers? Is the use and/or commercial parameters of the other software still appropriate when being used with the AI given, amongst other things, automated processing? Will your insurance cover apply where AI is used?</p>
<p><strong>4.<span> </span>Upskill your teams</strong></p>
<p>Any successful AI project requires that customer and provider teams work cooperatively to develop the solution. You will need to create your own multidisciplinary team of experts to advise you through the procurement lifecycle including legal and commercial experts, technologists, data and systems engineers, and ethics advisors. Consider what training you might need to get these teams up to speed with AI developments and considerations – see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/tech/six-steps-to-ai-literacy/" target="_blank">here</a> for more on AI literacy. A diverse team is recommended to minimise blind spots and unintended bias. </p>
<p><strong>5.<span> </span>Choose a provider</strong></p>
<p>It is critical to do your homework on the provider and the AI product in question and not simply be swayed by the sales pitch. Over the next months and years we will see AI solutions and providers consolidate, and some others fail, so it's important not to back the wrong horse. Review any potential provider's project team to ensure they are diverse, multidisciplinary, and have the right skills and qualifications. Alternatively, is the best way to source the AI solution through a combination of providers and, if so, have you considered the integration risk between these various systems?</p>
<p><strong>6.<span> </span>Key contract issues</strong></p>
<p>Consider the following key issues when contracting with your chosen provider. </p>
<p><strong>Contract structure</strong>. Consider structuring the contract in phases to allow for discovery and development. A 3-6 month pilot phase (longer or shorter depending on the complexity of the project) may be appropriate. Ensure that the overall contract is aimed towards flexibility, the ability to change, and iterative product development.   </p>
<p><strong>Use of data</strong>. The contract should specify the customer data sets that the provider will have access to, and what the provider can do with such data. Do you need the provider to remedy any limitations with the data before using it? Will the solution have access to the internet or will it be a "walled garden"? Will your data be used to train the model generally for the benefit of the provider's other customers? Will you benefit from any new fine-tuning or user feedback data the provider applies to its model generally? </p>
<p><strong>Training the model</strong>. How will the parties train the AI solution? Consider a governance framework that sets out the parties' responsibilities for each aspect of the training. How long will the solution need to be trained before it can go live? Will the provider continue to train the solution on a regular basis post go-live?</p>
<p><strong>Intellectual Property</strong>. General purpose LLMs will have been trained using data obtained from web scraping, the IP implications of which are still being debated (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/generative-ai-addressing-copyright/" target="_blank">Generative AI - Addressing Copyright</a>). The provider may also carry out further training based on owned or licensed data sets. Seek warranty and indemnity protection that your use of the solution and any output does not infringe third party IP rights. Consider also if you need to own the IP in any output and if the provider should be required to assign the IP in such material to you. Note, however, that the legal position on copyright in AI-generated works is still unclear (see [<a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/generative-ai-addressing-copyright/" target="_blank">Generative AI - Addressing Copyright</a>]).   </p>
<p><strong>Testing and acceptance</strong>. Prior to go live, the provider should be required to test the AI solution's functionality within the customer's IT environment. Seek to include specific metrics to determine when an AI system is ready to go live. Consider what tools are available to confirm that, under the bonnet, the AI is also working as intended. This might be through audit rights or verification tools offered by your provider or the right to have an independent review or by using other third party tools to make that assessment. In any case, repeated testing and validation will need to be an ongoing process that continues through implementation.</p>
<p><strong>Performance of the solution</strong>. Consider the service standards and service levels you require of the AI solution and endeavour to make these objective and quantifiable. Providers should be required to comply with internationally-recognised standards on AI systems, for example, ISO/IEC 42001 that provides a certifiable AI management system framework focused on strong AI governance (see also <a href="/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/">Part 5 - AI Regulation Globally</a> for more on international standards).  The contract should also include an agreed process for the provider to investigate and fix errors and hallucinations. Output should be tested for discrimination and unfairness, and to ensure that the tool and outputs comply with ethical principles – see <a href="/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/">The Ethics of AI – The Digital Dilemma</a>. </p>
<p>Over time, service levels and standards should increase as the model improves. Similarly, any concept of "Good Industry Practice" (and an obligation on the provider to comply with it) will evolve as the industry adapts to the new technology.  Benchmarking by third parties will be important to periodically confirm that the AI solution remains comparable to other models. Considering the breakneck speed at which AI is developing, the provider should be required to continually improve its solution and ensure it is state of the art. Consider how new updates will be applied to your solution.   </p>
<p><strong>Collaboration and governance</strong>. The success of the project will depend on whether the parties are able to work openly and collaboratively whilst understanding their respective roles and responsibilities. AI solutions learn from humans (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/what-is-a-foundational-model/" target="_blank">here</a> for more on reinforced learning from human feedback) so there should be a process for the provider to gather feedback from the end users so that the solution improves. Regular governance meetings (more frequent during the development phase) are also crucial to ensuring that the project stays on track and that issues are dealt with early and swiftly. The provider should also be required to implement and maintain appropriate policies and processes to ensure that the AI system operates responsibly, ethically and in compliance with the law. </p>
<p><strong>Explainability and transparency</strong>. You must be able to explain how your AI system works as you will need to demonstrate that AI is used responsibly and appropriately (see <a rel="noopener noreferrer" href="https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/the-ethics-of-ai-the-digital-dilemma/" target="_blank">AI and Ethics – The Digital Dilemma</a> for more on explainability). For an AI system to be explainable, you will need the provider to provide you with detailed information on:</p>
<ul>
    <li>how the AI works</li>
    <li>how it has been trained and the sources of training data </li>
    <li>how it was tested</li>
    <li>how it has been designed to be fair</li>
    <li>the logic behind the output of the AI. </li>
</ul>
<p>Records should be produced of how the AI system was developed including the parameters that represent the model's learnt 'knowledge' used to produce the intended output. The solution should also be designed to log information regarding how decisions are made, so that these can be verified and explained if necessary. This information will allow you to meet your transparency obligations under law (e.g. to your data subjects and see here for more on data protection considerations). </p>
<p><strong>Risk allocation</strong>. At this stage, the market is still getting to grips with AI and has yet to develop established positions on risk allocation. AI providers are building their customer base and working to make their offering more attractive. For this reason, customers who contract early may be in a stronger position when negotiating liability clauses under their contracts. In any case, risk allocation will depend on the commercials and each party's role and responsibilities in relation to the project. For example, a customer is unlikely to get blanket indemnities from the supplier for third party IP infringement claims if a significant portion of the training is carried out by the customer using the customer's own data.  </p>
<p><strong>Security</strong>. Aside from the standard security issues that would arise in any tech procurement (e.g. data encryption, user controls etc), there are certain security threats which are novel to AI systems. These include prompt injection (bad actors instructing the model to perform actions you don't intend), prompt stealing (accessing a user's prompt), model inversion attacks (attempting to recover the dataset used to train a model), membership inference attacks (attempting to determine if certain information was used to train a model), and data poisoning (tampering with data sets). The provider must ensure that its solution is appropriately secure against known threats and have robust processes to address future threats. At the very least, solutions should comply with generally-accepted standards on cyber security, for example, the National Cyber Security Centre's Guidelines for Secure AI System Development – see (see also <a href="/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/">Part 5 – AI Regulation globally</a> for more on standards). There are also specialist escrow providers with whom you may store input data, algorithms, and AI applications to ensure that the solution remains available should the provider unexpectedly cease operations. </p>
<p><strong>Compliance</strong>. Consider who bears the risk and the cost of compliance with AI regulations as they change from time to time. New regulations may result in significant changes needing to be made to the solution, potentially down to the underlying infrastructure level. A regulatory change control procedure should also be included to set out a process by which the parties may agree and implement required changes and allocate the costs of the same.  </p>
<p><strong>Exit</strong>. Although not particular to AI solutions, vendor lock-in is a risk that is heightened when procuring AI due to its complexity. You minimise this risk if you and your potential replacement providers are able to understand how the solution works. The incumbent provider should be required to train your personnel and ensure knowledge transfer over the lifecycle of the project. At the outset, consider the interoperability of the solution you procure with other suppliers' models, software, and systems. Consider also the data you will need to migrate the solution to a replacement provider upon exit from the contract.</p>
<p> </p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{FDB19908-1295-442B-8940-A2632EC9A899}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/regulation-of-ai-introduction/</link><title>Regulation of AI - introduction</title><description><![CDATA[<p>As with any new technology, existing data protection and privacy, intellectual property, competition, product liability, data security and consumer laws apply to its application in each jurisdiction.  This has thrown up a number of important and newsworthy issues and considerations for AI developers and providers, legislators, consumers and rights holders. There are also several sets of high profile legal proceedings both decided and ongoing in several jurisdictions. These issues and legal proceedings are discussed in other sections of this AI Guide.</p>
<p>Going forward, "providers" of AI systems, created either from scratch or built on top of tools and services provided by others, and "deployers" (i.e. a natural or legal person using an AI system under its authority, but not in a personal non-professional capacity) need to know what they can and cannot do in the design, development, procurement, deployment and operation of their AI systems. Understanding the national and international landscape is key to them being able to formulate an AI strategy. For example, UK companies that deploy AI systems or use AI powered tech in or targeted at the EU will come within the scope of the EU AI Act.  </p>
<p>Despite a growing and complicated web of overlapping global standards and alliances, these providers and deployers will be operating in an AI market regulated on a territory by territory basis. Some jurisdictions, like the UK have adopted a balanced pro-innovation approach to attract investment and development of AI in the UK. However, it's difficult to see how this approach fits with the UK's close proximity to the EU and the EU's pro-regulation approach. And on the other hand, the UK is keen to align with the US' extreme pro-innovation stance. Providers intending global expansion, however, may decide to meet the higher EU regulatory obligations to streamline their compliance requirements which could lead to the EU establishing an international AI standard as it has arguably done with data protection and the GDPR.  </p>
<p>More details on AI regulation are set out in:</p>
<ul>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/">Part 1 – AI Regulation in the UK</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-2-ai-regulation-in-the-eu/">Part 2 – AI Regulation in the EU</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-3-ai-regulation-in-the-us/">Part 3 – AI Regulation in the US</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-4-ai-regulation-in-asia/">Part 4 – AI Regulation in Asia</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/">Part 5 – AI Regulation globally</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-6-practical-considerations/">Part 6 – Practical considerations</a>]</li>
</ul>
<p><em>Discover more insights on the <a href="https://www.rpc.co.uk/rpc-ai-guide/">AI guide</a></em></p>]]></description><pubDate>Tue, 10 Jun 2025 11:36:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Ricky Cella</authors:names><content:encoded><![CDATA[<p>As with any new technology, existing data protection and privacy, intellectual property, competition, product liability, data security and consumer laws apply to its application in each jurisdiction.  This has thrown up a number of important and newsworthy issues and considerations for AI developers and providers, legislators, consumers and rights holders. There are also several sets of high profile legal proceedings both decided and ongoing in several jurisdictions. These issues and legal proceedings are discussed in other sections of this AI Guide.</p>
<p>Going forward, "providers" of AI systems, created either from scratch or built on top of tools and services provided by others, and "deployers" (i.e. a natural or legal person using an AI system under its authority, but not in a personal non-professional capacity) need to know what they can and cannot do in the design, development, procurement, deployment and operation of their AI systems. Understanding the national and international landscape is key to them being able to formulate an AI strategy. For example, UK companies that deploy AI systems or use AI powered tech in or targeted at the EU will come within the scope of the EU AI Act.  </p>
<p>Despite a growing and complicated web of overlapping global standards and alliances, these providers and deployers will be operating in an AI market regulated on a territory by territory basis. Some jurisdictions, like the UK have adopted a balanced pro-innovation approach to attract investment and development of AI in the UK. However, it's difficult to see how this approach fits with the UK's close proximity to the EU and the EU's pro-regulation approach. And on the other hand, the UK is keen to align with the US' extreme pro-innovation stance. Providers intending global expansion, however, may decide to meet the higher EU regulatory obligations to streamline their compliance requirements which could lead to the EU establishing an international AI standard as it has arguably done with data protection and the GDPR.  </p>
<p>More details on AI regulation are set out in:</p>
<ul>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-1-uk-ai-regulation/">Part 1 – AI Regulation in the UK</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-2-ai-regulation-in-the-eu/">Part 2 – AI Regulation in the EU</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-3-ai-regulation-in-the-us/">Part 3 – AI Regulation in the US</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-4-ai-regulation-in-asia/">Part 4 – AI Regulation in Asia</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-5-ai-regulation-globally/">Part 5 – AI Regulation globally</a>]</li>
    <li>[<a href="/thinking/artificial-intelligence/ai-guide/part-6-practical-considerations/">Part 6 – Practical considerations</a>]</li>
</ul>
<p><em>Discover more insights on the <a href="https://www.rpc.co.uk/rpc-ai-guide/">AI guide</a></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{80333C68-7893-41D4-9CA9-B1433C6A84DD}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/what-is-ai-and-why-is-it-so-topical/</link><title>What is AI and why is it topical?</title><description><![CDATA[<p>Whilst there is no universal definition of what constitutes artificial intelligence, at its core, AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans.  This encompasses the ability to reason, learn from experience, understand complex concepts, interact with their environment and look to solve problems.  </p>
<p>In its AI White Paper published in March 2023, the UK government avoided a rigid definition of AI on the basis that it may quickly become outdated and restrictive, given the pace of developments in the technology.  Instead, the government sought to define AI by reference to the two key characteristics of AI that give rise to the need for a bespoke regulatory response, namely adaptivity and autonomy.  The ‘adaptivity’ of AI can make it difficult to explain the intent or logic of the system’s outcomes.  AI systems operate by inferring patterns and connections in data, and quite often the logic and decision-making in AI systems cannot always be understood, or meaningfully explained in a way that can be understood, by humans.  Furthermore, AI systems may develop the ability to perform new forms of inference not anticipated by the developers of the system.  Some AI systems can make decisions without the express intent or control of a human.  This ‘autonomy’ gives rise to many concerns, one of which is that it becomes difficult to assign responsibility for outcomes caused by AI systems.  </p>
<p>By contrast, the EU AI Act defines an AI system as "a machine-based system that … infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments". </p>
<p>Technically, AI is often an umbrella term used to describe a range of technologies, from simple rule-based algorithms to complex neural networks mimicking the human brain. A significant turning point in AI has been the development of sophisticated Large Language Models (LLMs).  These models have revolutionised natural language processing, demonstrating capabilities in generating human-like text, translating languages and even coding.  AI's ability to interpret, understand and classify visual data has also seen remarkable growth, as has AI-driven automation.  These are only a number of examples of AI systems which are in use, and each reflects a part of the diverse landscape of current AI capabilities.  </p>
<p>We have seen already how far AI technology has come to date.  In fact, it has already become deeply integrated into various aspects of modern life. Moving forward, how far will (and should) it go?</p>
<p> </p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></description><pubDate>Tue, 10 Jun 2025 11:34:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Paul Joukador, Helen Armstrong, Charles Buckworth, Caroline Tuck</authors:names><content:encoded><![CDATA[<p>Whilst there is no universal definition of what constitutes artificial intelligence, at its core, AI refers to the simulation of human intelligence in machines that are programmed to think and learn like humans.  This encompasses the ability to reason, learn from experience, understand complex concepts, interact with their environment and look to solve problems.  </p>
<p>In its AI White Paper published in March 2023, the UK government avoided a rigid definition of AI on the basis that it may quickly become outdated and restrictive, given the pace of developments in the technology.  Instead, the government sought to define AI by reference to the two key characteristics of AI that give rise to the need for a bespoke regulatory response, namely adaptivity and autonomy.  The ‘adaptivity’ of AI can make it difficult to explain the intent or logic of the system’s outcomes.  AI systems operate by inferring patterns and connections in data, and quite often the logic and decision-making in AI systems cannot always be understood, or meaningfully explained in a way that can be understood, by humans.  Furthermore, AI systems may develop the ability to perform new forms of inference not anticipated by the developers of the system.  Some AI systems can make decisions without the express intent or control of a human.  This ‘autonomy’ gives rise to many concerns, one of which is that it becomes difficult to assign responsibility for outcomes caused by AI systems.  </p>
<p>By contrast, the EU AI Act defines an AI system as "a machine-based system that … infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments". </p>
<p>Technically, AI is often an umbrella term used to describe a range of technologies, from simple rule-based algorithms to complex neural networks mimicking the human brain. A significant turning point in AI has been the development of sophisticated Large Language Models (LLMs).  These models have revolutionised natural language processing, demonstrating capabilities in generating human-like text, translating languages and even coding.  AI's ability to interpret, understand and classify visual data has also seen remarkable growth, as has AI-driven automation.  These are only a number of examples of AI systems which are in use, and each reflects a part of the diverse landscape of current AI capabilities.  </p>
<p>We have seen already how far AI technology has come to date.  In fact, it has already become deeply integrated into various aspects of modern life. Moving forward, how far will (and should) it go?</p>
<p> </p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{0D4CADD6-1A69-447A-BFEB-DAB449972F7A}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/generative-artificial-intelligence-risks-for-litigation-lawyers/</link><title>Generative Artificial Intelligence Risks for Litigation Lawyers</title><description><![CDATA[In R (on the application of Frederick Ayinde) v The London Borough of Haringey AC-2024-LON-003062 the President of the King's Bench Division (Dame Victoria Sharpe) and Mr Justice Johnson gave judgment in two  referrals that had been made under the Hamid  jurisdiction. That jurisdiction is the court's inherent jurisdiction to regulate its own procedures and enforce the obligations that lawyers owe to it. ]]></description><pubDate>Mon, 09 Jun 2025 11:54:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Nick Bird, Cheryl Laird</authors:names><enclosure url="https://www.rpclegal.com/-/media/rpc/redesign-images/thinking-tiles/wide/tech-media-1---thinking-tile-wide.jpg?rev=ee4cf7f6fb8048c5b8fbba82117fa558&amp;hash=B2A6FCC6F2975DF2B5BF91ABB37D548D" type="image/jpeg" medium="image" /><content:encoded><![CDATA[<p>In R (on the application of Frederick Ayinde) v The London Borough of Haringey AC-2024-LON-003062 the President of the King's Bench Division (Dame Victoria Sharpe) and Mr Justice Johnson gave judgment in two[1] referrals that had been made under the Hamid[2] jurisdiction. That jurisdiction is the court's inherent jurisdiction to regulate its own procedures and enforce the obligations that lawyers owe to it. </p>
<p>The referrals arose out of non-existent citations made by lawyers in legal documents filed with the court. The court set out comprehensive and clear guidance about the responsibilities of lawyers in relation to the use of generative artificial intelligence in legal proceedings, the means by which citations can be checked, and the serious implications for the administration of justice and public confidence in the justice system arising out of the misuse of artificial intelligence. It emphasised too the need for those in leadership roles in law firms, chambers and regulators to take measures to ensure that every individual providing legal services in the jurisdiction understands and complies with their legal and ethical duties in the use of artificial intelligence.</p>
<p>The court held that the existing guidance from regulators was on its own insufficient to prevent the misuse of artificial intelligence and that more had to be done. The court invited the Bar Council, the Law Society, and the Council of the Inns of Court to consider what more needs now to be done "<em>as a matter of urgency</em>". The regulators will, no doubt, now review the existing regulatory material and framework in the light of the judgment.</p>
<p>The judgment starts with an explanation of the current guidance about the limitations of artificial intelligence and the risks of using it for legal research. This includes the BSB's January 2024 guidance on the use of generative AI, the SRA's 20 November 2023 Risk Outlook report, the 8 October 2023 BSB blog on Chat GPT in the Courts (referring in turn to the American case <em>Mata v Avianca, Inc.</em>), and the judicial guidance published first in December 2023 and updated in April 2025. All of this is to the effect that generative artificial intelligence can provide fabricated and/or biased and/or inaccurate output and that the professional and ethical responsibility is on the lawyer to check all generated material that she or he relies on.</p>
<p>It then proceeds to outline the relevant regulatory obligations on barristers and solicitors.</p>
<p>For barristers this focuses on Core Duties in the BSB Handbook including the duty to the court in the administration of justice (CD1), acting with honesty and integrity (CD3), not diminishing the public trust and confidence in the profession (CD5), and providing a competent standard of work (CD7). That is aimed at achieving outcomes where the court is able to rely on the information provided to it by barristers (Outcome 1), the administration of justice is properly served (Outcome 2), and barristers understand their duty to the court (Outcome 4). It further sets out the rules preventing any misleading of the court and drafting accurate and appropriate documents. At paragraph 21 the court outlined the requirements in the Bar Qualification Manual around the training of pupil supervisors and the supervision and signing off of pupil barristers. </p>
<p>For solicitors the court outlined provisions in the SRA's Code of Conduct for Solicitors and in particular the duty not to mislead the court or others (rule 1.4), the duty not to put forward statements that are not properly arguable (rule 2.4), not to waste the court's time (rule 2.6), to draw the court's attention to relevant authorities (rule 2.7), to provide a competent service (rule 3.2) and to remain accountable for work conducted on the solicitor's behalf by others (rule 3.5).</p>
<p>It then outlined the court's powers to ensure that lawyers comply with their duties to the court. Those include public admonition of the lawyer, costs orders, wasted costs orders, striking out of a case, referral to regulators, initiation of contempt proceedings, and referral to the police. The appropriate response turns on the facts of the case and is likely to include (a) "<em>the importance of setting and enforcing proper standards</em>" (b) the circumstances under which the material came to be before the court (c) the candour of any immediate response given to the court and other parties (d) steps taken to mitigate the damage (e) the effect on the court and other parties (f) the impact on the litigation and (g) the overriding objective.</p>
<h4>The<em> Ayinde </em>case</h4>
<p><em></em><em style="font-size: 18.5px;">Ayinde</em><span style="font-size: 18px;"> is a claim for judicial review of a housing decision by a local authority. The claimant was represented by Haringey Law Centre and a pupil barrister. The </span><em style="font-size: 18.5px;">Ayinde</em><span style="font-size: 18px;"> pupil settled grounds for the judicial review that misstated the effect of section 188(3) of the Housing Act 1996 and relied on five cases which do not exist. The issue became apparent to the defendant who then raised the point with the claimant's lawyers. Haringey Law Centre provided a response drafted for the most part by the </span><em style="font-size: 18.5px;">Ayinde </em><span style="font-size: 18px;">counsel. It suggested that the erroneous citations could be easily explained but that it was not necessary to do so and that the citations could be corrected before the hearing in April. It gave its "</span><em style="font-size: 18.5px;">deepest apologies</em><span style="font-size: 18px;">" but also described the errors as "</span><em style="font-size: 18.5px;">cosmetic</em><span style="font-size: 18px;">". The defendant then applied for wasted costs against Haringey Law Centre and the </span><em style="font-size: 18.5px;">Ayinde</em><span style="font-size: 18px;"> pupil. That application came on before Ritchie J on 3 April 2025. The </span><em style="font-size: 18.5px;">Ayinde </em><span style="font-size: 18px;">pupil did not give formal evidence at that hearing but provided and explanation for the error that did not include the use of artificial intelligence.</span></p>
<p><span style="font-size: 18px;"></span>In his judgment the same day (<em>R (on the application of Frederick Ayinde) v The London Borough of Haringey </em>[2025] EWHC 1040 (Admin)) Ritchie J rejected that explanation but felt unable to make a positive finding that the <em>Ayinde</em> pupil had used artificial intelligence. He found that the <em>Ayinde </em>pupil had intentionally put the citations into the grounds not caring whether they existed and that that was improper and unreasonable conduct. He said that if artificial intelligence had been used and the citations not then checked then that would amount to negligence. He found that the response provided to the defendant by Haringey Law Centre was in parts unprofessional, unfair, and wrong. He said that the characterisation of the error as "<em>cosmetic</em>" was grossly unprofessional. He said that there had been an obligation on both Haringey Law Centre and the <em>Ayinde</em> pupil to check that all of the statements of facts and grounds were correct and that the citation of five fake citations was clearly professional misconduct. He said that Haringey Law Centre and the <em>Ayinde</em> pupil should have reported themselves to their respective regulators.</p>
<p>Ritchie J ordered each of the <em>Ayinde</em> pupil and Haringey Law Centre to pay £2,000 to the defendant and required the matter to be reported to the SRA and the BSB. Then on 9 May 2025 he referred the matter to the <em>Hamid</em> judge.</p>
<p>In the <em>Hamid </em>referral the <em>Ayinde</em> pupil said that she undertook her research from electronic sources. She said that those included searches on "<em>Google or Safari</em>" and that she may have inadvertently taken account of artificial intelligence generated summaries of the results. She accepted that she acted negligently and unreasonably but denied that she had acted improperly. She relied amongst other things on the paucity of supervision of her work from her chambers. She also disclosed another instance of putting false material before the court resulting in the judge writing to her head of chambers and raising the issue of a referral to the BSB. The judge was given sufficient assurances not to make such a referral. In the light of her reliance on her training and supervision her chambers was invited to make representations at the hearing. A representative of chambers sent an email after the hearing denying the allegations of inadequate supervision.</p>
<p>The solicitor and (non-qualified) paralegal at Haringey Law Centre both apologised to the court. The solicitor pointed out that Haringey Law Centre was a charity that operated with minimal funding and relied heavily on specialist counsel. He pointed out that it had not occurred to them that they would need to check the authorities relied on by counsel and had never done so. They had instructed the <em>Ayinde</em> pupil to prepare the response to the defendant and had not appreciated from that response that the cases did not exist; they thought that there were minor errors in the citations. He subsequently instructed his colleagues that all citations from counsel needed to be checked.</p>
<p>Since the hearing before Ritchie J privilege had been waived and more of the relevant communications were before the court.</p>
<p>The court found that the paralegal was not at fault in any way. She had acted entirely in accordance with what she was told to do by the solicitor and the <em>Ayinde</em> counsel.</p>
<p>The court observed that no evidence had been provided that the fake cases could have been generated in the manner contended for by the <em>Ayinde</em> pupil. It rejected her explanation as to how it had happened. It said that on the evidence before it there were two possibilities – either she had deliberately included fake citations or she did use generative artificial intelligence and had lied about that in her explanation. Both of these would amount to a contempt of court and the threshold for initiating contempt proceedings was met.</p>
<p>Despite this, the court decided against doing so. The court took into account various factors in coming to that conclusion. The first was the difficulties in determining various of the factual issues in summary contempt proceedings. The second was the potential failings in relation to her training which could not be determined in such proceedings. The third was the public admonition that she had already received and the reference to and investigation by her regulator. The fourth was the fact that she was a very junior lawyer operating outside her level of competence. The fifth was the court's main priority of ensuring that lawyers understood the consequences of using artificial intelligence.</p>
<p>The court emphasised that the decision was not a precedent and that lawyers that do not comply with their obligations in relation to artificial intelligence risk severe sanction. It also made its own additional reference to the BSB to investigate amongst other things the other instance of false cases being advanced and her subsequent explanations given to the court, the circumstances in which her list of cases came to be deleted from her computer, and whether those responsible for her supervision in chambers had complied with their obligations.</p>
<p>The court found that the solicitor had not deliberately advanced false cases and so there was no question of initiating contempt proceedings. But it was critical of the steps taken to respond to the defendant once the position had become clear. It accepted that Haringey Law Centre was an overstretched charity providing a valuable service to vulnerable members of society with limited resources. However, it found that that made it all the more important to adhere to professional standards and instruct others that adhere to them. It made an additional reference to the SRA in relation to (a) the solicitor's response to the defendant and (b) the steps taken to ascertain the competence and experience of the <em>Ayinde</em> counsel.</p>
<h4><span style="font-size: 1.11111em;">The</span><em style="font-size: 1.11111em;"> Al-Haroun </em><span style="font-size: 1.11111em;">Case</span></h4>
<p><em></em>The <em>Al-Haroun</em> case concerns an alleged breach of a financing agreement by two banks which sought to dispute the court's jurisdiction and strike out the claims. The claimant applied to set aside an order giving the defendants additional time to serve evidence in support of the application. The claimant's lawyers filed a witness statement made by the claimant himself and the solicitor. The solicitor's witness statement and correspondence with the court relied on forty-five cases. Eighteen of these did not exist. Of those that did exist, many of them did not support the proposition asserted and many passages quoted from them did not exist.</p>
<p>Dias J dismissed the application and made a <em>Hamid</em> referral in relation to the conduct of the solicitor. She described the matter as being of the "<em>utmost seriousness</em>" and said "<em>Putting before the court supposed “authorities” which do not in fact exist, or which are not authority for the propositions relied upon is prima facie only explicable as either a conscious attempt to mislead or an unacceptable failure to exercise reasonable diligence to verify the material relied upon.</em>". She noted the importance to the administration of justice of courts being able to rely on the professionalism and integrity of those who appear before it.</p>
<p>The claimant accepted responsibility for the inaccurate and fictitious material in his own witness statement saying that it came from publicly available generative artificial intelligence tools. He did not intend to mislead the court or his solicitor or the defendants and apologised. He sought to absolve his solicitor from responsibility for it.</p>
<p>The solicitor accepted that his witness statement contained non existent cases and explained that he had relied on research undertaken by the claimant himself. He accepted that this was wrong, said that he had no intention of misleading the court and had reported himself to the SRA. He apologised to the court and said that he had removed himself from litigation matters and was undertaking appropriate training. The firm itself accepted that its conduct "<em>could not be worse</em>" and that the "<em>very last thing that a solicitor should do is rely on the research of a lay client</em>". The firm contended that no further action was necessary from the court in circumstances where the error was not deliberate, there had been a report to the SRA, and counsel had not drawn attention to the point. (Counsel had reviewed the material, advised adversely on the merits but took no further part in the application.)</p>
<p>The court found that the claimant's own acceptance of responsibility did not absolve the lawyers; it was extraordinary that the lawyer was relying on the legal research of the client. There was scope for argument as to whether counsel should have spotted the issue but the incomplete evidence on the point meant that it would not be appropriate for it to refer the point the BSB. That did not prevent the solicitor raising the point in mitigation and/or making a complaint himself.</p>
<p>The court accepted that the solicitor did not deliberately mislead the court but held that a solicitor is not entitled to rely on the accuracy of citations of authorities or quotations provided by the lay client; it is the solicitor's duty to check the accuracy of the material himself. The court did not consider that the threshold for initiating contempt proceedings was met. It referred the solicitor's conduct to the SRA (in addition to the solicitor's own self-report).</p>
<h4>Discussion</h4>
<p>The court was extremely concerned about these instances of fictitious and inaccurate material being advanced to it and the opponents. The focus of the court's attention was aimed at preventing damage to the administration of justice and the public confidence in the judicial system. There were no initiations of contempt proceedings but the position would have been different if the errors were deliberate. And in each case the court noted the existence of regulatory investigations and supplemented the reporting in both cases.</p>
<p>The call for representative bodies and/or regulators to review the current guidance and frameworks will no doubt result in those bodies undertaking such a review. The issue raised in the judgment is a relatively straightforward point about the limitations of generative artificial intelligence, the responsibilities of lawyers and the damage to the administration of justice. However, it touches on a number of different areas of regulation and legal practice including supervision, ethics, training, and responsibility for work.</p>
<p>Regulators are likely to focus in particular on training and supervision within firms and firms will wish to ensure that litigators are appropriately trained and supervised and that they are able to evidence that training. The training arising from the use of publicly available generative artificial intelligence is relatively straightforward but clearly very important. Beyond that, training and supervision in other emerging technologies is likely to be tailored towards the products used by each firm and the risks associated with them.</p>
<p>Firms will also wish to consider carefully the degree to which they are able to rely on information from other sources and mitigate the risk of wrong or inaccurate generative artificial intelligence infecting material which they submit to the court and/or opponents. The court in <em>Ayinde</em> highlighted a need to ensure that counsel instructed were sufficiently experienced and competent. In addition, it was suggested that there was an obligation on the solicitor to check the authorities relied on by counsel and presumably any quotations from them. This may be a surprising contention in some contexts of legal practice. Indeed, many clients may be surprised to have to pay for that degree of cross-checking. Equally, although the court in <em>Ayinde</em> pointed out that Haringey Law Centre had the benefit of legal aid funding it would be surprising if it extended to cross-checking authorities and quotations provided by counsel. Some firms may now feel the need to consider requiring counsel to warrant in their contractual terms the existence of the authorities that they rely on and provide an indemnity. In other solicitor and barrister relationships this is unlikely to be an issue. The fatal reputational damage (whether public or otherwise) from reliance on a non-existent case is likely to be incentive enough for most counsel.</p>
<p>Nonetheless, the regulatory obligation not to mislead the court or others (at rule 1.4) continues to subsist as a regulatory obligation and the decision of Mrs Justice Lange DBE in <em>Solicitors Regulation Authority Limited v Dentons UK and Middle East LLP </em>[2025] EWHC 535 (Admin) highlights an uncertainty as to whether a culpability or seriousness threshold will apply in that or the other applicable rules and principles.</p>
<p>In this case the court highlighted authoritative sources as "<em>the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers</em>". However, that was in relation to case law. Solicitors, barristers and courts also rely on edited sources of legal research and errors can occur in practitioners' manuals and headnotes. Those may be relatively rare but firms may find that artificial intelligence begins to become incorporated into those more conventional edited sources of research. This may be something that firms will need to be alert to in the future in addition to the specific risks associated with particular technology products that they use.</p>
<p>The SRA's current three year corporate strategy highlights its desire to support innovation and technology. It considers that there is a public expectation on it to deepen its work in this area amongst other things to develop "<em>appropriate technology solutions that can help the public, including vulnerable and marginalised consumers, to access legal services</em>". A particular aim is to support small firms. That strategy is aimed at technology beyond the mere use of publicly available generative artificial intelligence. But it does highlight a potential area of tension between the effective administration and the desire to further access to justice through the use of technology. This is not the last case that will emerge from the use of emerging technologies. The different types of emerging technologies and use cases will generate much more complex issues than this case in the future.</p>
<hr />
<p><span>[1]</span> <em>R (on the application of Frederick Ayinde) v The London Borough of Haringey</em> AC-2024-LON-003062 <em>and Hamad Al-Haroun v Qatar National Bank QPSC and another</em> CL-2024-000435</p>
<p> <span>[2]</span><span> <em>R (Hamid) v Secretary of State for the Home Department</em> [2012] EWHC 3070 (Admin) [2013] CP Rep 6, <em>R (DVP) v Secretary of State for the Home Department</em> [2021] EWHC 606 (Admin) [2021] 4 WLR 75. </span></p>]]></content:encoded></item><item><guid isPermaLink="false">{BC978744-CE05-47C6-BF5A-2CCD4DD4E4D0}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-3-ai-regulation-in-the-us/</link><title>Part 3 - AI regulation in the US</title><description><![CDATA[<p><em>This is Part 3 of 'Regulation of AI</em></p>
<p>The American approach to AI regulation has changed significantly with the new Trump administration. President Biden had signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October 2023. In January 2025, President Trump revoked President Biden's Order and signed an Executive Order on Removing Barriers to American Leadership in Artificial Intelligence (the Trump Order). </p>
<p>The Trump Order is framed as eliminating unnecessarily burdensome requirements put in place by the Biden Order that hindered the US' ability to innovate and requires US departments to rescind any policies and actions taken under the Biden Order that are "inconsistent with enhancing America's leadership in AI". The Trump Order also calls for the development of an AI action plan within 180 days. </p>
<p>Federal agencies such as the National Institute of Standards and Technology (NIST) has produced guidance on AI including the AI Risk Management Framework (AI RMF 1.0), for organisations designing, developing, deploying, or using AI systems. It is unclear to what extent NIST will continue these activities under the Trump Order. </p>
<p>Recently, Trump has also proposed the One Big Beautiful Bill act, a budget reconciliation bill that includes a 10-year moratorium on enforcing state-level regulation on AI.</p>
<p>Several states have passed legislation to regulate AI. In California, Assembly Bill 2013 (regarding training data transparency) and Senate Bill 942 (regarding transparency around AI-generated content) have been signed and both come into effect in 2026. In Colorado, Senate Bill 24-205 (regarding consumer protection in interactions with AI) was passed in May 2024. Enforcement of these laws will be paused if the One Big Beautiful Bill act is passed.</p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></description><pubDate>Mon, 03 Mar 2025 15:21:00 Z</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Ricky Cella</authors:names><content:encoded><![CDATA[<p><em>This is Part 3 of 'Regulation of AI</em></p>
<p>The American approach to AI regulation has changed significantly with the new Trump administration. President Biden had signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October 2023. In January 2025, President Trump revoked President Biden's Order and signed an Executive Order on Removing Barriers to American Leadership in Artificial Intelligence (the Trump Order). </p>
<p>The Trump Order is framed as eliminating unnecessarily burdensome requirements put in place by the Biden Order that hindered the US' ability to innovate and requires US departments to rescind any policies and actions taken under the Biden Order that are "inconsistent with enhancing America's leadership in AI". The Trump Order also calls for the development of an AI action plan within 180 days. </p>
<p>Federal agencies such as the National Institute of Standards and Technology (NIST) has produced guidance on AI including the AI Risk Management Framework (AI RMF 1.0), for organisations designing, developing, deploying, or using AI systems. It is unclear to what extent NIST will continue these activities under the Trump Order. </p>
<p>Recently, Trump has also proposed the One Big Beautiful Bill act, a budget reconciliation bill that includes a 10-year moratorium on enforcing state-level regulation on AI.</p>
<p>Several states have passed legislation to regulate AI. In California, Assembly Bill 2013 (regarding training data transparency) and Senate Bill 942 (regarding transparency around AI-generated content) have been signed and both come into effect in 2026. In Colorado, Senate Bill 24-205 (regarding consumer protection in interactions with AI) was passed in May 2024. Enforcement of these laws will be paused if the One Big Beautiful Bill act is passed.</p>
<p><em><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{F9A621A7-827B-42D5-B719-8EE08702802A}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-2-ai-regulation-in-the-eu/</link><title>Part 2 - AI regulation in the EU</title><description><![CDATA[<p><em>This is Part 2 of 'Regulation of AI'</em></p>
<p><span>AI regulation in the EU has been codified under the EU AI Act, Regulation 2024/1689. Key details are set out below.</span></p>
<table border="1" cellspacing="0" cellpadding="0" style="border-style: none; border-color: initial; border-image: initial;">
    <tbody>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-style: solid; border-width: 1pt; text-align: left;">
            <p><span style="text-decoration: underline;">Overview</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-left: none; border-top-style: solid; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act is based on a risk framework. The intention is to achieve proportionality by setting the regulation according to the potential risk AI can generate to health, safety, fundamental rights or the environment.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">What does it apply to?</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act applies to AI systems defined as "<em>a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives, how to generate output such as predictions, content, recommendations, or decisions that can influence physical or virtual environments</em>".</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Who does it apply to?</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the EU market or its use affects people located in the EU. It covers all entities within the AI value chain from providers through importers and distributors to deployers.</span></p>
            <ul style="list-style-type: disc;">
                <li><span>A 'provider' is anyone who develops an AI system or that has an AI system developed and places it on the market or puts the AI system into service under its own name or trade mark, whether for payment or free of charge.</span>
                <p> </p>
                </li>
                <li><span>A 'deployer' is anyone who uses an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.</span></li>
            </ul>
            <p><span>Most of the onus for compliance will fall on providers of the AI systems. However, a deployer may be subject to the obligations on a provider if:</span></p>
            <ul style="list-style-type: disc;">
                <li><span>the deployer puts their name or trade mark on the AI system – for example if an organisation procured a third party white label chatbot that is then branded to look like the organisation's own chatbot</span></li>
                <li><span>the deployer substantially modifies an AI system </span></li>
                <li><span>the deployer uses the AI system for a high-risk purpose not foreseen by the provider.</span></li>
            </ul>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Risk-based approach</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act takes a risk-based approach. Unacceptable risks are prohibited while those considered high risk are only permitted if they comply with certain mandatory requirements. AI systems which are neither or "limited risk" are subject to transparency requirements and providers are encouraged to create and comply with codes of conduct that adapt the high-risk AI system requirements for these lower risk use cases. See diagram below for an example of the types of AI systems that would fall within each level of risk and the implications.</span></p>
              <img alt="" src="/-/media/rpc/images/artificial-intelligence/picture1.png?rev=243127a0122347c9bfe4021ffba004e7&hash=3B5342B790F52892EF9B5AEBF82A1769" style="height:533px; width:507px;" /><img alt="" />
            <p><span>For most businesses, AI systems that they intend on using are unlikely to fall within the high risk category. One exception is AI systems for recruitment which are currently readily available on the market and which many businesses are exploring. More likely to be relevant to businesses are limited risk AI systems as these cover chatbots which interact directly with people.<strong><span> </span></strong></span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Compliance</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>Some of the obligations under the EU AI Act are vague or subject to interpretation. To address issues with implementation, the AI Act also provides for the development of harmonised standards by European standardisation organisations to flesh out requirements under the law. Organisations that comply with these standards will enjoy a legal presumption of conformity with certain elements of the AI Act.</span></p>
            <p><span>Separately, some guidance has already been published under the AI Act. In February 2025, the European Commission published draft guidelines on prohibited AI practices and, separately, draft guidelines on AI systems, in each case as defined under the AI Act. </span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">GPAI requirements</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>There are additional obligations that apply to general purpose AI models (GPAI models) such as ChatGPT. Providers of GPAI must produce technical documentation to show how the AI model operates and provide information to the public about the datasets the model is trained on. Providers must also produce policies to ensure that EU copyright rules are followed. Additional rules apply where the GPAI is considered to pose systemic risks.A code of practice for GPAI is currently being produced by the European Commission.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">AI literacy</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>Article 4 requires all businesses in scope of the EU AI Act (whether provider or deployer) to take measures to ensure a sufficient level of AI literacy in their staff irrespective of the level of risk of the AI system. The EU AI Act does not prescribe how businesses should train their staff. Article 4 is intended to apply proportionately (e.g. depending on the staff and the context AI is used in). Ultimately, training should allow businesses to make informed decisions about AI deployment. The European Commission has produced a </span><a href="https://digital-strategy.ec.europa.eu/en/faqs/ai-literacy-questions-answers"><span>Q&A on AI literacy</span></a><span> and the EU AI Office has also started a </span><a href="https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy"><span>living repository</span></a><span> to provide businesses with good examples of AI literacy practices.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Enforcement</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>Non-compliance with certain requirements of the EU AI Act can lead to significant monetary penalties i.e. fines of up to the higher of €7.5m and 1% of global turnover, €15m and 3% of global turnover, or €35m and 7% of global turnover depending on the type of infringement. However, EU Member States must set the specific rules on penalties and enforcement measures in line with the EU AI Act and any future guidance.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">When do I need to comply?</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The EU AI Act came into force across all 27 EU member states on 1 August 2024. The Act provides for most requirements to apply from 2 August 2026, with certain provisions applying earlier e.g. the prohibition on use of banned AI technologies and the AI literacy requirement were effective from 2 February 2025. However, there has since been discussion as to whether implementation should be paused to allow businesses to ready themselves for compliance.</span></p>
            </td>
        </tr>
    </tbody>
</table>
<p><em><span><br />
Discover more insights on the <a href="/ai-guide/">AI guide</a></span></em></p>
<br />]]></description><pubDate>Mon, 03 Mar 2025 15:16:00 Z</pubDate><category>Artificial intelligence</category><authors:names>Caroline Tuck, Ricky Cella</authors:names><content:encoded><![CDATA[<p><em>This is Part 2 of 'Regulation of AI'</em></p>
<p><span>AI regulation in the EU has been codified under the EU AI Act, Regulation 2024/1689. Key details are set out below.</span></p>
<table border="1" cellspacing="0" cellpadding="0" style="border-style: none; border-color: initial; border-image: initial;">
    <tbody>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-style: solid; border-width: 1pt; text-align: left;">
            <p><span style="text-decoration: underline;">Overview</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-left: none; border-top-style: solid; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act is based on a risk framework. The intention is to achieve proportionality by setting the regulation according to the potential risk AI can generate to health, safety, fundamental rights or the environment.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">What does it apply to?</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act applies to AI systems defined as "<em>a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives, how to generate output such as predictions, content, recommendations, or decisions that can influence physical or virtual environments</em>".</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Who does it apply to?</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the EU market or its use affects people located in the EU. It covers all entities within the AI value chain from providers through importers and distributors to deployers.</span></p>
            <ul style="list-style-type: disc;">
                <li><span>A 'provider' is anyone who develops an AI system or that has an AI system developed and places it on the market or puts the AI system into service under its own name or trade mark, whether for payment or free of charge.</span>
                <p> </p>
                </li>
                <li><span>A 'deployer' is anyone who uses an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.</span></li>
            </ul>
            <p><span>Most of the onus for compliance will fall on providers of the AI systems. However, a deployer may be subject to the obligations on a provider if:</span></p>
            <ul style="list-style-type: disc;">
                <li><span>the deployer puts their name or trade mark on the AI system – for example if an organisation procured a third party white label chatbot that is then branded to look like the organisation's own chatbot</span></li>
                <li><span>the deployer substantially modifies an AI system </span></li>
                <li><span>the deployer uses the AI system for a high-risk purpose not foreseen by the provider.</span></li>
            </ul>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Risk-based approach</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The AI Act takes a risk-based approach. Unacceptable risks are prohibited while those considered high risk are only permitted if they comply with certain mandatory requirements. AI systems which are neither or "limited risk" are subject to transparency requirements and providers are encouraged to create and comply with codes of conduct that adapt the high-risk AI system requirements for these lower risk use cases. See diagram below for an example of the types of AI systems that would fall within each level of risk and the implications.</span></p>
              <img alt="" src="/-/media/rpc/images/artificial-intelligence/picture1.png?rev=243127a0122347c9bfe4021ffba004e7&hash=3B5342B790F52892EF9B5AEBF82A1769" style="height:533px; width:507px;" /><img alt="" />
            <p><span>For most businesses, AI systems that they intend on using are unlikely to fall within the high risk category. One exception is AI systems for recruitment which are currently readily available on the market and which many businesses are exploring. More likely to be relevant to businesses are limited risk AI systems as these cover chatbots which interact directly with people.<strong><span> </span></strong></span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Compliance</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>Some of the obligations under the EU AI Act are vague or subject to interpretation. To address issues with implementation, the AI Act also provides for the development of harmonised standards by European standardisation organisations to flesh out requirements under the law. Organisations that comply with these standards will enjoy a legal presumption of conformity with certain elements of the AI Act.</span></p>
            <p><span>Separately, some guidance has already been published under the AI Act. In February 2025, the European Commission published draft guidelines on prohibited AI practices and, separately, draft guidelines on AI systems, in each case as defined under the AI Act. </span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">GPAI requirements</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>There are additional obligations that apply to general purpose AI models (GPAI models) such as ChatGPT. Providers of GPAI must produce technical documentation to show how the AI model operates and provide information to the public about the datasets the model is trained on. Providers must also produce policies to ensure that EU copyright rules are followed. Additional rules apply where the GPAI is considered to pose systemic risks.A code of practice for GPAI is currently being produced by the European Commission.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">AI literacy</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>Article 4 requires all businesses in scope of the EU AI Act (whether provider or deployer) to take measures to ensure a sufficient level of AI literacy in their staff irrespective of the level of risk of the AI system. The EU AI Act does not prescribe how businesses should train their staff. Article 4 is intended to apply proportionately (e.g. depending on the staff and the context AI is used in). Ultimately, training should allow businesses to make informed decisions about AI deployment. The European Commission has produced a </span><a href="https://digital-strategy.ec.europa.eu/en/faqs/ai-literacy-questions-answers"><span>Q&A on AI literacy</span></a><span> and the EU AI Office has also started a </span><a href="https://digital-strategy.ec.europa.eu/en/library/living-repository-foster-learning-and-exchange-ai-literacy"><span>living repository</span></a><span> to provide businesses with good examples of AI literacy practices.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">Enforcement</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>Non-compliance with certain requirements of the EU AI Act can lead to significant monetary penalties i.e. fines of up to the higher of €7.5m and 1% of global turnover, €15m and 3% of global turnover, or €35m and 7% of global turnover depending on the type of infringement. However, EU Member States must set the specific rules on penalties and enforcement measures in line with the EU AI Act and any future guidance.</span></p>
            </td>
        </tr>
        <tr>
            <td valign="top" style="width: 70.65pt; padding: 0cm 5.4pt; border-top: none; border-right-style: solid; border-bottom-style: solid; border-left-style: solid; text-align: left;">
            <p><span style="text-decoration: underline;">When do I need to comply?</span></p>
            </td>
            <td valign="top" style="width: 410.8pt; padding: 0cm 5.4pt; border-top: none; border-left: none; border-right-style: solid; border-bottom-style: solid; text-align: left;">
            <p><span>The EU AI Act came into force across all 27 EU member states on 1 August 2024. The Act provides for most requirements to apply from 2 August 2026, with certain provisions applying earlier e.g. the prohibition on use of banned AI technologies and the AI literacy requirement were effective from 2 February 2025. However, there has since been discussion as to whether implementation should be paused to allow businesses to ready themselves for compliance.</span></p>
            </td>
        </tr>
    </tbody>
</table>
<p><em><span><br />
Discover more insights on the <a href="/ai-guide/">AI guide</a></span></em></p>
<br />]]></content:encoded></item><item><guid isPermaLink="false">{EE5F0ACC-4DE7-4303-9A7E-5F32636C2AF4}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-guide/part-4-ai-regulation-in-asia/</link><title>Part 4 – AI Regulation in Asia</title><description><![CDATA[<p><em>This is Part 4 of 'Regulation of AI </em></p>
<p><span>While much of Asia takes a light touch and voluntary approach to AI regulation, some jurisdictions like China have taken a more prescriptive approach. This section provides a flavour of the diverse regulatory approaches across Asia.</span></p>
<p><span style="text-decoration: underline;"><strong>Singapore</strong></span></p>
<p><span>While existing laws such as the Personal Data Protection Act 2012 govern specific aspects of AI in Singapore, there is currently no overarching legislation regulating AI. Instead, a series of frameworks have been launched which provide general guidance to interested parties on the subject but have no legally binding effect. This soft touch approach is intended to encourage the use of AI in accordance with Singapore's National AI Strategy, first published in 2019 and updated in 2023.</span></p>
<p><span>Singapore first launched its Model AI Governance Framework in 2019 and updated it again in 2020. The framework follows two fundamental principles: that use of AI in the decision-making process should be explainable, transparent and fair; and that AI systems should be human-centric.</span></p>
<p><span>In 2022 after the EU announced the then draft EU AI Act, Singapore launched AI Verify, an open source AI governance testing framework and software toolkit that validates the performance of AI systems against a set of eleven internationally recognised AI ethics and governance principles through standardised tests, and is consistent with AI governance frameworks such as those from EU, OECD and Singapore. The principle of transparency requires that appropriate information is provided to individuals impacted by AI systems, and this is assessed by way of process checks of documentary evidence (e.g. company policy and communication collaterals) providing appropriate information to individuals who may be impacted by the AI system. Such information might include the use of AI in the system, its intended use, limitations, and risk assessments. Singapore also, in mid 2023, published Advisory Guidelines on the Use of Personal Data for AI. </span></p>
<p><span>In February 2024, Singapore also unveiled a new draft framework specifically targeted at generative AI. The concerns that the framework seeks to address include hallucinations, copyright infringement and value alignment. </span><span></span><span>While the draft generative AI framework does not provide any direct solutions to these issues, it recognises the need for stakeholders at all levels to cooperate and work together throughout the process of AI model development, implementation and deployment, and proposes nine dimensions involving the use of both <em>ex ante</em></span><span> and <em>ex post </em>measures</span><span> to foster a trusted ecosystem both at the governmental and the organisation levels. The nine dimensions proposed are accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment research and development, and AI for public good.</span></p>
<p><span>In addition to the broader national frameworks, sector-specific regulators have been developing frameworks that are applicable to narrower audiences. The Monetary Authority of Singapore have launched a project with industry partners to develop a risk framework for the use of generative AI in financial sectors, in addition to the guidance on principles of fairness, ethics, accountability and transparency in the use of AI and data analytics in Singapore's financial sector released in late 2018. More recently on 1 March 2024, the Singapore Personal Data Protection Commission (<strong>PDPC</strong>) issued a set of Advisory Guidelines on the use of personal data in AI recommendation and decision systems, applicable to third party developers of bespoke AI systems. The guidelines clarify how Singapore's data protection laws apply when organisations use personal data to develop and train AI systems, and set out baseline guidance and best practices for organisations to adopt. While the guidelines do not themselves have legal effect, they indicate the manner in which the PDPC will interpret provisions of Singapore's personal data protection laws.</span></p>
<p><span><strong>Hong Kong</strong></span></p>
<p><span>In Hong Kong, there is similarly no overarching regulation specifically governing AI, although aspects of AI may be regulated by existing laws. A number of guidance notes have been published by government bodies to govern and facilitate the ethical use of AI technology. For example, to assist organisations in complying with the Personal Data (Privacy) Ordinance to protect personal data, the Office of the Privacy Commissioner for Personal Data has published the Artificial Intelligence: Model Personal Data Protection Framework (<strong>Model Framework</strong>) in June 2024, and Guidance on the Ethical Development of AI in Hong Kong (the <strong>2021 Guidance</strong>) in 2021. The Model Framework provides organisations with practical recommendations and best practices in the procurement, implementation and use of AI, while the 2021 Guidance focuses more on broad ethical principles for organisations to consider when developing and deploying AI involving personal data. </span></p>
<p><span>In August 2023, the Office of the Government Chief Information Officer published the Ethical Artificial Intelligence Framework (<strong>Ethical AI Framework</strong>). The Ethical AI Framework was originally developed for internal adoption within the Hong Kong government to assist with planning, designing and implementing AI and big data analytics in IT projects or services. It is now also available for general reference by organisations when adopting AI and big data analytics in IT projects. </span></p>
<p><span>In July 2024, the Hong Kong Intellectual Property Department released a consultation paper to modernise the Hong Kong Copyright Ordinance to keep pace with the rapid development and prevalence of AI, to ensure that Hong Kong's copyright regime remains robust and competitive.</span></p>
<p><span><strong>China</strong></span></p>
<p><span>China is a frontrunner in the regulation of AI in Asia. While there is currently no general or overarching AI law in China, the regulators (the Cybersecurity Administration of China in particular) have in recent years introduced mandatory technology-specific regulations and measures to address the risks associated with different aspects of AI. Unlike the frameworks in Singapore, these regulations and measures have legal effect. They include but are not limited to the following:</span></p>
<ul style="list-style-type: disc;">
    <li><strong><span>Provisions on the Administration of Algorithmic Recommendations for Internet Information Services (effective 1 March 2022):</span></strong><span> These provisions specifically apply to services that push content or make recommendations to users via algorithms, such as Douyin (TikTok in other countries). Under these regulations, a user must be provided with a choice to not be targeted based on the user’s individual characteristics, such as demographic or location information. Algorithmic recommendation service providers are also prohibited from pushing content to minors that may be harmful to one’s health or violate social morality, such as alcohol or tobacco, and from setting up algorithmic models that induce users to indulge in addiction or excessive consumption;</span><strong><span></span></strong></li>
    <li><strong><span>Provisions on the Administration of Deep Synthesis of Internet Information Services (effective 10 January 2023):</span></strong><span> These provisions regulate deep synthesis technology such as deep machine learning algorithms that have the ability to create generated content such as deepfakes, and regulates providers and users of such technology as well as platforms distributing such applications. Key obligations include requirements imposed on providers relating to security assessments, user verification, as well as the requirement to report any use of the technology by users to create harmful or undesirable content. The provisions also require the providers of deep synthesis technology to label AI-generated or edited content (such as images and videos) with a noticeable mark to inform users and the public of the nature and origin of such generated content.</span></li>
    <li><strong><span>Interim Measures for the Management of Generative AI Services (effective 15 August 2023):</span></strong><span> These measures apply to generative AI technology and services in China that have the ability to generate content such as texts, images, audio, or video, and have extraterritorial effect in respect of the provision of generative AI services that originate outside of China. The measures place strong emphasis on the transparency, quality and legitimacy of both the training data and the generated content. Generative AI service providers are required to respect intellectual property rights and only use training data and foundational models that have lawful sources. Service providers are also subject to content moderation requirements to address illegal content and illegal use of their services, such as the removal of such illegal content and suspension of provision of services to users in violation, and are required to report such illegal content or use to the relevant authorities. Service providers are also required to employ effective measures to increase the quality, accuracy and reliability of both training data and AI generated content, and to establish convenient and transparent portals for complaints and reports from the public. Other measures to protect minors and the confidentiality of users' input data are also imposed. On 29 February 2024, the National Information Security Standardization Technical Committee (TC260) (the leading standards body for digital technologies) has issued the TC260-003 Basic security requirements for generative AI service to provide organisation with practical guidance on compliance with these interim measures. </span></li>
</ul>
<p><span>Certain higher risk service providers of AI systems are also subject to heightened regulatory compliance obligations (including the carrying out of security assessments and algorithm filings) as a result of their ability to disseminate information to a large groups of individuals. The current AI governance regime in China appears to target specific issues arising from AI technology while still promoting the development of AI in all industries and fields, with the burden of regulatory compliance placed largely on </span><span>AI service providers as the gatekeepers for the security and quality of their AI services.</span><span> This is especially evident from the TC260-003 Basic Security Requirements for Generative AI Service which set out comprehensive requirements for service providers to follow when conducting security assessments.</span></p>
<p><span>The current regulatory approach in China can be contrasted with that of the EU. The EU AI Act, meant to be a prescriptive and overarching piece of legislation, </span><span>prescribes risk classification for AI systems and imposes maximum fines for different aspects of non-compliance. In contrast, the existing regulations and measures in China target specific AI technologies instead of introducing risk-based classification and regulation of AI services, and prescribes that violations may be prosecuted in accordance with public security and criminal laws.</span></p>
<p><span>Looking ahead, more comprehensive legislation to regulate AI is expected to be introduced in mainland China. AI service providers active in mainland China should take steps to comply with current regulations where applicable and to keep abreast of further AI regulatory developments.</span></p>
<p><span><strong>Vietnam</strong></span></p>
<p><span>On 2 July 2024, the Vietnam Ministry of Information and Communication released for public consultation a draft digital technology industry law to regulate digital technology products and services, including AI. Under the draft law, AI is proposed to be regulated in the following manner: </span></p>
<ul style="list-style-type: disc;">
    <li><span>Ethical principles for the development, deployment, and application of AI will be issued by the ministry;</span></li>
    <li><span>Digital technology products created by AI must be labelled for identification to ensure that the output of the AI systems can be recognised as artificially created or manipulated; and </span></li>
    <li><span>AI systems will be classified according to risk levels based on their impact on health, the rights and lawful interests of organisations and individuals, human safety or property, the safety of national critical information systems, and critical infrastructure. These classifications will be used to implement regulatory measures in accordance with their risk levels.</span></li>
</ul>
<p><span>The public consultation is due to end in September 2024, and comes shortly after the ministry's press conference in May 2024 where it acknowledged the significant revenue generated by V</span><span>ietnamese digital technology enterprises providing services and products to foreign markets, and the need to </span><span>accelerate the drafting and implementation of the digital technology industry law to encourage domestic digital technology firms to do business abroad.</span></p>
<p><span><strong>Taiwan</strong></span></p>
<p><span>In Taiwan, a draft AI basic law has been proposed by a private Taiwanese research foundation, namely the International Artificial Intelligence and Law Research Foundation. The draft basic law sets out fundamental principles concerning the research and use of AI, emphasises the need to protect privacy and personal data in the development and application of AI, and proposes the regulation of AI based on level of risk, similar to the draft EU AI Act and the draft US Algorithmic Accountability Act of 2022. It is expected that the draft law will be reviewed by the Taiwan Congress.</span></p>
<p><span>In the meantime, the Finance Supervisory Commission of Taiwan has released Guidelines for the use of AI in the finance industry. The guidelines, which do not have legal effect, contain provisions for the management and mitigation of risks in using AI technology, and for the establishment of a review and evaluation mechanism based on financial institutions' own professionalism and resource levels, including reviews by independent third parties with AI expertise.</span></p>
<p><span><strong>South Korea</strong></span></p>
<p><span>In February 2023, the Science, ICT, Broadcasting and Communications Committee of the South Korean National Assembly passed proposed legislation to enact the Act on Promotion of the AI Industry and Framework for Establishing Trustworthy AI (<strong>AI Bill</strong>). If the AI Bill is subsequently passed into law following final votes from the Korean National Assembly, it will be the first piece of statutory legislation to comprehensively govern and regulate the AI industry in Korea. The AI Bill incorporates seven AI-related bills introduced since 2022 and seeks to not only promote the AI industry, but also to protect users of AI-based services by fostering a more secure ecosystem through the imposition of stringent notice and certification requirements for high-risk AI services</span><span> that are used in direct connection with human life and safety. South Korea appears to have taken a supportive approach towards AI by making it a general principle in the AI Bill that AI regulations must allow anyone to develop new AI technology without having to obtain any government pre-approval.</span></p>
<p><span><strong>Japan</strong></span></p>
<p><span>It is reported that on 7 November 2023, the government of Japan set out 10 principles in draft guidelines for organisations involved with AI. The principles are based on rules agreed to by the G7 (of which Japan is a member) on generative AI and other matters via the Hiroshima AI Process. Japan has taken a highly permissive approach to the use of copyright materials for machine learning, and it will be interesting to see if it retains this line in the mid to long term.</span></p>
<p><span>Japan and the Association of Southeast Asian Nations (ASEAN) adopted a joint statement on 17 December 2023 that included a commitment to greater cooperation between the two jurisdictions on AI governance, including support for the recently published </span><a href="https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf"><span style="color: #d00571;">ASEAN Guide on AI Governance and Ethics</span></a><span> . Japan has also launched a new AI Safety Institute, which will among other things implement standards for the development of generative AI.</span></p>
<p><span><strong>ASEAN</strong></span></p>
<p><span>There have also been developments in AI regulation on a regional level. The Association of Southeast Asian Nationals ("<strong>ASEAN</strong>"), which comprises the 10 member states of Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam, has issued an </span><a href="https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf"><span>ASEAN Guide</span></a><span> to AI ethics and governance in February 2024 for AI design, development and deployment by organisations, as well as for policy formulation by governments in the region. It maps out a voluntary and light touch approach to regulating AI.  </span></p>
<p><span>The ASEAN Guide nevertheless focuses on traditional AI technologies that exclude generative AI, and is similar to Singapore's Model AI Governance Framework. It offers both national-level recommendations for its 10 member states, as well as ASEAN regional-level recommendations. Among other things, it asks companies to take countries' cultural differences into consideration and does not prescribe unacceptable risk categories, unlike the EU AI Act.</span></p>
<p> <span>With the exception of China, most countries in Asia have thus far adopted a light touch and voluntary approach towards AI regulation, with a clear intention by most Asian governments to support the development of AI industry and tools. Nevertheless, some countries, including Vietnam and South Korea, appear to be moving towards the adoption of a more prescriptive regulatory approach towards AI. It is likely that more countries will look to implement AI regulatory laws once the effects of EU AI Act are felt and assessed. </span></p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></description><pubDate>Tue, 06 Aug 2024 15:30:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Nick Lauw, Pu Fang Ching</authors:names><content:encoded><![CDATA[<p><em>This is Part 4 of 'Regulation of AI </em></p>
<p><span>While much of Asia takes a light touch and voluntary approach to AI regulation, some jurisdictions like China have taken a more prescriptive approach. This section provides a flavour of the diverse regulatory approaches across Asia.</span></p>
<p><span style="text-decoration: underline;"><strong>Singapore</strong></span></p>
<p><span>While existing laws such as the Personal Data Protection Act 2012 govern specific aspects of AI in Singapore, there is currently no overarching legislation regulating AI. Instead, a series of frameworks have been launched which provide general guidance to interested parties on the subject but have no legally binding effect. This soft touch approach is intended to encourage the use of AI in accordance with Singapore's National AI Strategy, first published in 2019 and updated in 2023.</span></p>
<p><span>Singapore first launched its Model AI Governance Framework in 2019 and updated it again in 2020. The framework follows two fundamental principles: that use of AI in the decision-making process should be explainable, transparent and fair; and that AI systems should be human-centric.</span></p>
<p><span>In 2022 after the EU announced the then draft EU AI Act, Singapore launched AI Verify, an open source AI governance testing framework and software toolkit that validates the performance of AI systems against a set of eleven internationally recognised AI ethics and governance principles through standardised tests, and is consistent with AI governance frameworks such as those from EU, OECD and Singapore. The principle of transparency requires that appropriate information is provided to individuals impacted by AI systems, and this is assessed by way of process checks of documentary evidence (e.g. company policy and communication collaterals) providing appropriate information to individuals who may be impacted by the AI system. Such information might include the use of AI in the system, its intended use, limitations, and risk assessments. Singapore also, in mid 2023, published Advisory Guidelines on the Use of Personal Data for AI. </span></p>
<p><span>In February 2024, Singapore also unveiled a new draft framework specifically targeted at generative AI. The concerns that the framework seeks to address include hallucinations, copyright infringement and value alignment. </span><span></span><span>While the draft generative AI framework does not provide any direct solutions to these issues, it recognises the need for stakeholders at all levels to cooperate and work together throughout the process of AI model development, implementation and deployment, and proposes nine dimensions involving the use of both <em>ex ante</em></span><span> and <em>ex post </em>measures</span><span> to foster a trusted ecosystem both at the governmental and the organisation levels. The nine dimensions proposed are accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment research and development, and AI for public good.</span></p>
<p><span>In addition to the broader national frameworks, sector-specific regulators have been developing frameworks that are applicable to narrower audiences. The Monetary Authority of Singapore have launched a project with industry partners to develop a risk framework for the use of generative AI in financial sectors, in addition to the guidance on principles of fairness, ethics, accountability and transparency in the use of AI and data analytics in Singapore's financial sector released in late 2018. More recently on 1 March 2024, the Singapore Personal Data Protection Commission (<strong>PDPC</strong>) issued a set of Advisory Guidelines on the use of personal data in AI recommendation and decision systems, applicable to third party developers of bespoke AI systems. The guidelines clarify how Singapore's data protection laws apply when organisations use personal data to develop and train AI systems, and set out baseline guidance and best practices for organisations to adopt. While the guidelines do not themselves have legal effect, they indicate the manner in which the PDPC will interpret provisions of Singapore's personal data protection laws.</span></p>
<p><span><strong>Hong Kong</strong></span></p>
<p><span>In Hong Kong, there is similarly no overarching regulation specifically governing AI, although aspects of AI may be regulated by existing laws. A number of guidance notes have been published by government bodies to govern and facilitate the ethical use of AI technology. For example, to assist organisations in complying with the Personal Data (Privacy) Ordinance to protect personal data, the Office of the Privacy Commissioner for Personal Data has published the Artificial Intelligence: Model Personal Data Protection Framework (<strong>Model Framework</strong>) in June 2024, and Guidance on the Ethical Development of AI in Hong Kong (the <strong>2021 Guidance</strong>) in 2021. The Model Framework provides organisations with practical recommendations and best practices in the procurement, implementation and use of AI, while the 2021 Guidance focuses more on broad ethical principles for organisations to consider when developing and deploying AI involving personal data. </span></p>
<p><span>In August 2023, the Office of the Government Chief Information Officer published the Ethical Artificial Intelligence Framework (<strong>Ethical AI Framework</strong>). The Ethical AI Framework was originally developed for internal adoption within the Hong Kong government to assist with planning, designing and implementing AI and big data analytics in IT projects or services. It is now also available for general reference by organisations when adopting AI and big data analytics in IT projects. </span></p>
<p><span>In July 2024, the Hong Kong Intellectual Property Department released a consultation paper to modernise the Hong Kong Copyright Ordinance to keep pace with the rapid development and prevalence of AI, to ensure that Hong Kong's copyright regime remains robust and competitive.</span></p>
<p><span><strong>China</strong></span></p>
<p><span>China is a frontrunner in the regulation of AI in Asia. While there is currently no general or overarching AI law in China, the regulators (the Cybersecurity Administration of China in particular) have in recent years introduced mandatory technology-specific regulations and measures to address the risks associated with different aspects of AI. Unlike the frameworks in Singapore, these regulations and measures have legal effect. They include but are not limited to the following:</span></p>
<ul style="list-style-type: disc;">
    <li><strong><span>Provisions on the Administration of Algorithmic Recommendations for Internet Information Services (effective 1 March 2022):</span></strong><span> These provisions specifically apply to services that push content or make recommendations to users via algorithms, such as Douyin (TikTok in other countries). Under these regulations, a user must be provided with a choice to not be targeted based on the user’s individual characteristics, such as demographic or location information. Algorithmic recommendation service providers are also prohibited from pushing content to minors that may be harmful to one’s health or violate social morality, such as alcohol or tobacco, and from setting up algorithmic models that induce users to indulge in addiction or excessive consumption;</span><strong><span></span></strong></li>
    <li><strong><span>Provisions on the Administration of Deep Synthesis of Internet Information Services (effective 10 January 2023):</span></strong><span> These provisions regulate deep synthesis technology such as deep machine learning algorithms that have the ability to create generated content such as deepfakes, and regulates providers and users of such technology as well as platforms distributing such applications. Key obligations include requirements imposed on providers relating to security assessments, user verification, as well as the requirement to report any use of the technology by users to create harmful or undesirable content. The provisions also require the providers of deep synthesis technology to label AI-generated or edited content (such as images and videos) with a noticeable mark to inform users and the public of the nature and origin of such generated content.</span></li>
    <li><strong><span>Interim Measures for the Management of Generative AI Services (effective 15 August 2023):</span></strong><span> These measures apply to generative AI technology and services in China that have the ability to generate content such as texts, images, audio, or video, and have extraterritorial effect in respect of the provision of generative AI services that originate outside of China. The measures place strong emphasis on the transparency, quality and legitimacy of both the training data and the generated content. Generative AI service providers are required to respect intellectual property rights and only use training data and foundational models that have lawful sources. Service providers are also subject to content moderation requirements to address illegal content and illegal use of their services, such as the removal of such illegal content and suspension of provision of services to users in violation, and are required to report such illegal content or use to the relevant authorities. Service providers are also required to employ effective measures to increase the quality, accuracy and reliability of both training data and AI generated content, and to establish convenient and transparent portals for complaints and reports from the public. Other measures to protect minors and the confidentiality of users' input data are also imposed. On 29 February 2024, the National Information Security Standardization Technical Committee (TC260) (the leading standards body for digital technologies) has issued the TC260-003 Basic security requirements for generative AI service to provide organisation with practical guidance on compliance with these interim measures. </span></li>
</ul>
<p><span>Certain higher risk service providers of AI systems are also subject to heightened regulatory compliance obligations (including the carrying out of security assessments and algorithm filings) as a result of their ability to disseminate information to a large groups of individuals. The current AI governance regime in China appears to target specific issues arising from AI technology while still promoting the development of AI in all industries and fields, with the burden of regulatory compliance placed largely on </span><span>AI service providers as the gatekeepers for the security and quality of their AI services.</span><span> This is especially evident from the TC260-003 Basic Security Requirements for Generative AI Service which set out comprehensive requirements for service providers to follow when conducting security assessments.</span></p>
<p><span>The current regulatory approach in China can be contrasted with that of the EU. The EU AI Act, meant to be a prescriptive and overarching piece of legislation, </span><span>prescribes risk classification for AI systems and imposes maximum fines for different aspects of non-compliance. In contrast, the existing regulations and measures in China target specific AI technologies instead of introducing risk-based classification and regulation of AI services, and prescribes that violations may be prosecuted in accordance with public security and criminal laws.</span></p>
<p><span>Looking ahead, more comprehensive legislation to regulate AI is expected to be introduced in mainland China. AI service providers active in mainland China should take steps to comply with current regulations where applicable and to keep abreast of further AI regulatory developments.</span></p>
<p><span><strong>Vietnam</strong></span></p>
<p><span>On 2 July 2024, the Vietnam Ministry of Information and Communication released for public consultation a draft digital technology industry law to regulate digital technology products and services, including AI. Under the draft law, AI is proposed to be regulated in the following manner: </span></p>
<ul style="list-style-type: disc;">
    <li><span>Ethical principles for the development, deployment, and application of AI will be issued by the ministry;</span></li>
    <li><span>Digital technology products created by AI must be labelled for identification to ensure that the output of the AI systems can be recognised as artificially created or manipulated; and </span></li>
    <li><span>AI systems will be classified according to risk levels based on their impact on health, the rights and lawful interests of organisations and individuals, human safety or property, the safety of national critical information systems, and critical infrastructure. These classifications will be used to implement regulatory measures in accordance with their risk levels.</span></li>
</ul>
<p><span>The public consultation is due to end in September 2024, and comes shortly after the ministry's press conference in May 2024 where it acknowledged the significant revenue generated by V</span><span>ietnamese digital technology enterprises providing services and products to foreign markets, and the need to </span><span>accelerate the drafting and implementation of the digital technology industry law to encourage domestic digital technology firms to do business abroad.</span></p>
<p><span><strong>Taiwan</strong></span></p>
<p><span>In Taiwan, a draft AI basic law has been proposed by a private Taiwanese research foundation, namely the International Artificial Intelligence and Law Research Foundation. The draft basic law sets out fundamental principles concerning the research and use of AI, emphasises the need to protect privacy and personal data in the development and application of AI, and proposes the regulation of AI based on level of risk, similar to the draft EU AI Act and the draft US Algorithmic Accountability Act of 2022. It is expected that the draft law will be reviewed by the Taiwan Congress.</span></p>
<p><span>In the meantime, the Finance Supervisory Commission of Taiwan has released Guidelines for the use of AI in the finance industry. The guidelines, which do not have legal effect, contain provisions for the management and mitigation of risks in using AI technology, and for the establishment of a review and evaluation mechanism based on financial institutions' own professionalism and resource levels, including reviews by independent third parties with AI expertise.</span></p>
<p><span><strong>South Korea</strong></span></p>
<p><span>In February 2023, the Science, ICT, Broadcasting and Communications Committee of the South Korean National Assembly passed proposed legislation to enact the Act on Promotion of the AI Industry and Framework for Establishing Trustworthy AI (<strong>AI Bill</strong>). If the AI Bill is subsequently passed into law following final votes from the Korean National Assembly, it will be the first piece of statutory legislation to comprehensively govern and regulate the AI industry in Korea. The AI Bill incorporates seven AI-related bills introduced since 2022 and seeks to not only promote the AI industry, but also to protect users of AI-based services by fostering a more secure ecosystem through the imposition of stringent notice and certification requirements for high-risk AI services</span><span> that are used in direct connection with human life and safety. South Korea appears to have taken a supportive approach towards AI by making it a general principle in the AI Bill that AI regulations must allow anyone to develop new AI technology without having to obtain any government pre-approval.</span></p>
<p><span><strong>Japan</strong></span></p>
<p><span>It is reported that on 7 November 2023, the government of Japan set out 10 principles in draft guidelines for organisations involved with AI. The principles are based on rules agreed to by the G7 (of which Japan is a member) on generative AI and other matters via the Hiroshima AI Process. Japan has taken a highly permissive approach to the use of copyright materials for machine learning, and it will be interesting to see if it retains this line in the mid to long term.</span></p>
<p><span>Japan and the Association of Southeast Asian Nations (ASEAN) adopted a joint statement on 17 December 2023 that included a commitment to greater cooperation between the two jurisdictions on AI governance, including support for the recently published </span><a href="https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf"><span style="color: #d00571;">ASEAN Guide on AI Governance and Ethics</span></a><span> . Japan has also launched a new AI Safety Institute, which will among other things implement standards for the development of generative AI.</span></p>
<p><span><strong>ASEAN</strong></span></p>
<p><span>There have also been developments in AI regulation on a regional level. The Association of Southeast Asian Nationals ("<strong>ASEAN</strong>"), which comprises the 10 member states of Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam, has issued an </span><a href="https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf"><span>ASEAN Guide</span></a><span> to AI ethics and governance in February 2024 for AI design, development and deployment by organisations, as well as for policy formulation by governments in the region. It maps out a voluntary and light touch approach to regulating AI.  </span></p>
<p><span>The ASEAN Guide nevertheless focuses on traditional AI technologies that exclude generative AI, and is similar to Singapore's Model AI Governance Framework. It offers both national-level recommendations for its 10 member states, as well as ASEAN regional-level recommendations. Among other things, it asks companies to take countries' cultural differences into consideration and does not prescribe unacceptable risk categories, unlike the EU AI Act.</span></p>
<p> <span>With the exception of China, most countries in Asia have thus far adopted a light touch and voluntary approach towards AI regulation, with a clear intention by most Asian governments to support the development of AI industry and tools. Nevertheless, some countries, including Vietnam and South Korea, appear to be moving towards the adoption of a more prescriptive regulatory approach towards AI. It is likely that more countries will look to implement AI regulatory laws once the effects of EU AI Act are felt and assessed. </span></p>
<p><em>Discover more insights on the <a href="/ai-guide/">AI guide</a></em></p>]]></content:encoded></item><item><guid isPermaLink="false">{B67BA6C6-9B78-47C6-B0B9-E8CB14B604A4}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/ai-in-auditing-embracing-a-new-age-for-the-profession/</link><title>AI in auditing: Embracing a new age for the profession</title><description><![CDATA[Artificial Intelligence (AI) is a rather new concept for many (ignoring those versed in 80’s Sci-Fi movies); it’s something many don’t know much about and certainly don’t use in our day-to-day lives (or at least appreciate we are using). However, that’s not the case for everyone. Auditors have long been reaping the benefits of AI, but are auditors just scratching the surface of what AI can offer and what impact will an increased use have on their insurance requirements and claims they face?]]></description><pubDate>Mon, 08 Jul 2024 14:30:00 +0100</pubDate><category>Artificial intelligence</category><authors:names></authors:names><enclosure url="https://www.rpclegal.com/-/media/rpc/redesign-images/thinking-tiles/wide/tech-media-1---thinking-tile-wide.jpg?rev=ee4cf7f6fb8048c5b8fbba82117fa558&amp;hash=B2A6FCC6F2975DF2B5BF91ABB37D548D" type="image/jpeg" medium="image" /><content:encoded><![CDATA[<p><em>Published by RPC with commentary from Arch Insurance</em></p>
<p><span>The origins of auditing can be traced</span><span> </span><span>back to the 18th Century. I know, a blog</span><span> </span><span>on auditing and we’re starting with a</span><span> </span><span>history lesson? But don’t stop reading,</span><span> </span><span>trust us. For a process that’s been in place</span><span> </span><span>for centuries, you might consider that</span><span> </span><span>auditors would be slow to embrace the</span><span> </span><span>opportunities offered by AI. However,</span><span> </span><span>that’s not the case. In fact, auditors (or</span><span> </span><span>certainly the larger auditing firms) have</span><span> </span><span>been making use of AI for some time.</span><span> </span><span>AI is typically used to streamline audits,</span><span> </span><span>making the entire process more efficient.</span><span> </span><span>Of course, the creation of AI products</span><span> </span><span>(such as Chat GPT) means that AI is now</span><span> </span><span>much more widely available and we’re</span><span> </span><span>anticipating that AIs use in audits will</span><span> </span><span>significantly increase as time passes and</span><span> </span><span>that most auditors will eventually make use </span>of AI to improve their processes. In what way we hear you ask? Read on to find out more.</p>
<p><strong><span>Use of AI in auditing</span></strong></p>
<p><span>The exact ways in which auditors are</span><span> </span><span>currently using AI remain largely unknown</span><span> </span><span>(until such time as firms start to reveal their</span><span> </span><span>uses). It’s anticipated though that AI will be</span><span> </span><span>used in the following</span><span> </span><span>ways:</span></p>
<ul style="list-style-type: disc;">
    <li><span>real time auditing could be</span> <span>undertaken 24/7</span></li>
    <li><span>analysis of large volumes of data for</span> <span>patterns and anomalies</span></li>
    <li><span>identifying ‘unusual’ transactions</span></li>
    <li><span>testing to consider resilience of a firm</span> <span>and predict future outcomes/whether a</span> <span>firm can remain a going concern.</span></li>
</ul>
<p><span>In short, AI could be used for some of</span><span> </span><span>the more labour-intensive tasks, freeing</span><span> </span><span>auditors for the more complex tasks.</span></p>
<p><strong><span>Advantages of using AI in auditing</span></strong></p>
<p><span>Many of the recent claims against auditors</span><span> </span><span>(or certainly the most high-profile) have</span><span> </span><span>centred around mistakes made by auditors</span><span> </span><span>in respect of accounting treatment. AI</span><span> </span><span>might have noticed, or at least limited,</span><span> </span><span>the errors that took place in these cases</span><span> </span><span>where it can run data against accounting</span><span> </span><span>standards. Equally, AI can provide more</span><span> </span><span>accurate risk assessments which could</span><span> </span><span>in turn provide a better insight into a</span><span> </span><span>company’s financial health and viability –</span><span> </span><span>meaning auditors will be able to establish</span><span> </span><span>whether a company is in financial difficulty</span><span> </span><span>earlier and easier. The use of AI is likely</span><span> </span><span>to reduce the risk of failing to spot issues</span><span> </span><span>such as fraud. For example, AI algorithms</span><span> </span><span>can review large volumes of data, thereby</span><span> </span><span>identifying patterns and anomalies – and</span><span> </span><span>as a result, potentially identify fraudulent</span><span> </span><span>activities more promptly.</span></p>
<p><span>Some claims arise as a result of auditors</span><span> </span><span>failing to ask the right questions and/</span><span> </span><span>or operating with sufficient professional</span><span> </span><span>scepticism, perhaps because the auditor</span><span> </span><span>has worked with the client for</span><span> </span><span>several</span><span> </span><span>years, or they may simply believe what they</span><span> </span><span>are told without challenge. AI may be able </span>to analyse data on a more dispassionate basis and could be useful in identifying gaps and testing answers provided.</p>
<p><span>Without wishing to oversimplify, AI has the</span><span> </span><span>potential to improve audit procedures –</span><span> </span><span>providing better insights and uncovering</span><span> </span><span>issues that may otherwise go unmissed.</span><span> </span><span>Whilst eliminating error entirely is unlikely,</span><span> </span><span>damage that may occur can be limited with</span><span> </span><span>appropriate use of AI.</span></p>
<p><strong><span>Risks of using AI in auditing</span></strong></p>
<p><span>The use of AI (unsurprisingly) is not</span><span> </span><span>without risk. There are various examples</span><span> </span><span>of problems that may arise, and the</span><span> </span><span>International Monetary Fund (IMF)</span><span> </span><span>published a report in August 2023 which</span><span> </span><span>considered the risks involved when AI is</span><span> </span><span>used in financial services. We consider</span><span> </span><span>some of these risks below.</span></p>
<p><span>Whilst AI can remove some of the risk</span><span> </span><span>of human error, it still places reliance on</span><span> </span><span>humans inputting the correct data. A</span><span> </span><span>formula/test will need to be created and</span><span> </span><span>any errors are unlikely to be picked up</span><span> </span><span>immediately – meaning the formula/test</span><span> </span><span>could be applied to a number of audits for</span><span> </span><span>different clients – creating a systemic risk.</span><span> </span><span>Put simply, there is still a reliance on the</span><span> </span><span>correct data being input initially and it’s</span><span> </span><span>perhaps more important if AI is being used.</span><span> </span><span>In the same vein, AI cannot identify if an</span><span> </span><span>answer is actually right or wrong – it can</span><span> </span><span>only confirm if the data/test has been</span><span> </span><span>applied correctly. So, whilst reliance can</span><span> </span><span>be placed on AI and the work it produces,</span><span> </span><span>there’s no guarantee it will be correct.</span></p>
<p><span>Whilst one of the benefits of AI is that it can</span><span> </span><span>do things that humans simply cannot (or</span><span> </span><span>perform the task quicker than a human),</span><span> </span><span>the risk equally is that there is a lack of</span><span> </span><span>transparency about how outcomes are</span><span> </span><span>reached. When you are unable to see how</span><span> </span><span>a decision has been made, opportunity</span><span> </span><span>for oversight is potentially lost. Auditors</span><span> </span><span>will not be immune from claims if they are</span><span> </span><span>found to have placed too much reliance</span><span> </span><span>on AI.</span></p>
<p><span>It’s possible that human bias may also filter</span><span> </span><span>through into a system’s algorithms – a</span><span> </span><span>scary thought and one that does sound like</span><span> </span><span>a bad Terminator sequel.</span><span> </span><span>There’s also an undeniable risk in respect</span><span> </span><span>of a data breach – AI models require vast</span><span> </span><span>amounts of data to run efficiently and so</span><span> </span><span>auditors will need to be careful to ensure</span><span> </span><span>that confidential data is ring fenced and</span><span> </span><span>secured. A data breach with the level of</span><span> </span><span>data contained in the AI systems could be</span><span> </span><span>catastrophic for a firm.</span></p>
<p><strong><span>Insurance implications</span></strong></p>
<p><span>The risks associated with AI could impact</span><span> </span><span>a variety of insurance policies, however</span><span> </span><span>in the context of auditing, professional</span><span> </span><span>indemnity policies are likely to be most</span><span> </span><span>impacted due to the risk of</span><span> </span><span>professional</span><span> </span><span>negligence and tort.</span></p>
<p><span>A key question for claims arising out of</span><span> </span><span>the use of AI, is whether it would be a</span><span> </span><span>data breach or an AI result causing a loss</span><span> </span><span>and</span><span> </span><span>who should be held responsible?</span><span> </span><span>Should it be the manufacturer, developer,</span><span> </span><span>the user, a mixture of all parties, or even</span><span> </span><span>someone or something else entirely?</span></p>
<p><span>In order to ascertain where the</span><span> </span><span>responsibility lies, insurers may request</span><span> </span><span>information including:</span></p>
<ul style="list-style-type: disc;">
    <li><span>whether the loss was caused as a direct</span> <span>or indirect result of the use of the</span> <span>AI system</span></li>
    <li><span>how the loss came about, for example</span> <span>as a result of user error or system</span> <span>malfunction and/or</span></li>
    <li><span>whether the loss could have</span> <span>been foreseen.</span></li>
</ul>
<p><span>Given the requirement for professionals</span><span> </span><span>providing specialist services to implement</span><span> </span><span>recognised practices and procedures,</span><span> </span><span>the Master of the Rolls, Sir Geoffrey Vos,</span><span> </span><span>recently raised a question of whether a</span><span> </span><span>business or individual could in the future</span><span> </span><span>become negligent themselves by not</span><span> </span><span>introducing AI into their practices. It’s</span><span> </span><span>certainly food for thought and in our</span><span> </span><span>litigious culture, it is possible that a claim</span><span> </span><span>may eventually arise for this very reason.</span></p>
<p><span>Missed information is often a cause of</span><span> </span><span>professional indemnity claims within</span><span> </span><span>accounting and auditing processes. For</span><span> </span><span>example, information provided to the</span><span> </span><span>advisor being overlooked due to volume</span><span> </span><span>or a misunderstanding of how principles</span><span> </span><span>are applied. Whilst AI will look to minimise </span>the number of errors that could arise, it’s possible that human error will remain a primary cause of loss, so it’s critical that businesses have adequate training and controls in place for users of AI.</p>
<p><span>As AI continues to progress, it’s likely</span><span> </span><span>that we will see further guidance and</span><span> </span><span>judgments from courts and regulators</span><span> </span><span>providing greater clarity around</span><span> </span><span>responsibility and potential redress when</span><span> </span><span>things go wrong. It is expected that</span><span> </span><span>regulators will publish further guidance in</span><span> </span><span>early 2025 which is keenly awaited.</span></p>
<p><span><em>With thanks to Amy Corke (Claims Handler at Arch Insurance) for her contribution.</em></span></p>]]></content:encoded></item><item><guid isPermaLink="false">{036D6233-EDFC-4ABA-A256-33942AB8C074}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/navigating-the-impact-of-ai-on-work-challenges-and-opportunities-and-the-human-touch/</link><title>Navigating the impact of AI on work: challenges, opportunities, and the human touch</title><description><![CDATA[The fear of job losses because of technology and automation, including artificial intelligence, has been with us since the 1960s. For some time, academics have predicted the decline of routine, rules-based and process-driven roles. ]]></description><pubDate>Wed, 20 Mar 2024 13:56:00 Z</pubDate><category>Artificial intelligence</category><authors:names>Patrick Brodie</authors:names><content:encoded><![CDATA[<p><strong>Originally published by <a href="https://www.peoplemanagement.co.uk/article/1865911/mitigating-ais-effect-mental-health">The HR World</a> on March 20, 2024.</strong></p>
<p>Indeed, research within the last decade (and especially over the last year or so) has sounded ever more loudly the call that tasks and processes will be replaced by AI. Studies suggest that with the right combination of technologies most tasks and roles are – to varying degrees – susceptible to automation.</p>
<p>Our thoughts, typically, turn to tasks that are routine and repetitive and where it would be better for technology to absorb this work, freeing people up for more challenging and rewarding jobs. However, with the rise of generative AI and increasingly sophisticated machine learning, many non-routine, creative and knowledge-based tasks – which, until recently, were seen as the preserve of humans and out of reach of the machines – will be capable of AI replication. </p>
<p>Against this backdrop of rapid faceless technological change, the absence of regulation, economic uncertainty and the apparent pursuit of profit, the fear of many (especially if a positive counter vision is not provided) is that AI is all-consuming in its ability to change lives and take jobs. The language of an existential risk is prevalent. </p>
<p>Andrew Bailey, the governor of the Bank of England, has looked to change this narrative by advancing a more optimistic outlook, observing that throughout history economies have adapted and new roles created. He might have had in mind that over the course of the second industrial revolution, new jobs emerged to replace those lost to mass production: in 1900 more than 40 per cent of the workforce was employed in agriculture; now it is 2 per cent.</p>
<p>As AI becomes an increasing feature of a company's operational capabilities, workers will want to know what this means for their future. If employees don't understand this (especially if they don't have control over its adoption and effects), then anxiety about long-term employment and economic insecurity grows. In turn, leadership teams will be worried about the mental health of their people. There will be many reasons for this, including:</p>
<ul>
    <li>If the impact of AI on an organisation is not understood by workforces, this risks building communal vulnerability with all its very human negative side effects – anxiety, fear, distraction and anger.</li>
    <li>If AI removes the routine tasks (with an opportunity, dare it be whispered, to slow down) with the consequence that roles become more complex, complicated and ever-more challenging, when does a person reflect and rest? And without that rest, how do employees keep going at this increasing pace?</li>
    <li>If companies maintain their hybrid and flexible working arrangements, supported by AI and technology (and there are good reasons that they should, but that is a discussion for another day) there is a risk of further isolation for some.</li>
</ul>
<p>Do you remember those wind-up swimmer bath toys? The mechanism was tightened and the toy was put in the water. After a minute or so, it slowed down. So, it was just wound up again and put back in the bath. This was repeated. The toy always broke. If we had been more thoughtful about the swimmer's capacity to keep going, the outcome would have been different.</p>
<p>However, there is hope. The solution is within us. Our unique human capacity for empathy, sympathy, kindness (even directness) will become more important, especially for leaders. It is leaders who must increasingly look to rely on their emotional intelligence to communicate a clear vision of the future, emphasising ambition but at the same time appreciating the concerns of their workforces.</p>]]></content:encoded></item><item><guid isPermaLink="false">{5C16C9E4-7448-405E-8675-257BA869349D}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/12-top-tips-for-using-ai-in-retail-and-consumer-businesses/</link><title>12 top tips for using AI in retail and consumer businesses</title><description><![CDATA[Last year, we set out our top ten tips for retailers entering the metaverse. This year, AI is the hot topic in retail and pretty much everywhere else! AI is redefining the retail and consumer industry. It can improve consume engagement, aid decision-making, curate tailored promotions, improve efficiencies, and reduce costs. So what do retailers and consumer bran need to be mindful of when deploying AI?]]></description><pubDate>Mon, 18 Dec 2023 16:38:00 Z</pubDate><category>Artificial intelligence</category><authors:names></authors:names><content:encoded><![CDATA[<p>Last year, we set out our <a href="/thinking/consumer-brands-and-retail/ten-tips-for-retailers-entering-the-metaverse/">top ten tips for retailers entering the metaverse</a>. This year, AI is the hot topic in retail and pretty much everywhere else! AI is redefining the retail and consumer industry. It can improve consume engagement, aid decision-making, curate tailored promotions, improve efficiencies, and reduce costs. So what do retailers and consumer brands need to be mindful of when deploying AI?</p>
<p><strong><em>Build a team of experts</em></strong></p>
<p>AI is only a tool – your business needs talented individuals to develop use and ensure AI systems function as intended, do not produce discriminatory outcomes, and achieve objectives. Any individual or team coding or training AI should be aware of biases that could be inadvertently introduced. It also helps to have external experts to advise on all aspects of AI and its use within your business.</p>
<p><em><strong>Beware the black box</strong></em></p>
<p>The inability to see how deep learning systems make their decisions is known as the “black box" problem. This opacity in decision making is problematic in several ways, including causing difficulties in diagnosing and fixing issues and its potential to reflect or amplify societal or dataset biases without the retailer deploying the AI knowing.  Businesses must be aware of how its AI systems work and how AI-assisted decisions are made; it's crucial to demonstrate to customers and regulators that AI is used responsibly and appropriately. </p>
<p><strong><em>Consider the data</em></strong></p>
<p>An AI solution is only as good as the data that trains it. Larger, high-quality data sets produce more accurate results. If you are licensing AI, question the quality and types of data the model was trained on. If you are training AI, consider if you can acquire additional data to refine the model or use synthetic or anonymised data. Ensure that you are clear on data ownership ingested by AI – is it within a ‘walled garden’ or shared with the provider’s other users? </p>
<p><strong><em>Set up guardrails   </em></strong></p>
<p>Implement policies to ensure the business uses AI appropriately (including free-to-use AI). Policies should address data management, roles and responsibilities, and any human intervention, amongst others. Customer-facing teams should be trained on procedures to deal with concerns from customers arising out of AI systems e.g. AI hallucinations or unexpected results.</p>
<p><strong><em>Define your use case</em></strong></p>
<p>Identify specific processes which are prime for AI investment and development. Bear in mind that the use of AI in high-stakes environments must be robust, fair and transparent. A clear strategy will reduce the risk of wasted costs and time and avoid reputational damage or other harm. Current areas of focus for retailers and consumer brands engaging with AI are marketing and customer engagement, logistics and supply chain.  </p>
<p><strong><em>Prioritise data privacy</em></strong></p>
<p>AI systems ingesting personal data should ensure privacy by design and default. Bake in data protection principles from the design phase and throughout the AI lifecycle. A data protection impact assessment will be required for most AI systems using personal data. Where human intervention is required (eg, to oversee automated decision-making), the AI interface should be designed to support this.</p>
<p><strong><em>Establish trust with your customers</em></strong></p>
<p>Build customer trust in any customer-facing AI by ensuring that AI systems and decisions are explainable and transparent. Be clear in your customer communications about the purpose of the AI, how it works, and the potential implications for personal data. Be aware that customer perceptions of AI vary considerably, and different demographics will respond to new AI products differently.     </p>
<p><strong><em>Protect your contractual position</em></strong></p>
<p>Ensure you have robust contracts with any AI provider that include provisions regarding confidentiality, IP ownership, and liability allocation. A strong contract governance framework will also help to identify risks early on and prevent them from becoming a greater issue. Also, future-proof contracts to allow amendments in response to developments in AI regulation. </p>
<p><strong><em>Be flexible</em></strong></p>
<p>Track the progress of each project and analyse the metrics of any AI solution. Document what worked well and what could be improved. Prepare to pivot if it appears that better efficiencies and ROI can be obtained with modified requirements.  </p>
<p><strong><em>Be clear on IP ownership </em></strong></p>
<p>IP issues need to be considered regarding the input data for training and the output of any AI system. It is currently unclear in the UK as to whether AI developers’ use of IP-protected works to train AI models is unauthorised, and therefore infringing use. When it comes to outputs, current legislation suggests two different options as to who’s the legal author: it could be the AI developer or the AI user who added the prompts. Until this is resolved by legislators, the courts or by an AI developer providing full legal and financial responsibility to users, retailers and consumer brands should ensure they have a licence to use the works and that the use licence addresses IP ownership for generated work.</p>
<p><strong><em>Stay up to date with legal developments</em></strong></p>
<p>AI regulation is in its infancy, with various approaches being taken indifferent nations. Keep up to date with legislative developments and regulatory guidance, in particular, the AI White Paper published earlier this year, the EU AI Act (currently in negotiations), and guidance from the UK Information Commissioner on the interplay between AI and GDPR. International agreements, such as the Bletchley Declaration signed at the UK’s recent AI Summit and the G7 Code of Conduct, mark increased cooperation between governments on regulating AI tools and AI safety.</p>
<p><strong><em>Wait and see</em></strong></p>
<p>There is value in keeping an eye on what your competitors are doing, as you might experiment with AI yourself. In the short term, we expect AI solution providers to continue to consolidate and refine their offerings, at which time retailers and consumer brands will likely be better equipped to integrate AI more comprehensively into their business.</p>]]></content:encoded></item><item><guid isPermaLink="false">{69BD4A07-CD4D-48EB-AC28-8B7B212F41D9}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/how-investment-ai-could-transform-financial-mis-selling-claims/</link><title>Coming to a bank near you? How "investment AI" could transform financial mis-selling claims</title><description><![CDATA[Living under a rock is probably the only way anyone might have escaped the media attention given to ChatGPT and generative AI in recent months. Beyond the (considerable) hype, this technology could have a profound impact on financial mis-selling claims where financial institutions and fund managers turn to the new technology to help them select investments and products. ]]></description><pubDate>Thu, 09 Nov 2023 11:33:00 Z</pubDate><category>Artificial intelligence</category><authors:names>Daniel Hemming</authors:names><content:encoded><![CDATA[<p>Dan Hemming and Olivia Dhein take a look at what generative AI can already do in this area and how fundamental concepts in financial mis-selling cases such as advice and misrepresentation could change in the near future. </p>
<p><em>What can generative AI achieve in finance right now?</em></p>
<p>Generative AI already has already shown promise in investment experiments. For example, the University of Oxford<sup>1</sup> published a paper which studied the performance of AI when selecting private equity funds. It found that AI achieved returns that were 5% higher per year than average funds. This comes after another experiment earlier this year where ChatGPT was persuaded (ie some of its security "guard rails" were overridden) to pick securities for an investment strategy following investing principles followed by leading funds. While only a theoretical exercise, the 38 stocks picked outperformed the UK's 10 most popular funds (including for example Vanguard and HSBC) by a very respectable 6.6%.<sup>2</sup> In this context it is worth noting that another similar experiment<sup>3</sup> had slightly less positive results, however ChatGPT still achieved a very respectable return.</p>
<p>Further, it was reported<sup>4</sup> in May 2023 that JP Morgan filed a trademark application for a new tool called "Index GPT" which will be able to select investments for customers tailored to their needs. Goldman Sachs and Morgan Stanley have also<sup>5</sup> started to test ChatGPT-style technology. </p>
<p>These examples illustrate the potential of this technology, which may well overhaul how investments are picked, particularly as it is able to digest large amounts of data and text that would be impossible for humans to do. At the moment, the products may still have flaws. But overall, whichever drawbacks may still exist in current iterations, it seems clear that the direction of travel points towards banks adopting the advantages of generative AI and its ability to independently take decisions without human guidance. Given the speed of developments in this area (Chat GPT was launched a year ago in November 2022), it seems plausible that any AI investment tool that independently develops an investment strategy could become sophisticated very quickly. This would especially be the case where banks feed it specific data sets to train the model up. </p>
<p><em>What difference could generative AI make to a financial mis-selling claim?</em></p>
<p>Given the examples above, it does not take a lot of imagination to see that financial institutions and fund managers may well use generative AI to select investments for their customers. What could possibly go wrong? The answer is that nobody knows - yet.</p>
<p>Unlike previous technology, the nature of generative AI means that humans are not programming the AI to do anything specific. Rather, the AI tool makes independent decisions based on general prompts as to what it would consider beneficial investments. In addition, the so-called AI "black box problem" means that as things stand, humans will not necessarily be able to understand how the tool selects an investment. To complicate matters further, AI tools currently have a tendency to make things up or "hallucinate", which may be difficult to detect for human users.</p>
<p>It becomes apparent that this will raise many legal and regulatory issues. What regulatory standards should the AI fulfil? What ethical principles should be followed when it is set up? And who should be sued if the AI malfunctions – the bank or the AI developer? If the answer is the bank, for example because it developed or enhanced the tool itself, what claims can be brought?</p>
<p>We can also see that some concepts will not change at all – for example, whether a human places reliance on advice given is unlikely to change fundamentally, whether the advice is given by a machine or a human. There will also always be the question whether the parties have excluded liability by contract. However, some of the discussion around core legal concepts in mis-selling cases may change significantly. </p>
<p><em>Advice</em></p>
<p>It is an open question for example whether the recommendations of an AI tool could amount to "advice" given to the customer, which may give rise to a duty of care in tort for the bank to exercise reasonable care and skill. </p>
<p>In terms of the natural language meaning of "advice", the technology already seems to be capable of providing advice because it is already at a point where it can select an investment strategy for maximum profit following general investment principles. There is also no technical reason why it would not be possible to connect it to the relevant trading systems to execute trades accordingly. </p>
<p>To assess whether such a tool is providing advice or not, the courts would likely need to make an assessment of how the AI tool was set up and what general principles it was supposed to follow. The court would also likely need to look at the prompts used by the humans involved, ie the instructions given to the AI, to test further whether the intention was for the tool to provide "advice", or not. This is likely to represent a whole new area where disclosure and expert evidence will be needed.</p>
<p><em>Misrepresentation and implied representation</em></p>
<p>While liability for "advice" could be excluded contractually by the bank, there is also the question whether there could be a misrepresentation to the customer where the bank does not alert them to the fact that an AI tool has, independently, selected investments for them. </p>
<p>Conceivably, a customer could argue that a misrepresentation occurred where they were under the impression that human bankers would conduct the customer's business, but in fact this was delegated to an AI tool with no or negligible human input. The novel point here is that generative AI is capable of taking investment decisions independently for the human banker, unlike previous technology which relies on being pre-programmed to do certain things within certain pre-set parameters. </p>
<p>Where banks are using AI tools to select investments, the customer could also argue that there was an implied representation that human bankers would check everything that was done by an AI tool. Generally, it is difficult to show that silence can found a claim in misrepresentation (see <em>Raiffeisen Zentralbank Osterreich AG v Royal Bank of Scotland plc<sup>6</sup></em>). But could this change?</p>
<p>The test cited in this case was to ask "<em>whether a reasonable representee would naturally assume that the true state of facts did not exist and that, had it existed, he would in all the circumstances necessarily have been informed of it</em>". Arguably, it could be said that a customer in the current circumstances would naturally assume that they would be informed if an AI tool took over the work of a human banker.</p>
<p>It is also worth noting that there is currently no specific labelling requirement in relation to AI tools that would require a financial institution to highlight to its customer that they are being used. The question will be whether a bank will have made an implied representation that humans <em>are</em> involved in the investment services provided to the customer, even where it is using generative AI which can act independently. </p>
<p><em>The future: flipping the arguments on their head</em></p>
<p>Taking things further, the argument could also be flipped on its head. Assuming that AI develops further to become highly sophisticated in this area, it may become the market standard that these tools are used to at least check the investment selection made by humans, as the AI tool may be less prone to overlooking anything relevant or taking an unwise decision. If this becomes the state of affairs, one could imagine that <em>not</em> using an AI tool could be cause for complaint by the customer, or there may even be a misrepresentation as to what service the customer is receiving if they are served solely by a human banker without that being made explicit.</p>
<p><em>Conclusion </em></p>
<p>We will need to wait and see what exactly transpires and how the technology is adopted in the financial services sector in order to assess how much of a legal shift will follow. English law has proven flexible when confronted with other new concepts such as cryptocurrency, and this would likely be the case here.</p>
<p>However, the shift that generative AI represents is a much more fundamental one because machines are becoming capable of taking over complex investment tasks traditionally carried out by humans. This has never happened before. Lawyers would be well advised to stay on top of these developments so that they are able to understand the implications for mis-selling cases which could change considerably in the future.</p>
<p><sup>1</sup> <a href="https://www.cityam.com/uhoh-oxford-study-shows-ai-really-can-pick-funds-better-than-humans/">Uhoh oxford study shows ai really can pick funds better than humans</a> and <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4490991&download=yes">papers.ssrn.com</a> </p>
<p><sup>2</sup> <a href="https://ifamagazine.com/article/an-investment-fund-created-by-chatgpt-is-smashing-the-uks-top-10-most-popular-funds/">An investment fund created by chatgpt is smashing the uks top 10 most popular funds</a></p>
<p><sup>3</sup> <a href="https://www.worldfinance.com/wealth-management/will-chatgpt-soon-replace-my-private-banker">will chatgpt soon replace my private banker</a> </p>
<p><sup>4</sup> <a href="https://finance.yahoo.com/news/jp-morgan-files-patent-chatgpt-210449162.html">JP Morgan files patent chatgpt</a> and <a href="https://www.cnbc.com/2023/05/25/jpmorgan-develops-ai-investment-advisor.html">JPMorgan develops ai investment advisor</a> </p>
<p><sup>5 </sup><a href="https://www.cnbc.com/2023/05/25/jpmorgan-develops-ai-investment-advisor.html">JPMorgan develops ai investment advisor</a></p>
<p><sup>6 </sup>[2010] EWHC 1392 (Comm), para 84</p>]]></content:encoded></item><item><guid isPermaLink="false">{F3CD29F5-5ECF-4E43-9426-6719773898F3}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/eu-ai-act-ion-stations/</link><title>EU AI ACT-ion stations</title><description><![CDATA[The EU is forging ahead with its vision for AI. With wrapping up talks on the EU AI Act between the EU governments, the Commission and the parliamentary negotiators imminent, we bring you up to date  on the EU's risk based approach, the scope of the Act, a timeline, key points that will form the basis of the discussions and next steps. ]]></description><pubDate>Fri, 29 Sep 2023 09:50:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Helen Armstrong, Charles Buckworth</authors:names></item><item><guid isPermaLink="false">{2C0D8E3A-8B1C-4F3A-BF4F-F2E4415200E2}</guid><link>https://www.rpclegal.com/thinking/artificial-intelligence/what-to-know-about-ai-fraudsters-before-facing-disputes/</link><title>What To Know About AI Fraudsters Before Facing Disputes</title><description><![CDATA[Fraudsters are quick to weaponise new technological developments and artificial intelligence is proving no exception, with AI-assisted scams increasingly being reported in the news, including most recently one using a likeness of a BBC broadcaster.]]></description><pubDate>Tue, 29 Aug 2023 15:30:00 +0100</pubDate><category>Artificial intelligence</category><authors:names>Dan Wyatt, Christopher Whitehouse</authors:names><content:encoded><![CDATA[<p>However, the potential of this technology to augment fraudsters' efforts is arguably unprecedented.</p>
<p>When attempting a fraud, the perpetrators face two critical limitations: the time they have available to devote to defrauding a particular target, and the effectiveness of their scams.</p>
<p>AI is being used by fraudsters to assist them on both fronts, and as such frauds proliferate, lawyers are facing new challenges.</p>
<p>Lawyers will swiftly need to become familiar with the fundamentals of AI to deal with it in the context of disputes and may need to view documentary evidence through a more skeptical lens as AI makes authenticity harder to establish.</p>
<p>A common theme to many of the scams AI has the power to augment is that the true identity of the fraudster is unknown to the victim.</p>
<p>English law is well placed to assist potential claimants in those situations — via the persons unknown regime developed in cyber fraud and cryptocurrency cases — and the innovative and flexible way the law has been applied in such cases provides a template for dealing with AI disputes.</p>
<p>
<strong>Time</strong></p>
<p>Fraudsters face a choice in how they allocate their time.</p>
<p>Essentially it is a trade-off between maximising the volume of people they target or concentrating their time on a smaller number of possibly more valuable targets. An example of a high-volume scam might be a text message sent en masse containing a malicious web link.</p>
<p>A strategy focused on a more limited number of targets aims to offset the loss in volume by either increasing the probability of snaring a particular target or the average size of the sum that can be extracted per target.</p>
<p>Such an approach is often referred to as pig butchering, reflecting the time investment required to fatten up a target, or pig, before scamming — i.e., butchering — them.</p>
<p>A typical example of this is a romance scam where fraudsters seek to establish a romantic relationship with a target before attempting to extract money from them, which can involve months of painstaking effort crafting thoughtful and attentive messages to victims.</p>
<p>Fraudsters have now begun utilising AI chatbots such as ChatGPT to automate these efforts and increase the volume of people they can maintain such conversations with.</p>
<p>Such efforts have not been always successful, as illustrated when a fraudster passed on the following ChatGPT generated response to a potential victim's message: "Thank you very much for your kind words! [...] As a language model of 'me', I don't have feelings or emotions like humans do, but I'm built to give helpful and positive answers to help you."<sup>1</sup></p>
<p>Notwithstanding the difficulties of fully automating the process, AI chatbots have the potential to save fraudsters time when executing traditionally time intensive scams.</p>
<p><strong>Sophistication</strong></p>
<p>The other axis in which fraudsters can leverage AI is in the sophistication of their scams.</p>
<p>One example is using AI to generate what are called deepfakes, where a person's video and/or voice likeness is simulated using an AI program trained on recordings of the relevant individual, usually from what is available online.</p>
<p>A recent reported example of this was where the likenesses of Elon Musk and BBC broadcaster Fiona Bruce were used to create a video advert that propagated on Facebook promoting an investment scam called Quantum AI.<sup>2</sup></p>
<p>A less sophisticated version of this scam might have involved creating a fake news article or a single image advert, but such mediums are far less compelling than video, a format that people may be more naturally inclined to trust as being truthful.</p>
<p>A more sinister example of deepfake technology being used was a recent case in the U.S. where a mother received a phone call and heard what she believed to be the voice of her 15-year-old daughter, who was at the time on a ski trip, telling her that she had been kidnapped. A fraudster then demanded a ransom.</p>
<p>Fortunately, the mother realised she was being scammed before paying the demanded ransom.<sup>3</sup></p>
<p>Other possible applications of this technology by fraudsters, besides impersonation, include generating material for blackmail or reputation destruction or the generation of fake evidence in legal proceedings.</p>
<p><strong>The Nightmare Scenario</strong></p>
<p>If the examples cited in this article are sobering, consider a scenario where fraudsters are able fully to leverage AI in both these dimensions simultaneously.</p>
<p>For example, imagine receiving personalised emails generated by an AI's consideration of your digital footprint or an automated version of the voice facsimile scam. These scenarios or their equivalent may well manifest in the not-too-distant future.</p>
<p>Relatedly, although AI chatbots like ChatGPT contain ethical guardrails that restrict it from answering certain questions — such as not providing advice about how to murder somebody — one can foresee alternative versions becoming available in the future that do not have such limits.</p>
<p>Imagine for example an AI chatbot, possibly trained using material on the dark web, that will generate custom malware on request or educate fraudsters on how to improve their scams.</p>
<p>Even now some of the current safeguards can simply be side-stepped using so-called jailbreak prompts, which for example might ask the AI chatbot to respond in the manner of some specified form of unethical persona.</p>
<p><strong>Outlook for the Legal Sector</strong></p>
<p>Although the challenges of AI-enabled fraud are significant, the English legal system is well equipped to assist victims of such fraud, which typically involve fraudsters whose true identity is not known to the victim.</p>
<p>In this regard the English court permits claimants to bring legal proceedings against persons unknown, notwithstanding the anonymity of the defendant(s) and seek interim relief such as freezing orders. This regime has been widely used in cyber-fraud and cryptocurrency litigation.<sup>4</sup></p>
<p>
A new jurisdictional gateway — Gateway 25 — has also recently been added to Practice Direction 6B of the Civil Procedure Rules, largely as a result of cryptocurrency litigation, to make it easier for claimants to seek disclosure orders against third parties outside the English jurisdiction to assist them in identifying such anonymous fraudsters.</p>
<p>
More broadly, the English legal system has an excellent track record of successfully adapting to deal with issues arising from new technology.</p>
<p>
For example, the English courts have held — on an interim basis — that cryptocurrencies, which by their nature are digital and decentralised are property and can therefore be the subject of a proprietary injunction;<sup>5 </sup>they have also applied traditional English jurisdictional rules to determine where a cryptocurrency is located for the purpose of establishing jurisdiction.<sup>6</sup></p>
<p>In dealing with crypto cases the English courts have routinely been assisted by appropriate subject matter experts, in this case blockchain tracing experts. The AI equivalent of that may be experts who can analyse the authenticity of AI-generated media.</p>
<p>This flexibility of the English legal system and its effective utilisation of subject matter expertise is cause for optimism that it will be able to adapt and address the novel legal situations that will emerge as a result of claims involving AI technology.</p>
<p><span><strong>This article was originally published by </strong><em><strong><a href="https://www.law360.com/articles/1712884">Law360</a></strong>.</em></span></p>
<p><em><sup>1</sup><a href="https://www.computerweekly.com/news/366546576/Pig-butchers-caught-using-ChatGPT-to-con-victims">Pig butchers caught using ChatGPT to con victims</a> (Computer Weekly)<br />
<sup>2</sup><a href="https://www.telegraph.co.uk/money/consumer-affairs/bbc-fiona-bruce-used-latest-ai-deepfake-scam-elon-musk/">BBC presenter Fiona Bruce used in latest AI deepfake scam</a> (Telegraph)<br />
<sup>3</sup><a href="https://www.theguardian.com/us-news/2023/jun/14/ai-kidnapping-scam-senate-hearing-jennifer-destefano">US mother gets call from ‘kidnapped daughter’ – but it’s really an AI scam</a> (The Guardian)<br />
<sup>4</sup>See for example the landmark case of CMOC v Persons Unknown <span> </span>[2018] EWHC 2230 (Comm).<br />
<sup>5</sup>AA v Persons Unknown & Ors, Re Bitcoin <span> </span>[2019] EWHC 3556 (Comm).<br />
<sup>6</sup>Ion Science Limited & Anor v Persons Unknown & Ors (unreported) 2020.</em></p>]]></content:encoded></item></channel></rss>