Generative AI models are improving at an unprecedented pace, able to generate text, images, videos and other outputs that are becoming increasingly indistinguishable from human-generated works. However, in the UK, several critical legal issues, such as the subsistence and ownership of IP rights in AI-generated outputs and the potential infringement of IP rights arising from the training and use of AI models, remain unresolved.
Below, we review the key developments of 2024 and consider potential shifts in the legal landscape in 2025.
Despite the rapid advance of AI, key questions remain around whether the use of IP-protected materials in each of the stages of generative AI development and use, such as training, input and output, amounts to infringement under UK law (and, if so, who is responsible for such infringement).
The UK Copyright, Designs and Patents Act 1988 (CDPA) does not directly address the use of copyright-protected materials in AI training. Although the CDPA does provide for specific exceptions to infringement, these are narrow in scope (e.g. fair dealing for research and parody), and the express exception for text and data mining (TDM) is limited to non-commercial research and private study; the UK has not adopted a broad US-style fair use defence. To what extent these exceptions may be available (if at all) is presently unclear.
A fundamental tension exists between those who wish to access large datasets for AI training and those who create the content. AI companies argue that AI development would be impossible without the ability to scrape data for training purposes. For the creative sector, by contrast, AI represents a potential existential threat. In a recent op-ed for Fortune, Getty's CEO, Craig Peters, outlining the company's reasons for pursuing IP infringement actions in the UK and the US, noted that copyright is the core of Getty's business and described the scraping of its content as "pure theft from one group for the financial benefit of another".1 A recent global study on the impact of emerging technologies on human creativity, commissioned by the International Confederation of Societies of Authors and Composers (CISAC), suggests that workers in the music and audiovisual sectors stand to lose a quarter of their income to AI over the next four years.2
Striking a balance between rights holders and AI innovators has proven difficult for legislators. The previous Conservative government's pro-innovation proposals to introduce a voluntary code enabling greater use of data to train AI models were abandoned after a strong backlash from the creative sector. The Labour government elected in 2024 has pledged to bring forward AI legislation in 2025, but has provided little detail on how it proposes to foster innovation in AI whilst protecting the creative industries.
In the absence of legislation, it will be left to the courts to shape the landscape. The most prominent case currently making its way through the High Court (following Stability AI's unsuccessful application for reverse summary judgment) is Getty Images v Stability AI, in which Getty claims that Stability AI scraped Getty's library of images and used them to train its text-to-image generative AI model, Stable Diffusion. The matter is listed for trial in the High Court in the summer of 2025 and is likely to be heard before a parallel action in the US. A decision is unlikely to be delivered before 2026 and may well be subject to appeal. While 2025 may bring new legislation and some judicial thinking on the issues of infringement and liability for infringement, ongoing debate and litigation on these issues are expected to persist for some time.
This reality has left many organisations with a choice: license content to AI developers or litigate to try to stop its use. Against this backdrop, 2024 witnessed the emergence of ethical AI businesses rooted in principles of transparency and fairness, willing to compensate content producers for the use of their works. A number of media organisations have capitalised on this shift by forging new revenue streams through content licensing agreements with such companies. During the last quarter of 2024, Sky UK, the Guardian and DMG Media each announced partnerships with the start-up Prorata, and Universal Music announced a deal with Klay Vision on a groundbreaking model for AI-generated music intended to respect the interests of both the industry and its creators. HarperCollins has also embraced this approach, allowing select titles to be used to train AI models with the explicit consent of authors. These developments may mark a crucial step towards aligning AI's rapid growth with the principles of fairness and respect for intellectual property, highlighting the potential for collaboration between technology and the creative industries in shaping a more sustainable future for both.
Equally pressing are questions surrounding IP rights in materials created by AI and the ownership of such rights.
The UK is one of the few countries in the world to provide specifically for copyright protection of computer-generated works. The CDPA provides that, in the case of a computer-generated work with no human author, the author is taken to be "the person by whom the arrangements necessary for the creation of the work are undertaken". However, the relationship between this provision and the broader requirement of originality under UK law, which typically requires the work to be the intellectual creation of a human author, remains unclear. This has led to ongoing uncertainty regarding the subsistence of copyright, the scope of protection and the ownership of rights in computer-generated works. Depending on whose intellectual effort was involved in making the arrangements, the work may be owned by the author of the prompt, the creators of the AI system, or both. As reported in our earlier article, the Government is seeking views on potential amendments to the CDPA with respect to computer-generated works to address this uncertainty.
Whilst the position remains unclear, businesses can mitigate the risks to some extent by clarifying ownership in contracts (for example, the terms and conditions on OpenAI's website state that the user will own all generated output, but place restrictions on the use of such output).
The position on the patentability of AI-generated inventions is clearer. Following the Supreme Court's decision in Thaler, the position in the UK is aligned with that in a number of other jurisdictions (including the US, Australia and the EPO): a human inventor is required for patentability. However, the level of human input required for AI-assisted inventions to qualify for patent protection remains unclear.
It is hoped that legislative reform will bring some clarity to these issues.