01/10/2025 | News release | Distributed by Public on 01/10/2025 10:38
The global AI regulation landscape is fragmented and rapidly evolving. Earlier optimism that global policymakers would enhance cooperation and interoperability within the regulatory landscape now seems distant. Instead, policy processes to regulate AI continue to progress throughout the world at different stages and under different models, ranging from policy statements to soft law to tabled or adopted legislation.
However, through our support of global businesses, we see the beginnings of a common global direction on how to minimize the risks of AI use and how to build the structures that address the core principles of safe and ethical AI development and use — principles that are becoming the cornerstones of global AI regulation. To develop these AI governance structures, businesses need to anticipate evolving legal requirements and regulatory approaches.
Driven by this increasing cohesion, new governance models and strategies for AI have emerged in both the public and private sectors, offering valuable frameworks for other organizations to follow. For example, the European Commission's AI governance initiatives offer models from which companies can draw inspiration to avoid reinventing the wheel. Leading global technology companies increasingly provide a benchmark through their publicly available standards and principles. While there is global convergence around fundamental ethical principles and values, organizations still need to remain cognizant of regional approaches to AI regulation and adapt their own frameworks accordingly. Understanding these diverse strategies is crucial for companies operating in multiple jurisdictions.
Africa - Contributors: Shahid Sulaiman, Davin Olen
Efforts to regulate AI are emerging across Africa. Leaders across the continent include Mauritius, which has released an AI strategy, along with Kenya and Nigeria, which are both consulting with stakeholders to develop national AI strategies. In South Africa, stakeholder engagement has increased since the release of a draft AI policy framework for discussion. Further, South Africa's Patent Office has recently registered an AI as a patent inventor, contrasting with rejections of the same application elsewhere. This decision flows from South Africa's formalities-based patent registration process and provides an important incentive for AI development in the region.
Asia-Pacific - Contributors: Michael Park, Matt Hennessy
In September 2024, the Australian government released a Voluntary AI Safety Standard comprising a number of AI guardrails to create best practice guidance for the use of AI. The government also proposed mandatory guardrails for AI in high-risk settings, which were subject to public consultation. It is possible Australia could enact legislation drawing upon some of the concepts in the EU AI Act, but it currently remains unclear how the government will proceed. In May 2024, the Singapore government introduced the Model AI Governance Framework for Generative AI, which details best practice guidance on responsible development, deployment and use of AI. China's Interim Measures for the Management of Generative AI Services commenced in 2023 and should continue to be observed as the region's first comprehensive binding regulation on generative AI.
Canada - Contributor: Chantal Bernier
Canada's direction emerges from the proposed Artificial Intelligence and Data Act (AIDA) and the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. With an election looming, AIDA has an uncertain future. The Voluntary Code commits the signatories to Accountability, Safety, Fairness and Equity, Transparency, Human Oversight and Monitoring, and Validity and Robustness.
European Union - Contributors: Giangiacomo Olivi, Chiara Bocchi
Europe's regulatory strategy reflects its commitment to safeguarding fundamental rights, promoting trust in AI and shaping a global regulatory standard.
The European Union is at the forefront of global efforts to regulate artificial intelligence with its landmark AI Act. The AI Act has been welcomed as the world's first comprehensive AI-specific legal framework, providing a legal definition of "AI system" and categorizing AI systems based on their potential risk to individuals and fundamental rights, focusing on the use of the technology rather than the technology per se.
Complementing the AI Act, the EU is advancing additional measures to address legal and liability challenges associated with AI. The proposed AI Liability Directive seeks to modernize non-contractual civil liability rules, ensuring they are equipped to handle the unique complexities of AI systems. Furthermore, the recent Revised Product Liability Directive extends liability to encompass software, AI systems and digital services that influence product performance - such as navigation tools in autonomous vehicles - bridging critical gaps in consumer protection.
Driven by this immediate and comprehensive legislation, new AI governance models are being deployed throughout the EU and will likely gain further traction from the newly established EU AI Office, which promotes the EU approach beyond the bloc's borders. Many businesses — but not all — are turning to governance models designed for the EU AI Act as their benchmark for managing compliance with developing global regulation.
Latin America - Contributor: Juanita Acosta
In Latin America, most countries only have soft law or equivalent instruments regarding the use of AI, except for Peru, which has implemented a regulation focused on principles and the promotion of AI usage.
Further details may be expected shortly, as several countries, such as Chile, Colombia, Brazil, Mexico, Panama, Peru and Costa Rica, are submitting bills and legal initiatives to regulate AI, particularly to protect personal data and intellectual property.
Latin America will continue to be a key region to watch in 2025.
United Kingdom - Contributor: Simon Elliott
AI regulation in the United Kingdom finds itself in a challenging position.
Based on a clear vision in the UK National AI Strategy to continue as a global leader in supporting the development and adoption of AI (and aiming to unlock its economic benefits for the digital economy and productivity), the UK has to date pursued a "pro-innovation", light-touch approach. Responsibility rests with sectoral regulators to develop appropriate guidance and codes of practice, and AI-specific legislation has been avoided. "Guardrails" had previously been the watchword.
The UK has also seen an opportunity to act as a balance, or bridge, between the safety-focused approach of the EU and the less regulated approach of the US.
However, there is an increasing consensus on the potential harms and risks that can arise from insufficiently regulated AI, and a growing focus on the need to legislate accordingly. The direction of travel appears to be an intention to do so in a proportionate manner.
Details of a proposed legislative approach focusing specifically on the "most powerful" AI models are expected to be published for consultation shortly. Proposed legislation is likely to also involve codifying requirements for leading AI labs to make models available for testing.
This supports another key aspect of the UK's contribution to the global development and regulation of AI: positioning the UK AI Safety Institute as the global leader in undertaking and coordinating research on the most significant risks that AI presents to society, enabling better-informed policy decisions. This will likely continue to be a key focus, particularly considering an anticipated scaling back of its US counterpart.
United States - Contributors: Peter Stockburger, Todd Daubert
The Trump administration likely will reduce regulation, minimize international cooperation and eliminate current Executive Orders with the goal of fostering innovation and US competitiveness. Plans may involve appointing an "AI czar" to coordinate federal efforts, focusing on infrastructure development like data centers and semiconductor manufacturing. This deregulatory approach may be resisted by skeptics, including key advisors. States will likely continue adopting sector-specific AI regulations to address concerns about safety and ethics, and courts will likely address key issues in pending cases. A fragmented and patchwork landscape will likely need to be navigated in the near-term.
Download the full global AI trends report or navigate to our key areas listed below. Visit Dentons' AI: Global Solutions Hub for the latest legal insights, webinar recordings and regulatory overviews from around the world.