Exploring the impact of AI was a key focus at two global conferences held earlier in 2025: the IAPP Global Privacy Summit in Washington DC and the RSAC Conference on Cyber Security in San Francisco. On behalf of Norton Rose Fulbright Australia, I recently attended both conferences in the United States.
These conferences were attended by over 40,000 people from 120 countries and provided the opportunity to hear directly from experts, lawmakers, regulators, academics and the world's largest technology vendors and cyber security service providers. They spoke of their observations and concerns about privacy and cyber security across the globe.
There were some common themes cutting across both conferences and important takeaways for organisations adapting to changing technologies and 'catch-up' regulation. Unsurprisingly, artificial intelligence (AI) in all its forms, from generative to agentic, was a major theme at both conferences.
This update summarises the key themes from each conference with brief commentary on some of the more high-profile presentations.
At the IAPP Global Summit, there was broad acknowledgement that cross-regulator co-ordination in the management of privacy and AI, at both local and global levels, is needed and is increasing.
Regulators from the US, UK and EU demonstrated further alignment around the need for collaboration that is both cross-sector and cross-jurisdictional on the regulation of privacy and AI. This need is becoming more urgent as AI, which remains inherently borderless, becomes more prevalent and embedded in all aspects of our lives.
At Norton Rose Fulbright, we have observed that disjointed regulatory regimes are particularly problematic for clients when supply chains for products and services are international.
In the age of connected devices, online shopping from anywhere, cloud services and AI, supply chains are almost inevitably cross-border, and data sharing can be essential for products or services to work or to be supplied. Greater alignment across regulatory regimes will assist in privacy protection and support consumer confidence in the efficiencies and quality that international supply chains can bring.
Comprising the state Attorneys General of California, Colorado, Connecticut, Delaware, Indiana, New Jersey and Oregon, this consortium of US privacy regulators was created to formalise coordination and the sharing of information and resources in investigations and enforcement efforts.
While the fragmentation of regulations across the United States is likely to continue (despite the Trump Administration's commitment to de-regulation), regulators in the United States are increasingly collaborating on shared priorities to offer a more robust approach to enforcement.
On 12 July 2024, the EU published the world's first binding and comprehensive AI-specific law in its Regulation (EU) 2024/1689 (AI Act).
Unlike the EU's privacy legislation (the General Data Protection Regulation), the EU's AI Act was never intended to be the 'gold standard'. It is instead intended to have minimal application and is modelled on EU product safety laws. Indeed, most deployers of AI will not be covered. However, there is overlap between 'providers' of AI systems using general-purpose AI models and 'deployers' using AI systems in business; this will be an area of uncertainty and contest when it comes to enforcement.
The definition for 'AI system' under the AI Act has been adopted from the (arguably now outdated) OECD definition. It is broad and omits reference to Generative AI. Under Article 3, an 'AI system' means:
'A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.'
However, the AI Act takes a risk-based approach to AI regulation, distinguishing obligations according to an AI system's potential to cause harm and the severity of that harm. Each risk-based category of AI systems is subject to different requirements. For instance, 'high risk' AI systems are subject to stricter rules due to their potential to harm health, safety, fundamental rights, democracy and the rule of law. Given the hefty fines associated with non-compliance, organisations will need a clear understanding of their obligations under the AI Act. This will require an assessment of, amongst other things, whether their tools would be considered an AI system, the category of risk associated with their systems, and the applicability of other laws intertwined with the AI Act.
The artefacts required to demonstrate compliance are undefined, making conformity assessments a further area of uncertainty. The 'black box' problem, that is, the difficulty of explaining how an AI system reaches its outputs, is an area of significant concern.
The requirement in Article 4 for staff of both providers and deployers to have 'AI literacy' is a further area of uncertainty, and it is unclear what standard will, or should, be applied to evidence sufficient literacy.
The AI regulatory landscape is continuing to expand as we observe the developing approach of regulators, see the remaining parts of the EU's AI Act come into effect, and anticipate the establishment of harmonised standards. These developments will be a key consideration for nations as they begin to roll out their own legislation.
Do we need to establish a 'proof of human' test to protect human-machine interactions but not machine-machine interactions?
Sam Altman, chief executive of OpenAI, the creator of ChatGPT, spoke at the IAPP Global Summit about the need to identify when humans are interacting with bots, in order to preserve the dignity of human-AI interactions.
Further, there appears to be a growing trend, especially among young people, of using generative AI chatbots to have deeply personal conversations akin to those which they may have with therapists, doctors or lawyers. In response to this, Altman revealed his thinking about a new form of confidentiality or privacy that protects identity when those interactions are more intimate - like the sorts of interactions that are protected by equitable duties of confidentiality in certain human-to-human communications and by fiduciary duties between doctor and patient, lawyer and client.1
He suggested a concept like 'AI privilege' might be appropriate, and did not seem to consider 'privacy' the right frame, as 'digital de-identification of humans' could be enough for that purpose. The absence of such a privilege was considered to be a barrier to user adoption.
However, when a doctor or lawyer breaches their duties, there are consequences. Altman did not suggest a remedy should be available to humans when AI breaches our confidence or privacy. If such a relationship is recognised at law, it seems appropriate for it to be user-centric. Legal development with users in mind may build greater public trust in rapidly advancing technology, and additionally reduce users' tendency to self-censor, ultimately benefitting both the user and the training of the AI model.
Regardless of a decision to recognise 'AI privilege' or a concept of 'AI communication confidence' in the future, a uniform position and framework adopted across the globe will avoid the creation and potential abuse of 'privilege havens'. In the meantime, it continues to be best practice to avoid inputting anything into an AI chatbot that you wouldn't want out in the open.
Another key theme at the IAPP Global Summit was the almost symbiotic relationship between AI and cybersecurity. There was broad agreement that AI is both a powerful defence tool and a potential weapon for attackers. There was also consensus that if AI is not used for defence, it will succeed as a weapon. Many cyber security providers at the Summit spoke of AI's ability to enhance real-time threat detection and prioritisation and make predictions, but it is also being used to automate cyber-attacks, phishing and impersonation through deepfakes.
In contrast to generative AI applications like ChatGPT, which require human input to generate content, agentic AI is a separate class of AI that operates autonomously.
Efficiency is the promise of agentic AI, but it comes at a cost: a lack of transparency and the potential for misuse and interception.2
With limited supervision, there is a greater need for organisations building their agentic AI infrastructure to ensure that each stage of the agentic AI's 'thought process' is accurately recorded and that there is, at the very least, human review at the final stage. This is especially important because the ability of corporations to shift responsibility to agents acting outside the scope of their authority may not apply as easily to non-human agents.
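By way of illustration only, the sketch below shows one way such an audit trail and final human gate could be structured. The AgentStep record, the log file name and the approval prompt are assumptions made for this example; they do not reflect any particular vendor's framework or any approach discussed at the conferences.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AgentStep:
    """One recorded stage of the agent's 'thought process' (illustrative structure)."""
    step: int
    description: str     # what the agent decided to do
    tool: str            # which tool or system it invoked
    output_summary: str  # short record of the result

class AuditedAgentRun:
    """Records every intermediate step and gates the final action on human review."""

    def __init__(self, log_path: str = "agent_audit_log.jsonl"):  # hypothetical log file
        self.log_path = log_path
        self.steps: list[AgentStep] = []

    def record(self, step: AgentStep) -> None:
        # Append the step to an immutable, timestamped audit log.
        self.steps.append(step)
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"ts": time.time(), **asdict(step)}) + "\n")

    def final_action(self, proposed_action: str) -> bool:
        # Surface the full reasoning trail and require explicit human approval
        # before the agent's final action is executed.
        print("Proposed final action:", proposed_action)
        for s in self.steps:
            print(f"  step {s.step}: {s.description} -> {s.output_summary}")
        return input("Approve final action? (y/n): ").strip().lower() == "y"

# Example usage (hypothetical scenario)
run = AuditedAgentRun()
run.record(AgentStep(1, "Look up customer record", "crm_search", "1 match found"))
run.record(AgentStep(2, "Draft refund email", "email_drafter", "draft created"))
if run.final_action("Send refund email and issue $120 credit"):
    print("Action executed.")
else:
    print("Action blocked pending review.")
```

The design choice reflected in the sketch is simply that the record is written as each step occurs, rather than reconstructed afterwards, so the audit trail exists even if the run is interrupted.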
Collaboration in cyber defence was a theme conveyed by both the US Secretary of Homeland Security, Kristi Noem, speaking on the Cybersecurity and Infrastructure Security Agency (CISA), and the UK's Head of GCHQ, Anne Keast-Butler, speaking on the Five Eyes alliance.
In separate addresses, Noem and Keast-Butler spoke of collaboration against nation-state attacks, including the successful disruption of Russian botnets used for crimes such as spear phishing and credential harvesting against entities of interest across government, military, corporate and security sectors.
Keast-Butler spoke of 'fighting fire with fire' in relation to the 'Moobot' malware: routers had been compromised using publicly known default administrator passwords, allowing hackers to install their own scripts and files, repurpose the botnet and create a global cyber espionage platform.
Global efforts towards cybersecurity are exemplified in the recent AI data security guidance jointly issued by cyber regulators of Australia, New Zealand, Canada, the United Kingdom, and the United States.3 This move to collaborate at the global level should serve as a reminder to private organisations to prioritise their protective cybersecurity measures and guard themselves against weaponised AI.
Many representatives of the US technology titans spoke at RSAC, and a common theme was the push for either global harmonisation of regulation or no regulation at all, in order to win the AI arms race.
One commentator said 'If other countries are going to juice their economies with US tech, then they take what we build or use something else. They cannot require us to change our tech to comply with their laws.'
More recently, US tech was dealt a significant setback on de-regulation, with Senators voting to remove a 10-year moratorium on state regulation of artificial intelligence from President Trump's 'One Big Beautiful Bill'.4
Harmonisation therefore becomes necessary not only to avoid the exploitation of any 'AI law havens' which may be created as a result of differing positions, but also for market-level efficiency. In whichever way policymakers and regulators choose to approach AI legislation and enforcement, what remains clear is the need for collaboration between nations, strict transparency requirements and a promise to protect users.