OFCOM - Office of Communications

02/03/2026 | Press release | Distributed by Public on 02/03/2026 06:18

Investigation into X, and scope of the Online Safety Act

Ofcom has today set out the next steps in its investigation into X, and the limitations of the UK's Online Safety Act in relation to AI chatbots.

Our investigation into X

Ofcom was one of the first regulators in the world to act on concerning reports of the Grok AI chatbot account on X being used to create and share demeaning sexual deepfakes of real people, including children, which may amount to criminal offences.[1]

After contacting X on 5 January, giving it a chance to explain how these images had been shared at such scale, we moved quickly to launch a formal investigation on 12 January into whether the company had done enough to assess and mitigate the risk of this imagery spreading on its social media platform, and to take it down quickly where it was identified.

Since then, X has said it has implemented measures to try to address the issue. We have been in close contact with the Information Commissioner's Office, which is launching its own investigation today.[2] Other jurisdictions have also launched investigations in the weeks since we opened ours, including the European Commission on 26 January.

Our investigation remains ongoing and we continue to work closely with the ICO and others to ensure tech firms keep users safe and protect their privacy.

Not all AI chatbots are regulated

Broadly, the Online Safety Act regulates user-to-user services, search services and services that publish pornographic content.

Chatbots are not subject to regulation at all if they:

  • only allow people to interact with the chatbot itself and no other users (i.e. they are not user-to-user services);
  • do not search multiple websites or databases when giving responses to users (i.e. are not search services); and
  • cannot generate pornographic content.

Whether they fall within one or more of these categories depends on how they work, and even where some AI chatbots are in scope of the Act, this does not mean that all content they generate will be regulated. Images and videos that are created by a chatbot without it searching the internet are not generally in scope. They will only be in scope if they are pornographic - in which case they need to be age-gated - or can be shared with other users of the chatbot.[3]
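As an informal illustration only, and not a statement of the law, the three criteria above can be read as a simple disjunction: a chatbot is potentially in scope if any one of the three conditions holds, and outside the Act only if all three fail. A minimal Python sketch of that high-level logic, using hypothetical attribute names invented for this example:

```python
from dataclasses import dataclass

@dataclass
class ChatbotService:
    """Hypothetical attributes of an AI chatbot service (illustrative only)."""
    users_can_interact_with_each_other: bool  # a user-to-user service?
    searches_multiple_sites: bool             # a search service?
    can_generate_pornographic_content: bool   # publishes pornographic content?

def potentially_in_scope(s: ChatbotService) -> bool:
    # Outside the Act only if it meets none of the three criteria above;
    # meeting any one brings it potentially within scope.
    return (
        s.users_can_interact_with_each_other
        or s.searches_multiple_sites
        or s.can_generate_pornographic_content
    )

# A purely one-to-one chatbot that does not search and cannot
# generate pornographic content falls outside the Act.
solo_bot = ChatbotService(False, False, False)
print(potentially_in_scope(solo_bot))  # False
```

As the surrounding text notes, the real assessment depends on how a service actually works, and being in scope does not mean all content a chatbot generates is regulated; this sketch captures only the high-level logic of the bullet list.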

We can only take action on online harms covered by the Act, using the powers we have been granted. Any changes to these powers would be a matter for Government and Parliament. The Secretary of State has said that the Government will look at how chatbots should be regulated and we are supporting that work.[4]

We are not investigating xAI at this time

When we opened our investigation into X, we said we were assessing whether we should also investigate xAI, as the provider of the standalone Grok service. We continue to demand answers from xAI about the risks it poses. We are examining whether to launch an investigation into its compliance with the rules requiring services that publish pornographic material to use highly effective age checks to prevent children from accessing that content.

Because of the way the Act relates to chatbots, as explained above, we are currently unable to investigate the creation of illegal images by the standalone Grok service in this case.

Where we are in our X investigation

In our investigation into X, we are currently gathering and analysing evidence to determine whether X has broken the law, including using our formal information-gathering powers. The week after we launched our investigation, we sent legally binding information requests to X, to make sure we have the information we need from the company, and we continue to send further requests.

Firms are required, by law, to respond to all such requests from Ofcom in an accurate, complete and timely way, and they can expect to face fines if they fail to do so.

We must give any company we investigate a full opportunity to make representations on our case. If, based on the evidence, we consider that the company has failed to comply with its legal duties, we will issue a provisional decision setting out our views and the evidence upon which we are relying. The company will then have an opportunity to respond to our findings in full, as required by the Act, before we make our final decision.

Next steps

We know there is significant public interest in our investigation into X. We are progressing the investigation as a matter of urgency. We will provide updates and will be as open as possible during this process. It is important to note that enforcement investigations such as these take time - typically months.

We must follow strict rules about how and when we can share information publicly, as is the case for any enforcement agency, and it would not be appropriate to provide a running commentary about the substantive details of a live investigation. Running a fair process is essential to ensuring that any final decisions are robust and effective, and that they stick.[5]

While in the most serious cases of ongoing non-compliance we can apply for a court order requiring broadband providers to block access to a site in the UK, the law sets a high bar for such applications, and a specific process must be followed before we can do this. It would be a significant regulatory intervention and is not one we are likely to make routinely, given the impact it could have on freedom of expression in the UK.

What to do if you are worried about something online

If you come across something online that you think might be harmful or illegal, you should report it to the platform. If you think it may be child sexual abuse material, report it anonymously to the Internet Watch Foundation.

If you are worried about intimate images of yourself or someone you know:

  • For under 18s: If you're under 18, you can report images of yourself or your peers through Report/Remove.
  • For over 18s: The Revenge Porn Helpline offers support and resources if your intimate image has been shared without your consent.

More information on how to get help and report harmful content is available here.

Notes to editors:

  1. It is against the law for anyone to be in possession of images of children under 18 that are intimate or depict sexual activity, whether they have been created artificially or not. It is also against the law to share, or threaten to share, an intimate image of someone of any age without their consent, including AI-generated images. From 6 February, it will also be unlawful to create, or request the creation of, such images.
  2. The ICO's investigation into X and xAI will cover both the development and deployment of Grok, and look into whether personal data has been processed lawfully, fairly and transparently, and whether appropriate safeguards were implemented to prevent the generation of harmful manipulated images using personal data.
  3. AI chatbots and online regulation - what you need to know. AI-generated content is only covered by the illegal content and children's safety duties in Part 3 of the Act if it is 'user-generated' (shared by users with each other) or 'search content' (encountered in or via search results). Generation of other chatbot content - such as in a one-to-one interaction between a user and a chatbot that does not involve searching the internet or sharing with other users - is not regulated under Part 3, but could be under Part 5 if it is pornographic.
  4. House of Commons Science, Innovation and Technology Committee - Oral evidence: Work of the Secretary of State for the Department for Science, Innovation and Technology; 3 December 2025.
  5. Our Online Safety Enforcement Guidance sets out the process in full.