UK Government and Ofcom Target X Over Non-Consensual Image Abuse
Published 12 January 2026
Highlights
- The UK government is set to enforce a law criminalizing the creation of non-consensual intimate images, targeting platforms like X.
- Ofcom has launched an investigation into X over the use of Grok AI to manipulate images, with potential fines up to 10% of global revenue.
- Technology Secretary Liz Kendall emphasized the urgency of the investigation and the government's commitment to swift action.
- The Online Safety Act mandates platforms to prevent and remove illegal content, with severe penalties for non-compliance.
- Campaigners have criticized the government for delays in enforcing the new legislation, initially passed in June 2025.
The UK government is poised to enforce new legislation aimed at curbing the creation of non-consensual intimate images, following concerns over Elon Musk's X platform and its Grok AI tool. This move comes amid growing public and political pressure to address the misuse of AI technology in generating sexualized images without consent.
Government Action and Legal Framework
Speaking to Labour MPs, Sir Keir Starmer warned that X could lose its right to self-regulate if it fails to control Grok AI. The government plans to swiftly implement laws making it illegal to create and distribute such images, with Technology Secretary Liz Kendall stating that these images are "weapons of abuse" and not merely "harmless images."
The Online Safety Act requires platforms to prevent and remove illegal content, and new measures under it will criminalize the creation of intimate-image deepfakes. Kendall emphasized that sharing intimate images without consent is already a criminal offence under the act, and that platforms such as X must be held accountable.
Ofcom's Investigation into X
Ofcom, the UK's communications regulator, has opened a formal investigation into X, focusing on the use of Grok AI to alter images of women and children. The inquiry, described as a "matter of the highest priority," could lead to significant penalties, including fines of up to 10% of X's worldwide revenue or, for the most severe breaches, a UK ban.
Ofcom has already contacted X regarding its compliance with the Online Safety Act. The regulator is determined to ensure that its investigation is thorough and legally robust, with the backing of the UK government to use its full powers if necessary.
Public and Political Pressure
Campaigners have criticized the government for delays in implementing the new legislation, which was passed in June 2025 but has not yet come into force. Kendall assured the Commons that the law would take effect this week, making the creation of such images a priority offence under the Online Safety Act.
The urgency of the situation is underscored by the potential harm to children and the public outcry over the proliferation of illegal content on platforms like X. The government and Ofcom are committed to taking decisive action to protect individuals from such abuses.
Scenario Analysis
The enforcement of new legislation and Ofcom's investigation into X could set a precedent for how AI-generated content is regulated in the UK. If X fails to comply, it may face substantial fines or even a ban, which could impact its operations significantly. This situation highlights the growing need for robust legal frameworks to address the challenges posed by AI technologies. Experts suggest that this case could influence future regulations globally, as governments grapple with the ethical and legal implications of AI in digital content creation.