UK regulators are scrambling to contain a deepfake disaster unfolding on Elon Musk’s X platform. Ofcom, the UK’s independent regulator for communications services, made “urgent contact” with xAI this week after reports that Grok, the platform’s AI chatbot, has been generating explicit images of women and children without consent.
Key Takeaways
- UK Ofcom made urgent contact with xAI over Grok deepfake nudes of women.
- 2026 is shaping up as the year of “breaking verification” for newsrooms.
- AI tools without consent guardrails create immediate legal/editorial headaches.
The tool reportedly put images of Princess Catherine, celebrities and ordinary women into sexualized contexts. Users discovered they could digitally undress anyone by simply tagging Grok in a post.
Journalist Samantha Smith told the BBC she felt “dehumanised and reduced into a sexual stereotype” after discovering Grok users had targeted her photos. When she posted about the experience, others asked Grok to generate more.
Technology Secretary Liz Kendall called the situation “absolutely appalling” and urged xAI to “urgently deal” with its chatbot. She has also backed Ofcom to take enforcement action. The European Commission labeled the outputs “illegal” and “disgusting.”
xAI’s response to journalists has largely been an auto-reply: media organizations seeking comment reported receiving a message that said only, “Legacy Media Lies.” But under pressure from regulators in the UK, EU, India, France and Malaysia, the company has issued reactive statements. X’s Safety account said it removes illegal content and works with law enforcement. Musk posted that users who prompt illegal content will face consequences. The Washington Post reported he also responded to one complaint with a laughing emoji.
Call it “breaking verification” instead of breaking news.
This crisis illustrates why the Reuters Institute’s 2026 predictions matter for newsrooms. Harvard Shorenstein Fellow Shuwei Fang told the Institute that news organizations will discover their next product isn’t content but process: answering “Is this real?” at speed.
When fake images spread instantly and AI tools generate convincing forgeries on command, audiences need trusted sources who can quickly establish what’s authentic. News organizations with verification expertise have a product the market desperately needs.
Law professor Clare McGlynn of Durham University told the BBC that X “could prevent these forms of abuse if they wanted to” but “appear to enjoy impunity.”
In a landscape where platform owners dismiss press inquiries as lies, journalism’s verification function becomes essential infrastructure.