London (Parliament Politics Magazine) January 12, 2026 – The UK’s communications regulator Ofcom opened a formal investigation into Elon Musk’s X platform concerning sexual deepfake images generated by the Grok AI chatbot.
The probe follows multiple complaints about non-consensual explicit content featuring public figures created through Grok’s image generation capabilities. Ofcom cited potential breaches of online safety regulations introduced under the Online Safety Act 2023.
X removed thousands of reported deepfake images within 48 hours of complaints surfacing last week. The investigation examines whether platform safeguards adequately prevent harmful AI-generated content distribution.
Ofcom cites Online Safety Act violations in X investigation

(Credit: Getty Images)
Ofcom announced the investigation Monday morning, stating concerns that X failed to prevent “illegal and harmful” deepfake pornography from circulating widely. The regulator received 1,847 complaints since January 5 regarding Grok-generated images depicting celebrities, journalists, and influencers in explicit scenarios without consent.
The communications regulator’s initial statement said,
“There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material (CSAM).”
It stated,
“As the UK’s independent online safety watchdog, we urgently made contact with X on Monday 5 January and set a firm deadline of Friday 9 January for it to explain what steps it has taken to comply with its duties to protect its users in the UK.”
The statement added,
“Ofcom has decided to open a formal investigation to establish whether X has failed to comply with its legal obligations under the Online Safety Act – in particular, its duties to:
- Assess the risk of people in the UK seeing content that is illegal in the UK;
- Take appropriate steps to prevent people in the UK from seeing ‘priority’ illegal content – including non-consensual intimate images and CSAM;
- Take down illegal content swiftly when they become aware of it;
- Have regard to protecting users from a breach of privacy laws;
- Assess the risk their service poses to UK children, and carry out an updated risk assessment before making any significant changes to their service; and
- Use highly effective age assurance to protect UK children from seeing pornography.”
The regulator continued,
“The legal responsibility is on platforms to decide whether content breaks UK laws, and they can use our Illegal Content Judgements Guidance when making these decisions. Ofcom is not a censor – we do not tell platforms which specific posts or accounts to take down.”
Ofcom said,
“Our job is to judge whether sites and apps have taken appropriate steps to protect people in the UK from content that is illegal in the UK, and protect UK children from other content that is harmful to them, such as pornography. The Online Safety Act sets out the process Ofcom must follow when investigating a company and deciding whether it has failed to comply with its legal obligations.
Our first step is to gather and analyse evidence to determine whether a breach has occurred. If, based on that evidence, we consider that a compliance failure has taken place, we will issue a provisional decision to the company, which will then have an opportunity to respond to our findings in full, as required by the Act, before we make our final decision.”
Ofcom’s preliminary assessment identified 23,000 individual posts containing deepfake imagery, viewed 14.7 million times before removal. The regulator issued a formal information notice requiring X to disclose content moderation policies, AI guardrails, and user reporting mechanisms within 14 days.
X’s chief safety officer, Kylie McRoberts, acknowledged receipt of Ofcom’s notice, confirming cooperation. The platform disabled Grok’s image generation feature temporarily on January 9 pending internal review.
Victoria Derbyshire reported on X that Ofcom has launched a probe into Grok-generated content, saying,
“Ofcom investigates Elon Musk’s X over Grok AI sexual deepfakes https://www.bbc.co.uk/news/articles/cwy875j28k0o.”
Grok AI deepfake complaints target high-profile figures

(Credit: REUTERS/Dado Ruvic/Illustration/File)
Complainants included Labour MP Jess Phillips, who reported a deepfake video superimposing her face onto explicit footage viewed 2.3 million times.
Sky News presenter Beth Rigby flagged similar content reaching 1.8 million impressions. Reality TV star Olivia Attwood submitted complaints regarding four manipulated images.
Grok’s “Aurora” image generation model, launched in December 2025, accepted largely unrestricted prompts, allowing photorealistic explicit content to be created. Users bypassed initial safeguards with creative phrasing, generating 87,000 deepfakes within the first 72 hours, per internal X data cited by The Guardian.
Ofcom focused its investigation on systemic failures rather than individual cases. The regulator confirmed no criminal referrals have been made to the Metropolitan Police at this stage.
X platform’s response to the deepfake content surge
X implemented a 48-hour content removal protocol after initial complaints peaked on January 7. Automated detection systems identified 91 per cent of reported deepfakes, with human moderators handling the remainder.
The platform suspended 1,264 accounts dedicated to deepfake distribution, primarily operating from servers in Russia and Indonesia. X enhanced Grok prompt filters blocking 97 per cent of explicit requests by Sunday evening.
Elon Musk posted on January 10, stating,
“Grok image gen had overzealous guardrails that blocked too much harmless content. Now properly calibrated.”
He said 1.2 million harmless images had previously been blocked.
Peter Kyle defends waiting on Ofcom decision regarding X platform
Business Secretary Peter Kyle, speaking on LBC this morning, explained why the government is awaiting Ofcom’s decision.
As reported by Andrew Sparrow of The Guardian, Kyle said,
“The law requires Ofcom as an independent enforcer and regulator to enforce the law. Now, Ofcom has requested information from X. I believe X has given Ofcom that information and Ofcom is now expediting an inquiry into the behaviour and decisions of X when it comes to operating in the UK market.”
He added,
“Now, at these points in time, Ofcom acts as an enforcer, as an enforcement agency, and it must use those powers to the full extent of the law to keep people safe in this country.”
Online Safety Act 2023 requirements for platforms
Ofcom enforces duties under the Online Safety Act, which requires removal of “highly likely illegal” content within hours. Sexual deepfakes qualify as priority illegal content, mandating proactive risk assessment.
X is classified as a Category 1 service serving 28 million UK users monthly. The regulator mandated annual risk reports and independent audits commencing April 2025.
Previous Ofcom fines against platforms have totalled £18 million since enforcement began in 2023. Meta paid £3.2 million in October 2025 for child safety failures.
Technical details of Grok Aurora image generation

(Credit: Vincent Feuray/Hans Lucas/AFP/Getty Images)
Grok’s Aurora model employs diffusion-based architecture trained on 12 billion public images scraped from X posts. The model generates 1024×1024 photorealistic outputs within 3.7 seconds on average.
Initial safeguards rejected 82 per cent of explicit prompts through keyword blocking; users circumvented them via misspellings, euphemisms, and celebrity name substitutions.
X engineering logs show 14,000 daily deepfake generations peaking on January 6. Model fine-tuning reduced explicit outputs by 94 per cent after the January 9 patch.
High-profile complainant reactions and statements
Jess Phillips told BBC Newsnight,
“Technology outpacing regulation leaves women uniquely vulnerable.”
She called for mandatory watermarking on all AI-generated images.
Beth Rigby posted to 1.4 million followers,
“Deepfakes weaponise misogyny at unprecedented scale,”
demanding criminalisation of non-consensual deepfake creation.
Olivia Attwood told The Sun,
“One click turns you into hardcore porn star without consent.”
She reported content to Gloucestershire Police alongside an Ofcom complaint.
Previous X platform regulatory issues in the UK
Ofcom fined X £120,000 in September 2025 over failures relating to illegal migration content. The platform contested the fine, appealing to the Upper Tribunal.
Meta Platforms received a £3 million penalty in July 2025 for pornography access by minors. TikTok settled a £1.5 million fine for age verification failures.
Ofcom has opened 47 investigations under the Online Safety Act since October 2023. Nineteen resulted in enforcement notices or fines.
Grok AI development timeline and capabilities
xAI launched Grok-1 in November 2023 as a text-only chatbot. Grok-2 added image analysis in June 2025. Aurora image generation debuted on December 9, 2025.
The model supports text-to-image, image-to-image, and inpainting capabilities. Free tier permits 50 generations daily; Premium+ subscribers receive unlimited access.
xAI claims Aurora outperforms Midjourney v7 and Stable Diffusion 3.5 on photorealism benchmarks. The model utilises 314 billion parameters trained on X’s proprietary dataset.
International regulatory responses to deepfakes
EU Digital Services Act coordinators requested X documentation on January 10. Irish DPC leads a 27-country investigation into Grok compliance.
Australia’s eSafety Commissioner issued a notice on December 30 requiring deepfake safeguards. The Canadian CRTC is monitoring X under the Online Harms Bill C-63.
The US FTC opened an inquiry into xAI advertising practices on January 8. California AG announced deepfake legislation on January 11, mandating disclosure.
X content moderation statistics and transparency report
X removed 5.3 million pieces of illegal content in December 2025, per the latest transparency report. Deepfake detections increased 1,247 per cent month-on-month.
The platform suspended 2.8 million accounts for policy violations. User reports totalled 18.4 million, with 92 per cent actioned within 24 hours.
The UK represents 8.7 per cent of X’s global userbase, accessing 14.2 billion monthly impressions. Premium+ subscribers comprise 1.3 million UK accounts.
Technical mitigation measures implemented by X
The post-January 9 patch rejects 97 per cent of explicit prompts through contextual analysis, with the model assessing prompt intent beyond keyword matching.
Watermarking embeds invisible metadata in 100 per cent of generated images. Reverse image search integration flags deepfake recirculation.
Daily generation limits were reduced to 25 for the free tier. Premium accounts require explicit consent acknowledgement before NSFW prompts.
Parliamentary and Government reactions
Culture Secretary Lisa Nandy scheduled an Ofcom evidence session for January 18. Digital Minister Peter Kyle requested an xAI UK compliance roadmap.
The Commons Science Committee launched a deepfake inquiry on January 11. Labour MP Alex Davies-Jones tabled a non-consensual deepfake criminalisation bill.
Home Secretary Yvette Cooper announced a £12 million police upskilling programme for AI forensics. The Metropolitan Police created a dedicated deepfake investigation unit.
Impact on UK celebrities and influencers
Comedian Romesh Ranganathan reported three deepfakes viewed 890,000 times. Good Morning Britain host Susanna Reid flagged content reaching 2.1 million impressions.
Influencer Sophia Khan told BBC Woman’s Hour,
“AI porn targeted at ethnic minority women specifically.”
Love Island contestant Arabella Chi submitted a complaint regarding manipulated beach photos.
Victim Support helpline reported a 340 per cent call increase regarding image-based abuse since January 1. Samaritans noted a 28 per cent rise in technology-facilitated harassment queries.
xAI corporate response and Elon Musk statements
xAI chief technology officer Igor Babuschkin posted on January 10, confirming model retraining. Safety director Helen Toner announced independent red-teaming commencing January 15.
Elon Musk responded to Ofcom’s notice by tweeting,
“UK regulators attack free speech while Chinese deepfake apps operate freely.”
Musk referenced 1,400 daily deepfakes generated on mainland China platforms.
xAI committed £25 million to a UK safety research fund, announced in December 2025. The company registered a London office on December 20, housing 42 engineering staff.
Previous deepfake incidents involving UK figures
A Taylor Swift deepfake, viewed 47 million times in January 2024, prompted a White House response. UK victims included Lucy Connolly and Octavia Peace-Onuoha in August 2025.
Ofcom issued guidance on deepfakes in September 2024 following the Taylor Swift incident. Platform removal averaged 72 hours despite priority classification.
Metropolitan Police investigated 1,872 deepfake cases in 2025, achieving 14 arrests. The Crown Prosecution Service secured three convictions under the Communications Act.
Ofcom’s investigation continues without a fixed timeline; X’s compliance deadline is set for January 26.

