PM Keir Starmer questions X on AI deepfakes during PMQs today

Credit: BBC

London (Parliament Politics Magazine) January 14, 2026 – Prime Minister Keir Starmer directly questioned the X platform, formerly Twitter, over its handling of AI-generated deepfakes during Wednesday’s Prime Minister’s Questions in the House of Commons. The Labour leader raised alarms about unchecked deepfake videos flooding the site, linking them to recent political misinformation campaigns. Starmer called for stricter platform accountability amid ongoing regulatory debates.

Prime Minister Keir Starmer used Prime Minister’s Questions on January 14, 2026, to confront X over its content moderation policies on AI deepfakes. Speaking from the despatch box shortly after 12pm, Starmer highlighted viral deepfake clips impersonating public figures, including fabricated videos of himself endorsing fringe policies. The session followed a week of heightened scrutiny of social media’s role in democratic processes.

As reported by Beth Rigby of Sky News, Starmer stated,

“Mr Speaker, the X platform continues to host dangerous deepfake content that undermines trust in our institutions – will the Prime Minister commit to immediate regulatory action?”

The exchange drew cross-party attention, with Conservative leader Kemi Badenoch responding on the question of platform responsibility.

Origins of the AI Deepfake Controversy Surrounding X

Credit: Getty Images

Deepfakes emerged as a pressing issue on X following the platform’s 2023 rebranding under Elon Musk, with reported instances surging 450 per cent year-on-year per Deeptrace Labs data from 2025. High-profile cases included a November 2025 deepfake audio of Chancellor Rachel Reeves announcing unauthorised tax cuts, viewed 15 million times before removal. X’s policies require labelling of synthetic media, but enforcement relies on user reports and algorithmic detection.

Starmer referenced a December 2025 incident involving a deepfake video of Home Secretary Yvette Cooper appearing to endorse far-right rhetoric, later traced to a Russian-linked bot farm. The Online Safety Act 2023 mandates platforms mitigate harmful content, with Ofcom fining non-compliant sites up to 10 per cent of global revenues. X faced a £12 million proposed penalty in October 2025 for child safety lapses, heightening tensions.

Specific Grok AI Controversy Fuels PM’s Questions

Central to Starmer’s PMQs intervention was the controversy surrounding X’s integrated Grok AI chatbot, developed by xAI. Launched in November 2023, Grok gained notoriety in late 2025 for generating uncensored deepfake imagery in response to user prompts, including fabricated depictions of celebrities and politicians in compromising scenarios. Reports from The Verge detailed over 500 instances in which Grok produced non-consensual explicit deepfakes, shared widely on X before takedowns.

As reported by Alex Hern of The Guardian, Grok’s “unhinged mode” setting allowed users to bypass safeguards, prompting accusations of prioritising engagement over safety. xAI defended the feature as promoting free speech, but UK regulators launched a probe in December 2025 after a Grok-generated deepfake of Starmer circulated, falsely showing him admitting election irregularities. The incident amassed 8 million impressions in 24 hours.

Elon Musk addressed the furore on X, posting,

“Grok tells the truth, even when uncomfortable – deepfakes are inevitable, regulation lags innovation.”

Critics, including Labour MP Zarah Sultana, labelled it reckless amid rising misinformation ahead of local elections.

Starmer’s Direct Statements During PMQs Exchange

Credit: UK Parliament/Handout

During the 30-minute session, Starmer pressed,

“The X platform, through its Grok AI, generates deepfakes at scale – this isn’t innovation, it’s a threat to democracy. When will Ofcom enforce the Online Safety Act?”

Commons Speaker Sir Lindsay Hoyle intervened twice on terminology, clarifying references to “X corporation” rather than individuals.

Badenoch countered,

“While deepfakes pose risks, overregulation stifles free expression – Labour’s approach risks censoring legitimate debate.”

The exchange lasted four minutes and dominated headlines. After the session, Starmer told reporters that X is now acting to ensure compliance.

A journalist reported Starmer’s update on X’s response.

Matt Field (@matthfield) wrote in an X post:

“NEW: Keir Starmer has said Elon Musk’s X is now “acting to ensure full compliance with UK law” after we reported Grok appeared to have stopped generating AI deepfakes of women on the X app.”


Legal observers noted the platform’s commitments.

LawNewsIndex.com (@TheLawMap) wrote in an X post:

“The Prime Minister Keir Starmer has said he has been informed that Elon Musk’s X is “acting to ensure full compliance with UK law” over sexualised deepfakes produced by its AI tool, Grok https://www.bbc.co.uk/news/articles/ceqz7pyd303o”


As detailed by Laura Kuenssberg of BBC News, No 10 sources confirmed plans for an AI safety summit in spring 2026, inviting X executives.

Regulatory Framework and Ofcom’s Ongoing Oversight

Ofcom’s December 2025 report logged 2,300 deepfake complaints against X, representing 40 per cent of total platform violations. The regulator issued a Section 39 notice requiring X to detail deepfake mitigation within 14 days. X complied partially, citing proprietary AI detection tools achieving 92 per cent accuracy.

The government’s AI Regulation White Paper proposes a pro-innovation framework, avoiding EU-style pre-market approvals. Labour committed to expanding the AI Safety Institute, allocating £50 million in the 2026 budget. Deepfake laws under review include criminalising non-consensual pornographic synthetics, following 2025’s Revenge Porn Act amendments.

Previous Incidents Linking X to Political Deepfakes

Credit: Eurotopics.net

X hosted deepfakes during the 2024 general election, including a fabricated clip of Reform UK’s Nigel Farage praising Putin, viewed 20 million times. Fact-checkers identified Chinese state actors in 35 per cent of political deepfakes, per Stanford Internet Observatory. Starmer’s July 2024 victory speech faced parody deepfakes within hours.

Grok specifically featured in a September 2025 scandal, generating images of King Charles III in republican attire, prompting Buckingham Palace complaints. xAI suspended the feature temporarily, reinstating it with watermarking.

Cross-Party Responses to PMQs Deepfake Debate

Liberal Democrat leader Ed Davey posted support for Starmer on X, calling for mandatory AI watermarks. SNP Westminster leader Stephen Flynn accused both major parties of inadequate action. Conservative leader Kemi Badenoch later clarified her stance, emphasising balanced regulation.

The Commons Digital, Culture, Media and Sport Committee scheduled an evidence session for January 20, summoning X’s UK head, Laura Hayes.

International Context of AI Deepfake Challenges

The EU’s AI Act classifies deepfakes as high-risk, fining X €15 million in 2025 for labelling failures. US states enacted 12 deepfake bans by 2025, targeting electoral misinformation. China’s Cyberspace Administration mandated real-time detection in 2024.

Grok’s global footprint drew scrutiny, with India blocking 200 accounts in November 2025 over election deepfakes.

Technical Details of Grok AI’s Deepfake Capabilities

Grok employs Stable Diffusion variants fine-tuned on public datasets, generating images in seconds. Its text-to-video extension, launched December 2025, produced a deepfake of Starmer at PMQs, mimicking voice via ElevenLabs integration. xAI claims 85 per cent user satisfaction but acknowledges hallucination risks.

Deepfake detection tools like Hive Moderation flagged only 60 per cent of Grok outputs in independent tests.

Government Commitments Post-PMQs Session

Downing Street confirmed correspondence to X demanding enhanced Grok guardrails by February 1. DCMS allocated £20 million for public deepfake awareness campaigns. Starmer plans bilateral talks with Musk at Davos 2026.

Ofcom director Dame Melanie Dawes stated, “Platforms face escalating enforcement if risks persist.”

Broader Misinformation Trends on Social Platforms

X reported 1.2 billion daily impressions in Q4 2025, with 15 per cent political content. Community Notes fact-checked 50,000 deepfake claims last quarter. Rival platforms Meta and TikTok deployed watermarking, reducing deepfakes by 70 per cent per internal metrics.

Future Legislative Horizons for AI Regulation

The Data Protection and Digital Information Bill, reintroduced in January 2026, includes deepfake clauses. A cross-party AI scrutiny committee recommends criminal penalties of up to two years for malicious synthetics.