Why EU Opens Formal Investigation Into X Over Grok’s Sexualised AI Deepfakes

Feb 17, 2026 - 10:00

Quick Answer: Ireland’s Data Protection Commission has launched a large-scale GDPR investigation into X (formerly Twitter) over the Grok AI chatbot’s generation of non-consensual sexualised images of real people, including children. The probe, announced on 17 February 2026, will examine whether X met its obligations on lawful data processing, data protection by design, and impact assessments. It follows a separate EU Digital Services Act investigation opened in January 2026, a UK ICO probe, a French criminal raid on X’s Paris offices, and temporary blocks on Grok in Indonesia and Malaysia. GDPR fines can reach up to 4 per cent of global annual turnover or €20 million, whichever is higher.


Ireland’s data protection authority has opened a formal investigation into Elon Musk’s X over the generation of sexualised deepfake images by Grok, the platform’s AI chatbot — marking the most significant privacy enforcement action yet in a scandal that has triggered regulatory responses across three continents.

The Data Protection Commission announced on Monday that it had launched what it described as a “large-scale inquiry” into X Internet Unlimited Company, the entity that operates X’s European business from its headquarters in Dublin. The investigation concerns the apparent creation and publication on X of non-consensual intimate and sexualised images of real people, including children, generated using Grok’s image tools.

DPC Deputy Commissioner Graham Doyle said the authority had been engaging with X since media reports first emerged weeks ago about the ability of users to prompt Grok to generate sexualised images of real individuals. As Ireland hosts X’s European headquarters, the DPC serves as the lead supervisory authority for enforcing GDPR across the EU and European Economic Area — meaning the investigation carries weight for every member state.

The Scale of the Problem

The probe follows research by the Center for Countering Digital Hate estimating that Grok generated approximately three million sexualised images in less than two weeks after its image editing feature launched, including thousands that appeared to depict minors. Users discovered the tool could be prompted to digitally undress real people or place them in explicit scenarios — functionality that Tyler Johnston of AI watchdog The Midas Project described as a “nudification tool waiting to be weaponised.”

The backlash has been global. The UK’s Information Commissioner’s Office opened its own formal investigation into both X and xAI in early February, with ICO executive director William Malcolm warning that the reports raised deeply troubling questions about how personal data had been used to generate intimate images without consent. French prosecutors raided X’s Paris offices as part of a criminal probe examining whether Grok generated child sexual abuse material and Holocaust denial content, and summoned Musk, X CEO Linda Yaccarino, and additional employees for interviews. Indonesia and Malaysia temporarily blocked access to Grok entirely.

What the DPC Investigation Will Examine

The Irish inquiry will assess whether X complied with its fundamental obligations under GDPR in several areas: the principles governing lawful data processing, the legal basis for processing personal data, data protection by design and by default, and the requirement to carry out a data protection impact assessment before deploying high-risk processing operations.

These are not technical footnotes. GDPR requires that any processing of personal data — including using photographs or biometric data to generate synthetic images — must have a lawful basis. The regulation also requires organisations to build privacy protections into their systems from the outset, not bolt them on after a scandal. The penalty for serious breaches can reach up to 4 per cent of global annual turnover or €20 million, whichever is higher.
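
For a sense of how that ceiling works in practice, the short sketch below applies the "whichever is higher" rule to a hypothetical turnover figure; the €3 billion number is an illustrative assumption, not X's actual revenue.

```python
# Illustrative sketch of the GDPR fine ceiling described above:
# the cap is the higher of EUR 20 million or 4 per cent of global annual turnover.

def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a serious GDPR breach."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Hypothetical example: at EUR 3 billion in turnover, the 4 per cent figure
# (EUR 120 million) exceeds the EUR 20 million floor.
print(f"Maximum fine: EUR {gdpr_max_fine(3_000_000_000):,.0f}")
```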

The DPC investigation adds a second layer of EU regulatory exposure for X. In January, the European Commission opened a separate probe under the Digital Services Act — legislation designed to police content moderation on major platforms — to examine whether X had met its legal obligations in relation to Grok’s outputs. X was already fined €120 million under the DSA in December 2025 for breaches related to advertising transparency and user verification.

The Broader Regulatory Collision

The investigation lands at a moment of intense friction between Brussels and Washington over the regulation of American technology companies. The EU’s enforcement of the AI Act, Digital Markets Act, and Digital Services Act has drawn threats of retaliation from the Trump administration, which has warned of tariffs and restrictions on European companies operating in the US.

Musk himself responded to the initial outcry in January by posting on X that anyone using Grok to make illegal content would face consequences. X subsequently restricted Grok’s image generation to paying subscribers and introduced additional safeguards. However, reports have indicated that it remained possible to generate sexualised content through Grok’s web and mobile applications even after those restrictions were announced.

For the DPC, the case represents a test of whether European privacy law can keep pace with generative AI tools that can produce harmful content at industrial scale. Three million images in two weeks is not a moderation failure. It is a design failure — and that is precisely what GDPR’s data protection by design provisions were written to prevent.
