Nudification blocks: The EU’s struggle for more safety online
The European Union is bringing out the big guns in the face of widespread outrage over uses of Artificial Intelligence (AI) that violate people’s privacy and dignity.
Brussels is considering the possibility of classifying the creation of sexual deepfakes as a prohibited practice under the Artificial Intelligence Act following the scandal involving sexualised images created by Grok AI, the chatbot integrated into Elon Musk’s X platform.
Grok outrage
Musk’s company xAI – after prolonged international criticism – introduced new restrictions in mid-January on sexually suggestive AI-generated images in Grok. The move follows criticism that Grok allowed users to digitally replace women’s clothing with bikinis and, in some cases, create sexualised depictions of minors.
The first images of people being stripped naked without consent (“nudification”) began circulating in the days following the release of the feature, but their spread increased particularly around New Year’s Eve. According to CNN, between January 5 and 6 alone, Grok was used to generate at least 6,700 sexual images. These often involved women or minors.
“Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling,” EU digital affairs spokesman Thomas Regnier told reporters back then.
The European Commission, which acts as the bloc’s digital watchdog, said it would take note of new measures taken by X and would review them. Officials warned that if the steps prove insufficient, the EU will consider using the full scope of its Digital Services Act (DSA).
European Commission Vice-President Henna Virkkunen has stated that the Commission is considering explicitly banning these types of AI-generated sexual images under the AI Act, classifying them as unacceptable risks.
The prohibition of harmful AI practices could be relevant to addressing non-consensual sexual deepfakes and child sexual abuse material, said Virkkunen, who is also the commissioner responsible for Tech Sovereignty, Security and Democracy, at this week’s plenary session of the European Parliament in Strasbourg. She also said the DSA mitigated the risk of sexual material being disseminated online without consent.
She also recalled that the Commission had sent a request for information to X regarding Grok as part of its investigation into the platform under the DSA.
The platform was asked to preserve all internal documents and data relating to Grok until the end of the year. “We are now examining the extent to which X may in any case be in breach of the DSA and will not hesitate to take further action if the evidence suggests it,” she said.
The Commission had previously stepped up pressure on X, fining the platform €120 million in early December over transparency violations. The EU has insisted it will enforce its rules despite backlash from the US administration.
“The DSA is very clear in Europe. All platforms have to get their own house in order, because what they’re generating here is unacceptable, and compliance with EU law is not an option. It’s an obligation,” Regnier said at the height of the scandal in early January.
Last week, a group of about 50 MEPs called on the Commission to ban artificial intelligence apps used to create nude images from the EU market.

Can’t live without X
Despite criticism of X, nearly all senior EU officials continue to post there rather than on European alternatives, according to research by dpa.
European Commission President Ursula von der Leyen and other top officials still do not have official accounts on Mastodon, a Germany-based alternative. Virkkunen opened an official Mastodon account in January. High-ranking EU politicians are also active on Bluesky, another US-based platform currently gaining traction.
The Commission justifies continued use of X due to its reach: Mastodon has roughly 750,000 monthly users, compared with 100 million on X, according to the companies.
The long legal path towards better safety online
The path towards better protection of minors in the EU is long and winding, as concerns over privacy, business and child protection clash. Several regulations intersect:
- Chat Control
The Commission in 2022 proposed a regulation to require platforms to detect and report images and videos of abuse (child sexual abuse material or CSAM), as well as attempts by predators to contact minors.
Supported by several child protection groups, the plan nicknamed “Chat Control” sparked fierce privacy debates inside the 27-country bloc and led to accusations of mass surveillance.
The final legislation is expected to be negotiated in early 2026, aiming to bridge the gap between Parliament’s privacy-focused approach and the EU Council’s desire for broad voluntary scanning powers.
While extending temporary voluntary scanning measures until April 2026 to avoid a legal vacuum, MEPs have stressed the urgency of a permanent solution.

- DSA
The Digital Services Act is an EU regulation for a safer online world, requiring platforms to tackle illegal content, protect users, and increase transparency.
The European Union uses it to sanction online platforms by imposing massive fines, requiring immediate operational changes, and – as a last resort – temporarily suspending their services. Fines can be imposed if platforms violate DSA obligations, fail to comply with interim measures or breach commitments.
- AI Act
The AI Act, adopted in 2024, is the world’s first and only comprehensive legal framework for artificial intelligence. It establishes a risk-based system to regulate AI technologies within the EU, aiming to ensure they are safe, trustworthy, and respect fundamental rights while fostering innovation.
It bans certain unacceptable AI practices, such as social scoring, and sets rules for high-risk uses of AI – such as in critical infrastructure or employment. It also restricts manipulative AI uses, including deepfakes targeting children.
- Social media bans
France, which is considering banning social media for children under 15, has since this summer been testing an age verification app developed by the European Commission. The tool is one of several methods for verifying the age of internet users – a headache for tech giants and authorities alike.
Individual efforts
The Spanish Minister of Youth and Children, Sira Rego, in early January asked the attorney general’s office to investigate whether Grok may be committing crimes related to the dissemination of child sexual abuse material.
Currently, Spain is developing its own law for the protection of minors in digital environments. The law strengthens the framework for protecting personal integrity and privacy against new forms of violation linked to the use of technologies such as artificial intelligence, reaffirming that the best interests of the child must always prevail over any digital business model.
Bulgaria has stepped up efforts to combat online child sexual abuse through international law enforcement cooperation, national prevention campaigns and policy discussions aligned with EU legislation. In 2025, Bulgarian authorities took part in a major international operation that shut down Kidflix, one of the world’s largest platforms for child sexual exploitation, used between 2022 and 2025 by nearly 2 million users.
Romania has legislative mechanisms in place to combat child sexual abuse material through its criminal code, and the authorities are seeking to expand and modernise these rules.
A bill on the protection of children online, known as the Online Age of Majority Law, has been under parliamentary debate since 2025, and Romania is also gradually implementing EU rules on preventing and combating online sexual abuse. The bill introduces mandatory age verification and parental consent for minors under 16 to access online services such as social media, gaming and streaming platforms.
EU candidate country Bosnia and Herzegovina still has no specific law regulating this area. In BiH, criminal liability for the production, distribution and possession of such material is based on criminal laws that cover the sexual exploitation of children, but do not contain explicit provisions on AI-generated or simulated content.
The EU has put into place a set of complementary tools and measures to protect its citizens – young and old – from harmful practices online, but weak points include challenges in enforcement, algorithmic amplification of harm, inconsistent national implementation, and debates over balancing security with privacy.
This article is an ENR Key Story. The content is based on information published by ENR participating agencies.