After international backlash over deepfake misuse, Elon Musk’s Grok AI has limited its image generation and editing features to paying subscribers, sparking criticism from governments and regulators. The move follows reports that the platform was being used to create sexualized deepfake images of women and children, raising serious concerns over abuse, privacy violations, and child exploitation.
According to an announcement on Musk’s social media platform, X, non-paying users can no longer generate or modify images, while subscribers must provide payment and personal information to access the feature.
Critics argue that placing the feature behind a paywall does little to address the underlying harm. A spokesperson from Downing Street described the decision as “insulting” to victims of misogyny and sexual violence, warning that monetizing the capability fails to prevent abuse.
Similarly, the European Union emphasized that restricting access does not resolve the core problem of preventing illegal content. “Paid or free, these images should not exist,” said EU digital affairs spokesman Thomas Regnier.
Regulatory scrutiny has intensified globally. The European Commission has ordered X to preserve all internal records and data related to Grok until the end of 2026. Other countries, including France, Malaysia, and India, have publicly criticized the platform for facilitating AI-generated nude content.
Musk has warned that users who generate illegal content using Grok will face consequences. X’s official Safety account confirmed that the platform removes illicit content, suspends offending accounts, and cooperates with authorities to combat AI-enabled abuse.