Elon Musk’s Grok Limits ‘Sexualising’ Image Feature to Paid Users


Elon Musk with Donald Trump in happier times (Image: X.com)


The row over Grok AI’s sexualised images deepens as Elon Musk’s X restricts image generation to paid users, following global outrage over deepfakes, child-safety failures, and regulatory crackdowns in the UK and India.

By TRH Tech Desk

New Delhi, January 9, 2026 — Facing global backlash for sexualising images of women and even underage girls, Elon Musk’s AI tool Grok has reportedly restricted the feature to paid users. Channel 4 reported that “Elon Musk’s AI tool Grok has limited its image generation features to paid users, after it was used to create sexualised deepfakes and partially undressed images of children.”

“The change means users requesting images will have their names and payment information on file,” reported the UK-based broadcaster. It stated that UK Prime Minister Keir Starmer had called for X to “get its act together”, saying the government was keeping all options on the table.

“In a statement, X, formerly Twitter, said they take action against illegal content and work with law enforcement where necessary,” reported the broadcaster.

India has also raised concerns over Grok’s sexualised image outputs. The Ministry of Electronics and Information Technology (MeitY) has issued a formal notice to X Corp. (formerly Twitter), citing alleged failure to comply with statutory due diligence obligations under Indian law in relation to the misuse of its AI-based service Grok for generating obscene and sexually explicit content.

In a letter issued on Friday, addressed to X’s Chief Compliance Officer for India operations, MeitY said it had received repeated reports and representations—including from parliamentary stakeholders—alleging that Grok AI was being misused to generate, publish and circulate obscene images and videos of women, often through fake accounts and synthetic outputs.

The ministry said such content allegedly includes nudity, sexualisation and indecent depictions, and in some cases targets women who have uploaded legitimate images or videos, which are then manipulated using AI prompts.

BBC World also reported this week that “the Internet Watch Foundation (IWF) charity analysts discovered criminal imagery of girls aged between 11 and 13 which appears to have been created by using Grok.”

“The IWF said it found sexualised and topless imagery of girls on a dark web forum in which users claimed they used Grok to create the imagery,” added BBC World in a report.

