Grok, the AI tool on the social media platform X, has reportedly disabled its image-generation media feature in response to reports of users creating explicit or non-consensual content.
“xAI has established stringent guidelines to stop Grok from producing explicit or non-consensual material,” the tool stated in an automated reply to a CNBC-TV18 post.
“Reports suggest that some misuse continues, resulting in backlash. We’ve disabled the media feature and urge users to report any violations through X,” Grok informed CNBC-TV18.
Concerns have grown over a troubling trend on X in which users ask Grok to alter photos of women and children into sexually explicit images. These manipulated photos have been shared without consent, exposing victims to harassment and shame.
Activists are urging billionaire entrepreneur Elon Musk, whose company xAI created Grok, to take responsibility and act.
Cyber-safety specialists contend that partial restrictions are inadequate, as altered images still circulate on the platform. They have also cautioned that AI-driven image manipulation constitutes sexual violence, particularly concerning minors, and have advocated for stricter enforcement and accountability from the platform.