The trend began a few days ago, escalating on New Year’s Eve, and has rapidly spread across the platform. Users have been seen issuing direct commands to Grok to digitally manipulate pictures of women and children, transforming ordinary photographs into disturbing and abusive content. These images have been widely circulated without consent, subjecting victims to humiliation, harassment, and harm.
Activists advocating for women’s rights and users worldwide are pressing Elon Musk to promptly address the feature that enables such misconduct. Although X has reportedly restricted Grok’s media feature, the misuse continues unabated. Images can still be altered, shared, and accessed on the platform.
The issue has now reached Indian users on X, with experts warning that the problem extends far beyond mere online pranks or trolling. Cyber-safety experts and gender-rights advocates assert that image morphing using AI constitutes a form of sexual violence, especially when it involves women and children. They claim that these actions violate dignity, bodily autonomy, and consent, potentially inflicting severe psychological trauma on victims whose images are weaponized without their consent.
The ongoing availability of altered images on X, despite partial restrictions, has amplified criticism that the platform is not doing enough to safeguard its users. Some concerned women have begun deleting their photos from the platform altogether.
Cyber-security expert Ritesh Bhatia stated to CNBC-TV18, “Why are we placing the onus on victims to exercise caution? This isn’t merely about being careful; it’s about accountability. When a platform like Grok permits such commands, the responsibility rests with the intermediary. Technology is not neutral when it acts on harmful requests. If a system can be directed to violate dignity, the failure lies not just in human behavior but also in design, governance, and ethical oversight. The creators of Grok need to take immediate action.”
Regarding legal remedies, cyber-law expert Adv. Prashant Mali remarked to CNBC-TV18, “I consider this not as mischief — it is AI-enabled sexual violence. Victims have explicit recourse under the IT Act, 2000, particularly Sections 66E (violation of privacy) and 67/67A (publishing or transmitting obscene or sexually explicit content), which clearly address AI-generated morphed images regardless of any physical act.
“Under the Bharatiya Nyaya Sanhita, 2023, Section 77 (voyeurism) and related provisions on sexual harassment and the dignity of women criminalize the creation and dissemination of such material, acknowledging the harm to autonomy beyond mere physical exposure. In cases involving minors, POCSO is invoked immediately, with Sections 11, 12, 13, and 14 categorizing AI-generated sexualized images as aggravated sexual exploitation, irrespective of ‘virtual’ justifications, ensuring swift and mandatory punishment. Additionally, the Intermediary Rules require prompt takedown and traceability.”
He further noted, “The legal framework exists robustly on paper. The real challenge is the speed of enforcement and digital-forensics capabilities, not a lack of laws. I believe the defense of ‘it was just an AI’ will not withstand judicial scrutiny.”
As demands for accountability increase, activists are calling for stricter regulations on AI image tools, prompt takedown procedures, and legal action against those producing and sharing abusive content. The Grok controversy has once again highlighted the troubling aspects of generative AI and raised urgent concerns about whether social media platforms are equipped — or willing — to stop technology from being used as an instrument of sexual harm.