Addressing the issue, Krishnan noted that the concern was only recently brought to the ministry’s attention and is currently under review. “It has come to our notice today, and we are taking necessary actions to address this promptly,” he stated, indicating that regulatory scrutiny of the platform and its AI tools is on the horizon.
Krishnan emphasized that technology platforms cannot avoid accountability by referencing legal protections. “They cannot shirk their responsibility or liability merely by claiming Safe Harbour under section 79(1) of the IT Act,” he asserted, reinforcing the government’s stance that intermediaries must be accountable when their technologies are misused.
These comments arise amid increasing outrage over a trend on X where users have reportedly utilized Grok to alter images of women and children into sexually explicit content. The issue intensified around New Year’s Eve, with manipulated images being widely shared without consent, raising significant alarms regarding online safety, privacy, and AI-facilitated sexual abuse.
In response to the reports of misuse, Grok said it had restricted certain features. “xAI has enforced strict guidelines to prevent Grok from producing explicit or non-consensual content,” the chatbot stated in an automated reply to CNBC-TV18, noting that it had “disabled the media feature and encourages reporting of violations via X.”
However, cyber-safety experts have cautioned that these partial restrictions may not suffice, as altered images persist on social media. They have argued that AI-generated image manipulation constitutes a form of sexual violence, especially involving minors, and have called for stronger enforcement and clearer accountability measures for platforms utilizing generative AI tools.