Altered Images Can Lead to Significant Damage: Sridhar Vembu Emphasizes the Necessity of AI Labeling Regulations

The government’s impending requirement for labelling AI-generated content is an essential regulation, as altered or manipulated images can inflict significant harm on individuals, stated Zoho founder and Chief Scientist Sridhar Vembu, who expressed his full support for the proposed AI labelling guidelines.

This remark from one of India’s leading tech figures comes as the government proposes revisions to IT regulations, which necessitate clear labelling of AI-generated content and enhance the accountability of platforms and other stakeholders in verifying and flagging synthetic information.

"This regulation is absolutely necessary, as altered images can cause considerable damage. I completely back this initiative," Vembu told PTI.

The Indian government's initiative to mandate the labelling of AI-generated content aims to enable users to critically assess such material and ensure that synthetic outputs do not misrepresent the truth, IT Secretary S Krishnan recently stated at an event, noting that the rules are close to being finalised.

This initiative, designed to mitigate user harm from deepfakes and misinformation, aims to impose responsibilities on two primary groups in the digital landscape: the providers of AI tools like ChatGPT, Grok, and Gemini, as well as social media platforms.

The draft regulations stipulate that companies must label AI-generated content with prominent markers and identifiers, covering at least 10 percent of the visual display or the first 10 percent of an audio clip’s duration.

The IT Ministry previously emphasized that the viral spread of deepfake audio, videos, and synthetic media on social platforms demonstrates how generative AI can create "convincing falsehoods," which can be "weaponized" for spreading misinformation, damaging reputations, manipulating or influencing elections, or committing financial fraud.

Concern over deepfakes and AI-related harm has intensified following the recent controversy over Elon Musk-owned Grok allowing users to generate inappropriate content. Users raised alarms about the AI chatbot's alleged misuse to "digitally undress" images of women and minors, sparking serious concerns over privacy violations and platform accountability.

In the days and weeks that followed, pressure mounted on Grok from governments across the globe, including India, as regulators intensified scrutiny over the generative AI engine concerning content moderation, data security, and non-consensual sexually explicit images.

The microblogging platform has since implemented technological measures to prevent Grok from generating images of real individuals in revealing clothing in jurisdictions where such actions are illegal.

On January 2, the IT Ministry reprimanded X and instructed it to immediately eliminate all vulgar, obscene, and unlawful content produced by Grok, warning of possible legal action if compliance was not met.

"Any violation of someone's privacy or any assault on it must be regulated. We will evolve, but our system is designed to respond swiftly to these issues," Vembu said.
