Rohit Kumar, Founding Partner at The Quantum Hub, describes the framework as “not sustainable” and suggests it is “likely to fail strongly in implementation,” despite the government’s efforts to strike a balance between innovation and creator rights.
The government’s working paper, currently open for public comment for 30 days, proposes a centralized body to collect royalties from AI companies and distribute them to creators. The royalty rate would be determined as a percentage of a model’s global revenue, applying only once an AI system is commercialized. The plan also suggests retroactive payments for past data usage — a pioneering move on a global scale.
However, Kumar contends that India may be overstepping in its attempt to regulate an area where no country has established a workable solution.
“India appears to be among the first to undertake this. Globally, various models have been tested because unfortunately, no one has found the definitive answer,” he stated. “The committee has made a significant effort to reconcile competing interests — innovation on one side and copyright protection on the other. Yet, the solution they devised is not particularly feasible. It is not sustainable.”
‘Global revenue’ metric could undermine the model
A key flaw, according to Kumar, is the decision to base royalties on an AI model’s global revenue, irrespective of the contribution of Indian content to its training.
“Models are developed internationally, and some may have trained on minimal Indian content, yet we are linking global revenue to determine payouts,” he remarked. “In certain instances, models may not even operate independently. For example, Gemini has been incorporated into search. How do you ascertain what portion of search revenue should be attributed to Gemini? And on that basis, how do you determine what share should go to Indian creators?”
Such attribution challenges, in his opinion, could make the entire framework impractical.
Blanket mandatory licensing introduces its own risks
To ensure that India’s AI models access large, diverse datasets — particularly those pertinent to local contexts, languages, and applications such as agriculture — the proposal suggests a mandatory blanket license with no opt-out for creators.
While this aims to mitigate holdout scenarios that could obstruct model development, it presents its own set of risks.
As Kumar points out, a universal license diminishes creators’ agency, restricts individual negotiations, and assigns uniform value to all content, which does not reflect market realities. Major publishers that could typically command premium commercial agreements may feel particularly disadvantaged.
“A blanket rate fails to reflect the true value of diverse copyrighted materials,” he clarified. “Some creators who can currently demand more may find themselves undervalued, as their work may not receive recognition akin to what it would in a free market.”
Policy may end up not satisfying anyone
Kumar believes that the committee attempted a compromise: moderating commercial benefits for larger creators while enhancing access and potential revenue for smaller, unorganized creators who lack bargaining power. However, he warns that this approach might ultimately disappoint both sides.
“I believe the solution they have proposed is unlikely to satisfy either party,” he stated. “There are too many operational challenges and numerous moving components. The policy, as it stands, is likely to face significant implementation issues.”
A global first — and a global risk
India’s proposal is notable as one of the earliest efforts worldwide to establish a structured licensing system for AI training data. However, the absence of global benchmarks also means there are no tested models to reference. Kumar cautions that proceeding without addressing core design flaws might introduce uncertainty for both the creator ecosystem and the rapidly growing AI sector in the country.