OpenAI, Anthropic, and Google Unite to Tackle Model Imitation in China
Rivals OpenAI, Anthropic PBC, and Alphabet Inc.’s Google have started collaborating to address the challenge posed by Chinese competitors that leverage advanced US artificial intelligence models to gain an advantage in the global AI landscape.

The companies are exchanging information through the Frontier Model Forum, an industry nonprofit established by the three tech giants alongside Microsoft Corp. in 2023, to identify so-called adversarial distillation attempts that breach their service agreements, according to sources familiar with the situation.

This unusual collaboration highlights a significant concern among US AI companies: that certain users, particularly in China, are creating imitation versions of their products, potentially undercutting prices and siphoning away customers while also presenting a national security threat. Estimates from US officials suggest that unauthorized distillation could cost Silicon Valley companies billions in annual profits, as reported by a source familiar with these findings who spoke on the condition of anonymity.

OpenAI has confirmed its involvement in the information-sharing initiative on adversarial distillation through the Frontier Model Forum and referenced a recent memo sent to Congress on the subject, in which it accused the Chinese company DeepSeek of attempting to “free-ride on the capabilities developed by OpenAI and other US frontier labs.” Google, Anthropic, and the Frontier Model Forum declined to comment.

Distillation refers to a method in which an older “teacher” AI model is employed to train a newer “student” model, enabling it to replicate the functionality of the former—often at a significantly lower cost than developing a brand-new model from scratch. Certain forms of distillation are widely accepted and even encouraged by AI laboratories, such as when firms create more compact, efficient versions of their own models or permit external developers to utilize distillation to craft non-competitive technologies.
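To make the teacher-student relationship concrete, here is a minimal sketch of the core objective in classic knowledge distillation: the teacher's output logits are softened with a temperature, and the student is trained to match the resulting probability distribution (typically via a KL-divergence loss). This is an illustrative simplification, not the method any particular lab uses; the function names and values are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature yields a softer
    distribution, exposing more of the teacher's relative preferences."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions -- the quantity a student model is trained to minimize."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits already match the teacher's incurs (near-)zero loss;
# a mismatched student incurs a positive loss and would be nudged toward
# the teacher's distribution during training.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))
```

In practice the "data" for this process is simply a large volume of teacher outputs, which is why the labs say they watch for unusually large-scale querying of their models.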

However, distillation has stirred controversy when employed by third parties—particularly in countries like China or Russia—to replicate proprietary work without consent. Leading US AI laboratories have cautioned that foreign adversaries might exploit this technique to fabricate AI models devoid of safety measures, such as restrictions preventing users from generating hazardous pathogens.


Most models produced by Chinese laboratories are open weight, meaning the model’s trained parameters are publicly available for users to download and run on their own infrastructure at lower operational cost. This presents an economic challenge for US AI companies that have maintained proprietary models, banking on the premise that customers will pay for access to their offerings to help recoup the hundreds of billions spent on data centers and related infrastructure.

Distillation first garnered significant scrutiny in January 2025 following DeepSeek’s unexpected launch of the R1 reasoning model, which made a substantial impact in the AI sector. Shortly thereafter, Microsoft and OpenAI investigated whether the Chinese startup had improperly extracted large volumes of data from the US company’s models to develop R1, as reported by Bloomberg.

In February, OpenAI alerted US lawmakers that DeepSeek had employed increasingly sophisticated methods to extract results from its models even amid intensified efforts to curb misuse of its products. In a memo to the House Select Committee on China, OpenAI asserted that DeepSeek was utilizing distillation to create a new version of its innovative chatbot.

The information-sharing among US AI companies regarding adversarial distillation reflects a conventional practice in the cybersecurity domain, where companies routinely exchange data on attacks and adversarial tactics to bolster network defenses. By collaborating, the AI companies seek to more effectively identify such practices, ascertain accountability, and attempt to thwart unauthorized users from succeeding.

Trump administration officials have expressed their willingness to promote information exchange between AI companies to mitigate adversarial distillation. The AI Action Plan unveiled last year by President Donald Trump called for establishing an information-sharing and analysis center partly for this purpose.

As it stands, information sharing about distillation remains limited due to AI companies’ uncertainties regarding what can be shared under existing antitrust guidelines to combat the competitive threats from China, according to insiders. These companies would benefit from increased clarity from the US government, the sources indicated.

Distillation has been a primary concern for American AI developers ever since DeepSeek shook global markets in early 2025 with its R1 launch. Highly capable open-source models continue to proliferate in China, and many industry observers are closely monitoring for significant updates to DeepSeek’s model.

Last year, Anthropic restricted access to its Claude chatbot model for companies controlled by Chinese interests, and in February, it identified three Chinese AI laboratories—DeepSeek, Moonshot, and MiniMax—as illicitly extracting the model’s capabilities via distillation. This year, Anthropic stated that the threat “extends beyond any single company or region” and poses a national security risk, as distilled models often lack safety measures meant to deter misuse of AI tools for harmful activities.

Google has published a blog stating that it has observed a rise in attempts at model extraction. The three US AI laboratories have yet to provide evidence detailing how much of China’s model innovation relies on distillation, but they do note that the frequency of attacks can be gauged by assessing the volume of large-scale data requests.
