The Moment China Pulled Ahead
In the past year, China has quietly pulled ahead of the United States in one of the most strategic corners of artificial intelligence: “open” models whose weights can be downloaded, modified, and deployed by anyone. A new study by the Massachusetts Institute of Technology and Hugging Face finds that Chinese-made open AI models now account for 17% of global downloads, edging out American developers at 15.8% and marking a symbolic shift in who supplies the building blocks of the world’s AI systems.
This is not merely a statistic; it represents a structural transformation in the digital economy. While US giants like Google and OpenAI guarded their secrets behind high-priced subscription walls, Chinese firms like DeepSeek, 01.AI, and Alibaba executed a strategy of “aggressive openness,” flooding the market with high-performance, free-to-use weights—effectively becoming the Android of the AI generation.
It’s important to clarify what “open” means here: open weights or modifiable models, not necessarily fully open-source in the strict legal sense. But the implications are profound. When developers worldwide build their applications on Chinese base models, they’re essentially choosing the foundation of the AI-powered future—and increasingly, that foundation is made in China.
What Are “Open” AI Models—and Why They Matter
Understanding the distinction between open and closed models is key to grasping why this shift matters so much.
Open models make their trained parameters—called weights—publicly available. Developers can download them, modify them, fine-tune them for specific tasks, and deploy them on their own infrastructure. Think of it as getting the recipe and all the ingredients, not just ordering a meal from a restaurant.
Closed models, by contrast, are accessible only through APIs controlled by the companies that created them—OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini. You can use them, but you can’t see how they work or modify them. You’re essentially renting computational intelligence.
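In code terms, the difference is concrete. The sketch below contrasts the two access patterns; the package names (`transformers`, `openai`), the model IDs, and the function names are illustrative assumptions, and actually running either function requires a GPU or an API key respectively:

```python
# Sketch of the two access patterns. Model IDs and packages are
# illustrative; neither function is invoked here.

def run_open_model(prompt: str) -> str:
    """Open weights: download the model once, then run it on your own
    infrastructure -- no external API, no per-token bill."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

def run_closed_model(prompt: str) -> str:
    """Closed model: every call goes through the vendor's API and is
    metered per token."""
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The practical consequence: with open weights you can fine-tune, quantize, and host the model anywhere; with an API you can only prompt what the vendor serves.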
Why open models matter:
For startups: Lower costs and faster iteration. Instead of paying $5 per million tokens to OpenAI, they can run Chinese models like DeepSeek-R1 for as little as $0.20 per million tokens—a 25x cost reduction that can make or break a young company’s economics.
For governments: Data sovereignty and local hosting. Countries worried about surveillance or dependency on US technology can deploy these models within their own borders.
For emerging markets: Access without gatekeepers. No need to rely solely on US giants with expensive pricing and restrictive terms of service.
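The startup cost gap above is simple arithmetic. A minimal sketch, using the per-million-token prices quoted above and a hypothetical monthly volume:

```python
# Back-of-the-envelope inference cost comparison. Prices are the
# per-million-token figures quoted in the text; the monthly token
# volume is a hypothetical illustration, and real pricing varies by
# provider, context length, and input/output mix.

def monthly_cost_usd(tokens_per_month: int, usd_per_million_tokens: float) -> float:
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

TOKENS = 500_000_000  # hypothetical startup: 500M tokens/month

closed_api = monthly_cost_usd(TOKENS, 5.00)  # closed frontier API rate
open_model = monthly_cost_usd(TOKENS, 0.20)  # self-hosted open-model rate

print(f"Closed API: ${closed_api:,.0f}/mo")      # $2,500/mo
print(f"Open model: ${open_model:,.0f}/mo")      # $100/mo
print(f"Ratio: {closed_api / open_model:.0f}x")  # 25x
```

At this volume the difference is $2,400 a month, but the ratio is what matters: it holds at any scale, and it compounds as usage grows.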
Many Chinese models are released under permissive licenses like Apache 2.0, allowing for commercial use and broad adoption—a strategic choice that builds global developer communities around Chinese technology.
Inside the MIT–Hugging Face Data
The numbers tell a clear story of shifting technological influence.
Core findings:
- Chinese open models: 17% of global downloads in the past year
- US developers: 15.8%
- This marks the first time China has overtaken the US in this critical segment
The analysis covers new open AI models released in the past year, not the entire historical catalog. But the trend is unmistakable and accelerating.
Looking at Hugging Face’s download rankings, Chinese models dominate the top positions. Alibaba’s Qwen (Tongyi Qianwen) series, DeepSeek, Zhipu AI’s GLM, and Baidu’s Ernie consistently appear among the most frequently downloaded models.
Important caveat: Access to Hugging Face is restricted within China, meaning domestic platform usage isn’t captured in these statistics. The actual usage of Chinese models is therefore likely even higher than these figures indicate.
In cumulative terms, Chinese open models have surpassed 540 million downloads as of October 2025, according to Atom Project analysis of Hugging Face data.
Beyond raw numbers, the quality and capabilities of Chinese models have reached parity with—and in some cases exceeded—their Western counterparts. This isn’t just about quantity; it’s about Chinese developers shipping world-class technology.
China’s Open-Model Playbook
China’s AI companies have adopted what observers call a strategy of “aggressive openness”—a stark contrast to the proprietary, closed approach favored by leading US firms.
Leading Players and Ecosystems
Alibaba Qwen (Tongyi Qianwen) Series
Qwen has become the flagship of China’s open-model strategy and now anchors the world’s largest open-source AI ecosystem, with over 100,000 derivative models built on it, surpassing Meta’s Llama community.
The latest Qwen3-Coder features 480 billion parameters and is designed for high-performance software development. On the HumanEval benchmark, Qwen-2.5-Coder scores 92%, rivaling the best closed-source models from Anthropic, yet it’s free to download.
Qwen models perform significantly better in languages like Thai, Vietnamese, Indonesian, and Arabic—markets often neglected by US-centric training data, giving them a competitive edge in the Global South.
Alibaba has committed to investing $53 billion over the next three years in cloud computing and AI infrastructure, signaling its long-term commitment to this space.
DeepSeek
The Hangzhou-based startup has emerged as “the biggest dark horse” in the open-source LLM arena in 2025, according to Nvidia senior research scientist Jim Fan.
DeepSeek V3 comes with 671 billion parameters and was trained in around two months at a cost of $5.58 million—a fraction of what US tech giants spend on comparable models. This efficiency demonstrates that resource constraints can drive spectacular innovation.
DeepSeek-R1, their reasoning model, performs at the same level as OpenAI’s o1 while being released as an open-weight model for research and commercial use.
Other Major Players
- Baidu Ernie: Recently made its chatbot free to the public, ahead of schedule
- Zhipu GLM: Strong connections with academia, popular for research applications
- Moonshot Kimi K2: A 1-trillion-parameter Mixture-of-Experts model that currently ranks ahead of Google, Anthropic, and Grok on intelligence leaderboards
- MiniMax: Shanghai-based AI unicorn with strong performance across multiple benchmarks
Fast-Release, Multi-Checkpoint Strategy
Chinese companies build their user base by shipping new models often and early, according to Hugging Face chief policy officer Irene Solaiman. The strategy combines releases in multiple parameter sizes, frequent checkpoint updates, and aggressive open-weight publishing.
The approach creates a feedback loop: better base models enable more sophisticated applications and research, which in turn drives demand for the next generation of models.
Policy Backdrop
State support has played a crucial role, with the National AI Open Innovation Platform providing shared access to AI datasets and computational tools. Chinese authorities encourage “open” models as a way to build domestic ecosystems and global influence, even as they maintain tight political controls on content.
This represents a seemingly paradoxical strategy: technical openness combined with political control. But it’s proving highly effective at gaining global market share.
Why US Developers Are Losing Share in Open Models
The contrast between Chinese and American approaches couldn’t be starker.
Closed Strategy Dominance
While the United States has focused on closed, proprietary models tightly controlled by companies such as OpenAI, Google, and Anthropic, Chinese tech groups have accelerated their release of open models.
OpenAI had not released open models since 2020, finally pivoting in August 2025. CEO Sam Altman conceded that the company may have been on the “wrong side of history” by maintaining a closed approach.
Meta’s Retreat
Meta initially pushed hard with its Llama series in 2023-2024, with Mark Zuckerberg arguing that “the world would benefit if AI companies shared their technology freely.” The company has since become more cautious, showing signs of moving toward a more restricted strategy.
Still Leading at the Frontier—But Not in Open
To be fair, US firms still dominate frontier, closed models at the very high end. GPT-4o, Claude Opus 4, and Gemini Ultra represent the pinnacle of current AI capabilities.
However, in the open-weight segment, US companies are now outnumbered and out-downloaded. And this matters enormously for the broader ecosystem.
The Startup Reality
Investor analysis suggests around 80% of AI startup pitches in the US now rely on Chinese open-source models. This phenomenon, dubbed “model laundering,” involves taking a Chinese open-source model, stripping its metadata, fine-tuning it on local data, and rebranding it.
A CTO of a Series-B fintech startup in Palo Alto, speaking anonymously, explains: “We tell our VCs we’re using ‘proprietary AI stacks.’ In reality, we’re fine-tuning DeepSeek-R1 because it costs us $0.20 per million tokens to run, whereas GPT-4o costs us $5.00. The math makes the decision for us”.
High-Profile Adoptions
Airbnb CEO Brian Chesky revealed in October that the short-term rental platform had opted for Alibaba’s Qwen over OpenAI’s ChatGPT, praising the Chinese model as “fast and cheap”.
Social Capital CEO Chamath Palihapitiya revealed the same month that his company had migrated much of its work to Moonshot’s Kimi K2 as it was “way more performant” and “a ton cheaper” than models from OpenAI and Anthropic.
Programmers on social media also recently highlighted evidence that two popular US-developed coding assistants, Composer and Windsurf, were built on Chinese models, though the developers haven’t publicly confirmed this.
The Hardware and Sanctions Angle
The irony of US export controls inadvertently accelerating China’s open-source strategy cannot be overstated.
Constraints Breed Innovation
Since 2023, the US government has blocked exports of Nvidia’s most powerful chips (H100/H200) to China. The intended effect was to cripple China’s ability to train frontier models.
However, analysts now suggest this created a “Darwinian pressure cooker” for Chinese software engineers. “American developers got lazy because they had unlimited compute,” one expert explains. Meanwhile, Chinese firms were forced to focus on efficiency—smaller models, better distillation, quantization.
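The efficiency pressure is easy to quantify. A back-of-the-envelope sketch of weight-memory footprints at different quantization levels, using the 671-billion-parameter figure cited above for DeepSeek V3 (illustrative only; real deployments also need memory for activations and KV cache):

```python
# Rough memory footprint of model weights at different precisions.
# Bits-per-weight values are the standard ones for each format;
# the parameter count is the DeepSeek V3 figure quoted in the text.

def weights_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

N = 671e9  # DeepSeek V3 total parameters

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: {weights_gb(N, bits):,.1f} GB")
```

Halving the precision halves the hardware bill: a model that needs roughly 1.3 TB of weight memory in fp16 fits in about a quarter of that at int4, which is exactly the kind of saving that matters when the newest accelerators are off the table.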
Technical Workarounds
Chinese companies have found several ways to navigate chip sanctions:
- Training some models in offshore data centers to access Nvidia hardware
- Building domestic alternatives through partnerships with Huawei
- Developing efficient training techniques using older-generation chips
Developers such as Beijing-based Z.ai and Hangzhou-based DeepSeek have reported training on relatively small quantities of older-generation chips that are not subject to US export controls, dramatically reducing training and running costs compared with their Silicon Valley rivals.
University of New South Wales AI expert Toby Walsh notes: “The success of these Chinese models demonstrates the failure of export controls to limit China. Indeed, they’ve actually encouraged Chinese companies to be more resourceful and build better models that are smaller and are trained on and run on older generation hardware”.
Global Economic and Geopolitical Implications
China’s lead in open models translates into influence that extends far beyond technology.
Standards Power
By becoming the foundational layer for thousands of apps, Chinese firms set the standards for data formatting and API structures. When many countries and companies build on Chinese models, Chinese defaults and content controls travel with them.
This means AI now joins 5G, apps, and cloud as an arena where Beijing can offer a full “stack” of technology to partner countries—creating dependencies that could prove strategically significant.
Influence in the Global South
In nations like Brazil, India, and Nigeria, where internet bandwidth and expensive cloud credits are barriers, the lightweight, downloadable nature of Chinese models makes them the default choice.
Emerging markets may prefer low-cost, flexible open models from China over US API pricing and restrictions. This creates a bifurcated global AI ecosystem: wealthy countries and enterprises use premium US models, while the rest of the world builds on Chinese foundations.
Adoption Examples Across Industries
Chinese AI tools, including MiniMax’s M2, Z.ai’s GLM 4.6 and DeepSeek’s V3.2, took up seven spots among the 20 models with the most usage last week, according to data from OpenRouter, a platform that connects developers with AI models.
Among the top 10 models used for programming, four were developed by Chinese firms, showing particular strength in developer tools—arguably the most strategic application area.
Industry Leaders Take Notice
Nvidia founder and CEO Jensen Huang has been particularly vocal in his praise, describing LLMs developed by Chinese firms—including DeepSeek, Alibaba, Tencent, MiniMax and Baidu—as “world-class”.
Huang noted: “Don’t forget that open source has many global implications. Not only did the open-source models help the Chinese ecosystem; they are helping ecosystems around the world”.
Risks: Bias, Censorship, and Security
The proliferation of Chinese models comes with significant concerns that shouldn’t be overlooked.
Ideological Bias and Censorship
Studies show that Chinese open models often reflect the government’s viewpoints. They typically avoid sensitive topics like Taiwan or Tiananmen Square and follow official narratives.
When it came to political questions, DeepSeek’s Chinese version mostly refused to answer or followed strict government narratives. This was highlighted during the model’s meteoric rise in early 2025.
These constraints are “baked into the weights” and therefore travel with the models when redeployed abroad, raising concerns for:
- Information integrity in democratic societies
- Corporate and governmental use where political neutrality matters
- Educational applications where balanced perspectives are essential
Data Governance and Security
While the weights are open, the applications built on them often send data back to remote servers. South Korea recently fined DeepSeek for transferring user data to servers in China without adequate consent.
Additional security concerns include:
- Supply-chain risks in critical applications
- Difficulty of auditing large open models for hidden behaviors
- Potential for embedded surveillance capabilities
- Intellectual property concerns for companies using these models
Nathan Benaich, founder of venture capital firm Air Street Capital, notes: “The biggest factor where this matters is for government and high-stakes enterprise applications where security is paramount. There are natural concerns over what data the model was pre-trained and post-trained on and whether it exhibits behaviors the company wouldn’t want”.
Investment and Policy Watch List
For Investors
Opportunities to watch:
- Chinese cloud providers, AI infrastructure firms, and chip designers benefiting from the open-model boom
- Companies building complementary tools and services around Chinese open models
- Western firms (including US startups) building products on Chinese open models—though this carries dependency risk
Potential risks:
- Dependency risk if export controls tighten
- Geopolitical tensions leading to sudden regulatory changes
- Reputational concerns about using Chinese AI infrastructure
- Sustainability questions around current Chinese pricing strategies
For Policymakers
Key debates:
- Whether to support open-source AI more aggressively in the US and Europe to avoid ceding this layer to China
- Emerging proposals for open-model governance, audits, and possible use of “trusted” model lists
- Balancing innovation benefits of openness against security and censorship concerns
- Reassessing whether export controls are achieving their intended effects
Ahead of releasing open-source models, OpenAI CEO Altman said he was “excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all, and for wide benefit”, signaling a potential shift in US strategy.
What to Watch
- Next MIT/Hugging Face reports: Will China’s download lead hold, or widen, in subsequent data?
- Regulatory moves: The US Commerce Department is debating whether to restrict Americans from downloading specific foreign code repositories—though legal experts say this would be nearly impossible to enforce
- New flagship Chinese models: Qwen next generation, DeepSeek V4, and others in the pipeline
- US response: Rumors of Meta accelerating Llama 4 release; OpenAI’s new open-source strategy
Conclusion: The Dawn of a New AI Order
China’s overtaking of the United States in global open AI model downloads is not a statistical anomaly. It represents a structural shift in how AI technology is developed, distributed, and deployed worldwide.
As Adina Yakefu, an AI researcher at Hugging Face, observes: “The collective shift towards open source among Chinese AI companies is more than symbolic; it reflects a growing consensus that open source accelerates iteration, builds trust, and expands global influence”.
The transformation appears irreversible. The question is no longer whether open-source AI will challenge proprietary models, but whether the West can compete with China’s collaborative, accessible approach to artificial intelligence development.
For investors, policymakers, and businesses, the message is clear: the global AI order is being reorganized, and China’s open-model strategy sits at its center. Understanding and adapting to this shift will determine success in the years ahead.
The geopolitical implications extend beyond technology. As one analyst put it, we’re witnessing the emergence of two parallel AI ecosystems—one expensive and closed, centered on US tech giants; the other cheap and open, increasingly dominated by Chinese developers. Which will ultimately prove more influential may depend less on which produces the most powerful individual models, and more on which succeeds in becoming the foundation for the majority of AI applications worldwide.
Based on current trends, that foundation is being laid in China.

