2026-02-01

One Year After DeepSeek: The $6 Million Model That Rewired the AI Industry

In January 2025, DeepSeek-R1 wiped $1 trillion from the stock market and shattered the assumption that building frontier AI required billions. One year later, we look at what actually changed — and what didn't.

By intelliBrain
AI · Open Source · DeepSeek · Industry Analysis

The Day Everything Changed (Or Didn't)

On January 20, 2025, a Chinese AI lab called DeepSeek released its R1 reasoning model. Within a week, roughly $1 trillion in market value evaporated. Nvidia dropped 17% in a single day — still the largest single-day market cap wipeout on record. Broadcom fell by a similar margin. Google lost 4%.

The reason was simple and terrifying for Big Tech: DeepSeek reported a training cost of roughly $6 million — a figure covering the final training run, not total R&D — achieved on older Nvidia H800 GPUs acquired under US export restrictions. The model matched or beat OpenAI's and Meta's frontier models on key benchmarks. The entire industry had been operating on a single assumption — more GPUs plus more data equals smarter models — and a small team in Hangzhou had just blown a hole through it.

The Sputnik Moment That Actually Worked

The "Sputnik moment" comparisons came fast. Marc Andreessen called it exactly that. And looking back, the analogy holds — not because DeepSeek proved everyone else wrong, but because it was a kick that accelerated what was already coming.

Within weeks, the industry pivoted. OpenAI declared its GPT-4.5 would be the last non-reasoning model, then later released its own open-weight model, GPT-OSS, to compete directly with DeepSeek and Meta's Llama. Musk's xAI rushed reasoning capabilities into Grok 3. Google positioned Gemini 2.5 Pro as its reasoning flagship. Meta — whose executives reportedly panicked that their upcoming Llama 4 wouldn't keep up — eventually bet everything on building a dedicated "superintelligence" team from scratch.

DeepSeek's key efficiency technique was a mixture-of-experts (MoE) architecture: instead of pushing every token through one massive monolithic network, a learned router sends each token to a small subset of specialized expert sub-networks, so only a fraction of the model's parameters are active at any given moment. It proved you didn't need 100,000 of Nvidia's latest GPUs. You needed better architecture.
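The routing idea above can be sketched in a few lines. This is a toy illustration of top-k expert routing in NumPy — not DeepSeek's actual implementation, and with made-up sizes (8 experts, top-2 routing, 16-dimensional tokens) chosen purely for readability:

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, N_EXPERTS, TOP_K = 16, 32, 8, 2  # toy sizes; production models use far more experts

# Each expert is a tiny two-layer MLP; only TOP_K of them run per token.
W1 = rng.normal(0, 0.02, (N_EXPERTS, D, H))
W2 = rng.normal(0, 0.02, (N_EXPERTS, H, D))
W_router = rng.normal(0, 0.02, (D, N_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (tokens, D). Route each token to its top-k experts and mix their outputs."""
    logits = x @ W_router                            # (tokens, N_EXPERTS) routing scores
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]    # indices of the chosen experts
    sel = np.take_along_axis(logits, top, axis=-1)   # logits of chosen experts only
    gate = np.exp(sel - sel.max(-1, keepdims=True))  # softmax over the selected experts
    gate /= gate.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                      # per-token loop, for clarity
        for k in range(TOP_K):
            e = top[t, k]
            h = np.maximum(x[t] @ W1[e], 0)          # ReLU MLP expert
            out[t] += gate[t, k] * (h @ W2[e])       # gate-weighted mixture
    return out

tokens = rng.normal(size=(4, D))
y = moe_layer(tokens)
print(y.shape)  # (4, 16)
```

The payoff is in the loop: each token touches only 2 of the 8 experts, so compute per token scales with the experts used, not with total parameter count — which is how a large MoE model can be far cheaper to train and serve than a dense model of the same size.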

The Markets Recovered. Then Soared.

Here's the thing about the trillion-dollar panic: it lasted about a week. The Nasdaq 100 erased its losses within days. And since the DeepSeek shock, Nvidia is up 60%, Broadcom 65%, and Google 75%. OpenAI's valuation swelled to $500 billion by December 2025. Nvidia became the first $5 trillion company in late October.

Satya Nadella invoked the Jevons Paradox — when something gets cheaper, we use more of it, not less. That's exactly what played out. Cheaper AI training didn't kill demand for compute. It supercharged it.

The Chinese AI Race Intensified

One year on, the competitive dynamics DeepSeek unleashed are more visible than ever. In the last week alone, multiple Chinese companies released new models:

  • Moonshot AI unveiled Kimi K2.5 with video generation and agentic capabilities, claiming to outperform leading US models
  • Alibaba launched Qwen3-Max-Thinking, claiming top scores on the "Humanity's Last Exam" benchmark
  • Baidu released Ernie 5.0, pushing its stock to a three-year high
  • Z.ai had to restrict new signups for its GLM 4.7 coding tool after demand overwhelmed its compute capacity

DeepMind CEO Demis Hassabis told CNBC this month that China's AI models may be "just months behind" those developed in the US. The open-source strategy is paying off: Microsoft estimates that DeepSeek usage in Africa is two to four times higher than in other regions, as Chinese open-source models gain adoption in emerging economies.

What About R2?

As for DeepSeek itself — the much-anticipated R2 model still hasn't shipped. Reports cite slow data labelling and chip issues, particularly pressure from Chinese authorities to adopt Huawei's Ascend chips, which suffer from stability problems and inferior software. DeepSeek is reportedly preparing a V4 model instead, and it's possible R2's reasoning capabilities are simply baked into V4.

Meanwhile, both Alibaba and Alibaba-backed Moonshot AI pushed out new models this past week — seemingly timed to get ahead of whatever DeepSeek releases next.

The Real Legacy

DeepSeek didn't kill Big Tech's AI ambitions. If anything, Meta, Microsoft, and Google are spending more than ever — Meta alone plans $115–135 billion in capital expenditure for 2026, nearly double 2025's $72 billion.

What DeepSeek actually did was break the monoculture. It proved that architectural innovation matters as much as brute-force compute. It mainstreamed open-weight models. It forced every major player to ship reasoning capabilities. And it demonstrated that the most interesting AI advances don't always come from the most well-funded labs.

One year on, the AI industry is faster, more competitive, and more architecturally diverse than it was before that one week in January 2025. Whether that qualifies as a Sputnik moment or just a very productive panic depends on your perspective. Either way, nothing has been quite the same since.

intelliBrain

AI-augmented software development. Based in Zürich, working globally.

© 2026 intelliBrain GmbH. All rights reserved.