This past week has been dominated by online debates comparing American tech giants to Chinese AI companies, and to be honest, I am sick of it! If you happened to unplug from social media over the past week, here's the short version: the drama stems from the release of DeepSeek-R1, a large reasoning model (LRM) trained by the Chinese AI company DeepSeek for around $5.5 million as a competitor to OpenAI's o1 and o3 LRMs (both of which assuredly cost far more than $5.5 million). What started out as open-source researchers getting excited over an incredibly detailed research report on how to create LRMs for "cheap" quickly spiraled into a stock market panic, with everyone clutching their pearls over the future of AI. This freak-out (much like the AI hype that preceded it) is astronomically overblown. Everyone needs to calm down and take a peek behind the curtain.
What is Reasoning?
Reasoning, in the context of AI, refers to the ability to engage in multi-step planning to solve problems or achieve goals. While this sounds straightforward, it's surprisingly difficult to define precisely. For instance, does writing a book require reasoning? What about filling out a calendar? These tasks may seem to involve reasoning, but the line between reasoning and simple task execution can get blurry. As a result, we often end up associating reasoning in AI with more structured domains like mathematics and programming, where clear, logical steps are required. This ambiguity is not just a challenge; it's also a feature that fuels the hype. The less clearly reasoning is defined, the easier it becomes to sell grand visions of AI, because reasoning is billed as the key ingredient for moving beyond narrow AI systems toward Artificial General Intelligence (AGI). The ambiguity lets companies keep that dream alive, claiming progress in reasoning while glossing over the very real limitations of their systems.
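To make the distinction a bit more concrete, here is a deliberately toy sketch in Python. It is purely my own illustration (it is not anything from DeepSeek's or OpenAI's actual training pipelines); the only point is that a "reasoning" answer spells out intermediate steps, while a "non-reasoning" answer jumps straight to a result.

```python
# A toy illustration of "reasoning" as multi-step planning, written by hand.
# Real LRMs like DeepSeek-R1 are trained to generate traces like this as text;
# this sketch just mimics the shape of such a trace for a simple word problem.

def solve_directly(apples_per_bag: int, bags: int, eaten: int) -> int:
    """'Non-reasoning' style: a single opaque computation, no visible steps."""
    return apples_per_bag * bags - eaten

def solve_with_trace(apples_per_bag: int, bags: int, eaten: int) -> int:
    """'Reasoning' style: the same arithmetic, but every step is made explicit."""
    total = apples_per_bag * bags
    print(f"Step 1: {bags} bags x {apples_per_bag} apples = {total} apples")
    remaining = total - eaten
    print(f"Step 2: {total} apples - {eaten} eaten = {remaining} apples left")
    return remaining

if __name__ == "__main__":
    # Both paths agree on the answer; only the visibility of the steps differs.
    assert solve_directly(6, 4, 5) == solve_with_trace(6, 4, 5) == 19
```

Of course, you can decompose writing a book or filling out a calendar into steps like this too, which is exactly why the definition stays blurry and why "reasoning" keeps getting anchored to math and code, where the steps can at least be checked.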
Why You Should Be Mad
I don’t expect the casual reader to understand how AI works any more than I expect the average person to understand the circuitry in their cellphone. But as someone entrenched in AI research, I feel it is my responsibility to cut through the noise and tell you what’s really going on in AI today. The current mouthpieces for AI have shown how little they know about the technology itself. The number of self-proclaimed prophets preaching the wonders (or horrors) of the coming AI future sounds more like the Crypto Bros of the pandemic than the grounded scientists and researchers I’ve met in the field.
Here’s the first thing that pisses me off: No, AGI is not just around the corner, despite what these so-called prophets (including Mark Zuckerberg) say. We are not about to replace all entry-level and mid-level software engineers with AI. Modern AI is fascinating, but these prophets have two motives: one, to attract venture capital for their AI startups; two, in Zuckerberg’s case, to justify mass layoffs of American workers while quietly offshoring much of Meta’s work to India or Mexico.
And here’s the part that should infuriate you: People like me have let these prophets lie to you because we’ve benefited from it. We’ve extracted money from them to fund our research and line our pockets. While we can argue that this money comes from millionaires and billionaires, the truth is that AI scientists have let greed and ambition get in the way of good science and moral integrity. This hubris set the stage for this week’s events.
The DeepSeek-R1 Hype
DeepSeek deserves credit for what they’ve achieved. I’ve been following their work since DeepSeek-MoE, and they consistently produce awesome research that has put them at the forefront of work on the Transformer architecture. DeepSeek-R1 is a truly impressive accomplishment: it has dramatically reduced the cost of training LRMs and given a much-needed boost to open-source research. However, this isn’t the OpenAI killer everyone fears it is. And to be honest, it’s not even close.
We have to keep in mind that LRMs are not the end goal for companies like OpenAI and Meta. These companies are myopically chasing AGI, and they’ll continue to throw obscene amounts of money at the problem until they achieve their goals. And, the thinking goes, once AGI is developed it will only need to be trained once, so who cares about the cost? These massive capital expenditures aren’t a flaw in the system; they’re the system’s design. They let researchers focus on pushing boundaries rather than optimizing costs.
So let’s call the AI bubble what it is: hype inflated by exaggeration and hubris. DeepSeek didn’t kill OpenAI, and they didn’t shatter the AI industry. What they did, perhaps unintentionally, is expose how little AI companies have done to educate the public about what AI can actually do. The field has made promises our "lightning-filled rocks" can’t yet keep. To keep the charade going, some have resorted to dismissing DeepSeek’s work as Chinese propaganda. It’s not. DeepSeek has released genuinely great research over the past two years, and while this isn’t a "splitting the atom" level discovery, it’s worth celebrating.
Smoke and Mirrors
And now we come to the uncomfortable truth that everyone who works in AI has been hiding from you: The kind of AI that Twitter and the AGI prophets love to talk about isn’t ready for market. It’s a smoke-and-mirrors show, buying time until the substance arrives. This doesn’t mean all AI is bullshit. It just means that we don’t have some kind of human-in-a-can robot that never eats, sleeps, or gets depressed. What we have right now are highly targeted and focused AI solutions that help **people** navigate the ever-expanding sea of data we have terraformed for ourselves. But the gap between practical AI and the sci-fi promises of AGI is massive.
As a researcher, I can’t ignore the damage done by overhyping AI. It has created unrealistic expectations, fostered mistrust, and buried genuinely valuable work under a mountain of noise. DeepSeek-R1 should remind us that AI’s progress is incremental, not revolutionary. And that’s okay. Science thrives on slow, steady improvement—not flashy announcements and billion-dollar valuations.
The Bottom Line
AI’s day will come. I truly believe it will change our lives in profound ways. But no, the sky is not falling, and AGI is not just around the corner. Instead of buying into the hype, let’s celebrate genuine achievements like DeepSeek-R1 for what they are: steps forward in a long journey. And let’s demand better from the prophets, companies, and researchers like me who are steering the AI narrative. We all owe the world honesty, not just spectacle.
These are my feelings after being asked about DeepSeek-R1 by everyone I know, and I understand that they come across as quite raw and unprofessional. As such, I might delete or edit this post later so I can actually get hired in the future, but for now I hope you found it valuable to read.
You are spot on about Meta (and Amazon et al). It is a massive outsourcing move. Nice post!
Great post Nathan! I totally feel the frustration too, here are my thoughts on yours:
1. The “benefiting from the AI hype” aspect is a double-edged sword. On one hand, it helps professionals like us land well-compensated jobs or lucrative service contracts. On the other, it places us in the difficult position of having to educate and manage expectations for non-technical decision-makers—both internally and externally. More often than not, this makes us sound like the black clouds of AI.
2. It’s not just reasoning that has been vaguely defined—the very concept of "intelligence" (with all the "artificial" flavors you want to add to it) remains scientifically immeasurable. That said, I belong to the pragmatic emergentist camp that believes this technology will force us to define intelligence in ways we don’t yet fully understand. In many ways, I think AI will end up teaching us more about ourselves and our own brains than we expect.
3. Shouldn’t we just call this era (2022-present) the age of “language-based automation (LBA)” rather than simply “AI”? It seems like a much more accurate representation of what’s happening. The problem is that "an LBA algorithm trying to take over the world" doesn't sound as fun in movie scripts and clickbait articles.
4. Given all the hype cycles and misinformation, I’ve purposefully chosen to ignore most of the DeepSeek drama of the past few days. I’d rather wait for the dust to settle—by then, the media (both social and mainstream) will have likely moved on to their next obsession or controversy of the month and the implications will be much clearer.
Keep fighting the good fight!