So, here’s the tea: investors in chip companies like Nvidia (NVDA), AMD, and Micron (MU) were kinda freaking out recently. Why? A Chinese startup called DeepSeek figured out how to train AI models for way cheaper than U.S. companies do. But don’t hit the panic button just yet! Mark Zuckerberg, CEO of Meta, just dropped some reassuring comments suggesting that long-term demand for AI chips is still gonna stay strong. Phew, right?
What Happened?
Lately, shares of chip giants like Nvidia, AMD, and Micron took a bit of a nosedive. The reason? DeepSeek, that same Chinese startup, managed to train an AI model at a fraction of the cost compared to U.S. companies. Investors started worrying: if more companies adopt DeepSeek’s methods, demand for GPUs and data center components for AI might drop big time. But turns out, they might’ve been overreacting.
Meta Platforms (you know, Facebook’s parent company) is one of the biggest buyers of AI chips from Nvidia and AMD. On January 29, Mark Zuckerberg said something that made investors breathe a sigh of relief. He explained that while the need for chips to train AI models might decrease, the demand for chips used in inference (when AI responds to user requests) is actually going to skyrocket. Translation? High-performance chips are still gonna be in hot demand.
Who’s DeepSeek, Anyway?
DeepSeek was founded in 2023 by High-Flyer, a Chinese hedge fund that’s been using AI to build trading algorithms for years. Recently, they launched a large language model (LLM) called V3, which they claim performs on par with OpenAI’s GPT-4o. Here’s the kicker: DeepSeek says it spent just $5.6 million to train V3, while OpenAI has reportedly burned through more than $20 billion since its founding in 2015. (Not exactly an apples-to-apples comparison, since that $20 billion covers everything OpenAI does, but the gap is still striking.)
Even crazier? DeepSeek didn’t have access to Nvidia’s latest GPUs because of U.S. export restrictions. Instead, they made do with chips like the H800, a deliberately slowed-down, export-compliant version of the flagship H100. How’d they pull it off? They got creative on the software side: smarter algorithms, more efficient ways of feeding in data, and a technique called distillation, where a smaller model learns from a bigger one, to cut training costs.
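For the curious, distillation in a nutshell: instead of learning only from hard "right answer" labels, a small "student" model is trained to match the full output probabilities of a big "teacher" model. Here’s a minimal sketch of the core loss in plain Python (this is a generic textbook version with made-up numbers, not DeepSeek’s actual code):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions.
    # A higher temperature surfaces the teacher's "almost right" answers,
    # which carry extra signal the student can learn from.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a 3-class toy example:
teacher = [4.0, 1.0, 0.2]
student = [3.0, 1.5, 0.5]
print(distillation_loss(teacher, student))
```

The student minimizes this loss during training, so it ends up mimicking the teacher’s judgment at a fraction of the size and cost.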
Investor Worries
Investors were scared that if other AI companies followed DeepSeek’s playbook, demand for Nvidia and AMD GPUs would tank. This could also hurt Micron’s sales of memory solutions for data centers. But Zuckerberg stepped in to calm nerves. He pointed out that even if training demands go down, inference needs will go up. So, companies will still need those high-performance chips.
Why Nvidia, AMD, and Micron Are Still Winning
Nvidia’s the king of AI GPUs. For its fiscal year 2025 (which just ended), it’s looking at revenue of around $128.6 billion, roughly double the year before. AMD’s also stepping up its game, planning to launch the MI350 GPU this year to compete with Nvidia’s latest chips. Meanwhile, Micron, often overlooked but super important, supplies the HBM3E memory used in Nvidia’s GPUs.
Zuckerberg’s Reassuring Words
Meta spent a whopping $39.2 billion on chips and data center infrastructure in 2024, and they’re planning to shell out up to $65 billion this year. Zuckerberg emphasized that while training-related chip demand might dip, inference-related demand will rise. Plus, AI adoption among users is still growing fast, meaning data centers will need even more capacity.
So yeah, while DeepSeek’s innovations might shake things up, Zuckerberg’s outlook suggests investors in Nvidia, AMD, and Micron don’t need to stress. These stocks still have a bright future ahead.
DeepSeek’s breakthrough had investors worried about the future of AI chips, but Mark Zuckerberg’s comments suggest that demand for high-performance chips isn’t going anywhere. Whether it’s for training or inference, AI is here to stay—and so is the need for powerful hardware.