$10 billion for AI chips: OpenAI bets big to break Nvidia's monopoly
According to the Financial Times, OpenAI is expected to mass produce its first AI chip in 2026, in cooperation with Broadcom, one of the world's leading semiconductor corporations.

In recent years, artificial intelligence has become the hottest keyword in the technology industry. The explosion of large language models such as ChatGPT, Gemini or Claude has changed the way people work, learn and create. Behind the magic of AI is the enormous computing power provided by specialized chips. These chips are the real “heart” of the AI era.
Nvidia is currently the dominant name in AI chips with its A100 and H100 GPU lines. Without them, AI models would be difficult to train and operate at global scale. That overwhelming position now faces a major challenge, however, as OpenAI has announced plans to develop its own AI chips. The move is expected to shake up the entire semiconductor industry and open a new front in the tech world.
The plan, first reported by the Financial Times, is more than a cost decision: it is also a move by OpenAI to assert its leading position and escape its dependence on Nvidia. With it, the AI chip war has officially entered a fierce new phase.
1. OpenAI and the $10 billion AI chip order
For months, rumors swirled that OpenAI was working on its own chip project. But it wasn’t until September 4 that the tech world got concrete evidence, when Broadcom announced a $10 billion order from an unnamed customer. Speculation immediately turned to OpenAI.
The $10 billion figure is more than just a big commercial deal; it reflects OpenAI’s determination to change the game. With this money, OpenAI can build a complete AI chip pipeline, from design and testing to mass production. Partnering with Broadcom, a company with decades of experience in networking and system chips, should significantly shorten the time needed to realize the plan.
If analysts’ predictions are correct, this would be one of the largest chip orders in the history of the semiconductor industry, showing that OpenAI is ready to enter an extremely harsh field. Because chip manufacturing requires not only capital but also deep knowledge of microarchitecture, semiconductor technology and global supply chains.
2. Three motivations for OpenAI to make its own chips
The decision to make its own chips was not a matter of pure ambition. It was the result of a series of pressures and practical needs.
First is cost. Training AI models like GPT-4 or GPT-5 requires a huge amount of Nvidia GPUs. Each H100 GPU costs around $25,000 to $40,000, and a complete training system can require tens of thousands of them. This brings OpenAI’s operating costs to billions of dollars per year. With custom chips, OpenAI can design optimal hardware architectures for its models, reducing power consumption and increasing performance. When optimized for individual needs, computational costs are significantly reduced, providing a big advantage in the long run.
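The article’s figures allow a rough back-of-envelope estimate of the hardware bill alone. The fleet size and amortization period below are illustrative assumptions, not OpenAI’s actual numbers; only the per-GPU price range comes from the text:

```python
# Back-of-envelope GPU cost estimate using the article's H100 price range.
# FLEET_SIZE and AMORTIZATION_YEARS are illustrative assumptions.

H100_UNIT_COST_USD = (25_000, 40_000)  # price range cited in the article
FLEET_SIZE = 25_000                    # assumed: "tens of thousands" of GPUs
AMORTIZATION_YEARS = 4                 # assumed hardware lifetime

low, high = (price * FLEET_SIZE for price in H100_UNIT_COST_USD)
print(f"Hardware outlay: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")
print(f"Amortized per year: ${low / AMORTIZATION_YEARS / 1e9:.2f}B "
      f"to ${high / AMORTIZATION_YEARS / 1e9:.2f}B")
```

Even this conservative sketch puts the GPUs alone near a billion dollars, before power, networking, and data-center costs, which is what pushes total operating expenses into the billions per year.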
Second is supply. After ChatGPT was released, demand for Nvidia GPUs skyrocketed globally. Startups, tech companies, and even governments rushed to buy GPUs for AI training. The severe shortages forced many companies to wait months for their shipments. Even giants like Microsoft and Meta had to scramble for each batch of GPUs. For OpenAI, these delays were not only economically damaging, but also directly affecting its ability to innovate. Having its own chips meant it no longer had to rely entirely on Nvidia’s supply schedule.
The third and most important driver is strategic independence. Nvidia controls not only the hardware but also the CUDA software ecosystem, leaving OpenAI dependent on Nvidia’s upgrade, pricing, and distribution decisions. Designing its own chips is the only way for OpenAI to regain control of the entire value chain, from software to hardware. This mirrors how Apple succeeded by designing its own A-series and M-series chips instead of relying on Intel or Qualcomm.
3. Lessons from Google and Amazon
OpenAI isn’t the first company to develop its own AI chip. Google was one step ahead with its TPU (Tensor Processing Unit) line, introduced in 2016. TPUs are specifically designed for machine learning tasks, especially neural networks, and have become a mainstay in Google Cloud. TPUs help Google achieve much higher performance than traditional GPUs for some tasks, while also reducing the huge operating costs for services like YouTube and Google Search.
Amazon has also entered the game with two lines of custom AI chips: Inferentia and Trainium. Inferentia focuses on the inference stage, while Trainium is for model training. Developing its own chips allows Amazon Web Services to offer AI cloud services at competitive prices while maintaining more control over the technology.
These cases make it clear that to maintain leadership in AI, companies cannot rely solely on Nvidia. Developing their own chips has become a natural trend, almost like an unofficial “standard” in the industry. For OpenAI, the decision to embark on this path is a logical continuation, and an affirmation that they want to stand on par with other tech giants.
4. The AI chip war: a fight that goes beyond technology
The semiconductor industry is tough, and AI chips are even tougher. Nvidia now accounts for more than 80% of the global AI chip market, with tens of billions of dollars in revenue in just one quarter. Nvidia’s strength comes not only from its hardware but also from its CUDA ecosystem, a programming toolkit that has become the standard in AI research.
But this dominance also has its downsides. Nvidia’s high chip prices and limited supply have made many companies uncomfortable, which is where OpenAI and other competitors can step in. If OpenAI succeeds with its custom AI chips, it could significantly reduce costs while creating its own software ecosystem that competes directly with CUDA.
The AI chip war is about more than technology. It is also about geopolitics and global supply chains. The US, China, and many other countries are competing fiercely in the semiconductor space, and US export controls on advanced chips to China show the sector’s strategic importance. Making its own chips also helps OpenAI minimize risks from international political and trade factors.
5. OpenAI’s Ambitions and Future
The decision to invest in AI chips is not just a short-term strategic decision, but also reflects OpenAI’s long-term ambitions. The company wants to be more than just an AI software developer, it wants to control the entire technology infrastructure. With custom chips, OpenAI can create the optimal combination of software and hardware, paving the way for generations of AI models that are more powerful, more efficient, and cheaper.
In the future, we can envision a closed OpenAI ecosystem where the chips, models, and services are all controlled by the company itself. This is similar to how Apple creates tight coupling between the iPhone, iOS, and A-series chips, creating a distinct advantage over the competition.
But the road is fraught with challenges. Semiconductor manufacturing requires tens of billions of dollars in investment, cutting-edge technology, and an extremely complex supply chain. Even large corporations like Intel and Samsung have repeatedly encountered difficulties in developing new chips. For a young company like OpenAI, this will be a huge challenge.
Still, with the backing of Broadcom and the financial muscle of Microsoft and other major investors, OpenAI has the wherewithal to pursue this ambition. More importantly, they understand that if they don’t act now, their leadership in the AI industry could quickly be shaken.
6. Conclusion
The AI chip wars have officially entered a new phase as OpenAI announced plans to produce its own chips. This is not only a strategic move to cut costs and ensure supply, but also an affirmation of long-term ambitions.
If successful, OpenAI will not only compete head-to-head with Nvidia, but also lay the foundation for a comprehensive AI ecosystem where software and hardware are tightly intertwined. The race will not be easy, but it will certainly reshape the balance of power in the global technology industry.
In the next few years, when OpenAI’s first chips are released, the world will witness a fierce competition between Nvidia, Google, Amazon, and OpenAI. And no matter the outcome, the ultimate beneficiary will be humanity, as artificial intelligence becomes more powerful, cheaper, and more ubiquitous than ever.