OpenAI To Develop In-House AI Chips, Reducing Nvidia Reliance
![Openai Ai Chip Design Process](https://timesng.com/wp-content/uploads/2025/02/openai-ai-chip-design-process.jpg)
SAN FRANCISCO, Feb 10 (Reuters) – OpenAI is advancing its efforts to lessen dependence on Nvidia for its chip supply by developing its first generation of in-house artificial intelligence (AI) silicon. The ChatGPT maker is finalizing the design of its inaugural in-house chip over the next few months, with plans to send the design for fabrication at Taiwan Semiconductor Manufacturing Co. (TSMC), sources familiar with the situation told Reuters.
The process of sending a first design through a chip factory is known as ‘taping out.’ OpenAI and TSMC declined to comment on the matter. This development suggests that OpenAI is on track to meet its ambitious goal of mass production at TSMC by 2026.
A typical tape-out can cost tens of millions of dollars and may take about six months to produce a finalized chip, unless OpenAI opts for expedited manufacturing at a higher cost. However, there are no guarantees that the silicon will function correctly on the first tape-out, which could necessitate further diagnostics and potential repeated tape-out attempts.
Internally, the training-focused chip is regarded as a strategic asset expected to enhance OpenAI’s leverage in negotiations with other chip suppliers. After the initial chip is completed, OpenAI’s engineers plan to create more advanced processors capable of broader applications with each subsequent iteration.
If the initial tape-out is successful, the company could mass-produce its first in-house AI chip and potentially begin testing it as an alternative to Nvidia’s chips later this year. The speed of OpenAI’s progress on this design is noteworthy, especially as similar processes can take other companies years longer.
Even major tech players such as Microsoft and Meta have struggled to produce satisfactory chips despite years of investment. OpenAI’s push also comes amid concerns that shifting market dynamics could alter demand for the chips that power advanced AI models.
The chip is being designed by an in-house team led by Richard Ho, which recently doubled to 40 engineers, in collaboration with Broadcom. Ho previously led Google’s custom AI chip program before joining OpenAI more than a year ago. According to industry sources, designing a new chip for a large-scale program can cost up to $500 million for a single version, and that figure can double when factoring in the software and peripheral development needed around it.
As generative AI model creators like OpenAI, Google, and Meta push for ever more processing power, demand for chips has soared. Meta has announced plans to invest $60 billion in AI infrastructure over the next year, while Microsoft intends to spend $80 billion in 2025. Nvidia currently controls roughly 80% of the AI chip market.
In light of rising costs and the risks of relying on a single supplier, major tech firms including Microsoft, Meta, and now OpenAI are exploring in-house and alternative chip supply options. OpenAI’s planned chip, capable of both training and running AI models, will initially be deployed on a limited scale, primarily for running models rather than training them.
To match the scope of Google’s or Amazon’s AI chip programs, OpenAI would likely need to hire hundreds more engineers. TSMC will fabricate the chip using its cutting-edge 3-nanometer process technology; the design features a commonly used systolic array architecture, high-bandwidth memory (HBM) similar to that in Nvidia’s chips, and extensive networking capabilities.