Published 14:16 IST, October 30th 2024
OpenAI working with Broadcom, TSMC to build its first in-house AI chip
OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its artificial intelligence systems, a new report says.
OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to support its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet its surging infrastructure demands, sources told Reuters.
OpenAI, the fast-growing company behind ChatGPT, has examined a range of options to diversify its chip supply and reduce costs. OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as "foundries" for chip manufacturing.
The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design efforts, according to sources, who requested anonymity as they were not authorized to discuss private matters.
The company's strategy, detailed here for the first time, highlights how the Silicon Valley startup is leveraging industry partnerships and a mix of internal and external approaches to secure chip supply and manage costs like larger rivals Amazon, Meta, Google and Microsoft. As one of the largest buyers of chips, OpenAI's decision to source from a diverse array of chipmakers while developing its customized chip could have broader tech sector implications.
Broadcom stock jumped following the report, finishing Tuesday's trading up over 4.5%. AMD shares also extended their gains from the morning session, ending the day up 3.7%.
OpenAI, AMD and TSMC declined to comment. Broadcom did not immediately respond to a request for comment.
OpenAI, which helped commercialize generative AI that produces human-like responses to queries, relies on substantial computing power to train and run its systems. As one of the largest purchasers of Nvidia's graphics processing units (GPUs), OpenAI uses AI chips both to train models, where AI learns from data, and for inference, applying AI to make predictions or decisions based on new information. Reuters previously reported on OpenAI's chip design endeavors. The Information reported on talks with Broadcom and others.
OpenAI has been working for months with Broadcom to build its first AI chip focusing on inference, according to sources. Demand right now is greater for training chips, but analysts have predicted the need for inference chips could surpass them as more AI applications are deployed.
Broadcom helps companies including Alphabet unit Google fine-tune chip designs for manufacturing and also supplies parts of the design that help move information on and off chips quickly. This is important in AI systems where tens of thousands of chips are strung together to work in tandem.
OpenAI is still determining whether to develop or acquire other elements for its chip design, and may engage additional partners, said two of the sources.
The company has assembled a chip team of about 20 people, led by top engineers who have previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho.
Sources said that through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company to make its first custom-designed chip in 2026. They said the timeline could change.
Currently, Nvidia's GPUs hold over 80% of the AI chip market. But shortages and rising costs have led major customers like Microsoft, Meta and now OpenAI to explore in-house or external alternatives.
OpenAI's planned use of AMD chips through Microsoft's Azure, first reported here, shows how AMD's new MI300X chips are trying to gain a slice of the market dominated by Nvidia. AMD has projected $4.5 billion in 2024 AI chip sales, following the chip's launch in the fourth quarter of 2023.

Training AI models and operating services like ChatGPT are expensive. OpenAI has projected a $5 billion loss this year on $3.7 billion in revenue, according to sources. Compute costs, or the expenses for hardware, electricity and cloud services needed to process large datasets and develop models, are the company's largest expense, prompting efforts to optimize utilization and diversify suppliers.
OpenAI has been cautious about poaching talent from Nvidia because it wants to maintain a good rapport with the chip maker it remains committed to working with, especially for accessing its new generation of Blackwell chips, sources added.
Nvidia declined to comment.