To offer generative AI models, companies must rely on very expensive specialized chips (a market in which Nvidia is the leader). To keep these costs in check, several groups have developed alternatives that they use in part of their data centers. Google Cloud, for example, already runs its in-house Tensor Processing Units (Cloud TPUs) alongside Nvidia chips. In 2023, Microsoft unveiled its Maia 100 chip, which it can deploy on some of its Azure servers. Meta, for its part, has developed its MTIA chips with the same goal.

OpenAI reportedly set to follow suit

Unsurprisingly, OpenAI, the company behind ChatGPT, is expected to adopt a similar strategy in order to diversify its chip supply. In a recently published article, Reuters reports that OpenAI has abandoned an initial plan to build its own chip foundry. It would, however, still design a chip in-house, which could later be produced with the American company Broadcom and the Taiwanese manufacturer TSMC. Production of these chips is reportedly targeted for 2026, and OpenAI may also bring other partners onto the project.

Objective: reduce AI costs?

The ChatGPT creator’s goal would reportedly be to run data centers that draw on several chip suppliers: while continuing to use Nvidia chips, it would also like to add AMD chips as well as its internally designed ones. The team developing OpenAI’s chips is currently said to number about 20 people, including former Google employees who worked on the Tensor Processing Units (TPUs). For now, of course, this information should be treated with caution. Still, it makes sense for OpenAI to seek to reduce the cost of training and running its AI models: the company is not yet profitable, and computing power and energy reportedly account for the bulk of its expenses today.

  • Chatbots like ChatGPT or Gemini use specialized AI chips that are very expensive to operate
  • To reduce these costs, some groups have decided to develop their own chips instead of relying exclusively on chips from Nvidia (which is the leader in this market)
  • OpenAI is reportedly considering following suit, relying on a partnership with Broadcom and TSMC
  • OpenAI’s AI chips could arrive in 2026
