
Sam Altman wants to solve the great AI chip shortage himself

Hasan Chowdhury   

  • Demand for AI chips is heavily outstripping supply right now.
  • Sam Altman doesn't want OpenAI to be slowed down by a shortage of compute power.

Sam Altman doesn't want OpenAI to be derailed by something as simple as a lack of microchips.

But shortages of such chips, integral to the development of AI, have caused him and many other execs at tech companies seeking to emulate OpenAI's success enormous headaches.

Since the launch of ChatGPT, demand for graphics processing units (GPUs) and other high-end chips has soared, with existing supply lines unable to keep up.

Altman went as far as calling the crunch a "brutal" one in November, while Microsoft even identified the shortages as a potential risk factor for investors to consider.

Currently, these chips are sourced from a select few companies that control the market for AI processors. Nvidia wields the most power thanks to demand for its H100 GPUs, a much-sought-after piece of kit that can cost over $40,000 each.

But apparently, Altman is done relying solely on them: he's ready to start making his own.

That's according to a report from Bloomberg, which stated that Altman had been busy pitching heavyweight investors to back a new AI chip venture that would give his company a lot more control over its chip supply.

Building a new supply line

Typically, companies like Nvidia only design chips, outsourcing their manufacture to specialists that operate expensive and complex fabrication plants, such as Taiwan's TSMC.

But a bunch of tech companies have started designing their own. In November, for instance, Microsoft unveiled its new Azure Maia AI chip, designed with large language model (LLM) training in mind. Google, meanwhile, unveiled a new chip design last year for a similar purpose.

"I think the magic of capitalism is doing its thing here," Altman told the FT in an interview in November when asked about efforts to rival Nvidia's chips. "A lot of people would like to be Nvidia now."

For Altman, however, it seems emulating Nvidia isn't the only goal.

Though the OpenAI CEO had been weighing his own designs, per a report in October, the Bloomberg report suggested Altman's plans would involve not only designing chips but setting up an entire network of fabrication plants to manufacture them too.

Altman would also work with "top chip manufacturers" as part of the plan. How they would work together remains unclear but it was reported the network would be "global in scope."

The plan represents a do-it-yourself approach that would set his company apart from the rest by making it vertically integrated.

That would mean OpenAI would exercise full control over the chip-making process while rivals continue to chase supply from third parties or outsource manufacturing of their own designs to others.

It's an incredibly costly plan that's by no means guaranteed to succeed; Bloomberg noted the talks are still in their early stages.

Hurdles for Altman would include securing the billions of dollars and expertise needed to build state-of-the-art fabrication facilities, as well as technical challenges in coming up with designs that can rival Nvidia's H100 units.

Despite this, Altman's plan has already won fans.

Adam Niewinski, managing partner at OTB Ventures, told Business Insider that Altman's potential investment in chip technology was a dual-purpose move.

"It's essential for the evolution of AI and represents a potentially lucrative investment opportunity given the massive anticipated demand in this sector," he said.

Stockpiling the strategy for now

While Altman's grand plan may address shortages in 10 years' time and beyond, for now many execs are scrambling to secure as many chips as they can get their hands on.

Mark Zuckerberg proved as much in a video last week, in which he outlined Meta's plans to secure around 350,000 Nvidia H100s, or "600,000 H100 equivalents of compute," by the end of the year in order to build "an absolutely massive amount of infrastructure" that supports his long-term goal of building open source AGI.


"It's become clearer that the next generation of services requires building full general intelligence," he said. "Building the best AI assistants, AIs for creators, AIs for businesses and more – that needs advances in every area of AI."
