AI bros are at war over declarations that Google's upcoming Gemini AI model smashes OpenAI's GPT-4

Sam Altman is CEO of ChatGPT maker OpenAI. Issei Kato/Reuters
  • The researchers behind the SemiAnalysis blog say Google's upcoming AI model Gemini smashes GPT-4.
  • They argue, essentially, that Google will win AI because it has access to the most top-flight chips.

It's never fun for a CEO to hear their product might get thrashed by a competitor.

That might explain OpenAI boss Sam Altman's defensive response to a post published over the weekend titled: "Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors."

The post, by Dylan Patel and Daniel Nishball of research firm SemiAnalysis, argues that Google's anticipated Gemini AI model looks ready to blow OpenAI's AI model out of the water by packing in a heck of a lot more computing power.

Much of the analysis, which Patel said was based on data crunched from a Google supplier, boils down to Google having access to vastly more top-flight chips and its model outdoing GPT-4 on FLOPS, a measure of the raw computing power used to train it.

Gemini is a next-gen, multimodal AI model being worked on by researchers at Google's AI arm DeepMind, and is expected to be released later in 2023. It's the search giant's most serious effort yet at giving OpenAI's GPT-4 a run for its money.

"Incredible google got that semianalysis guy to publish their internal marketing/recruiting chart lol," Altman wrote in an X post on Monday, insinuating that material on Gemini's performance from inside Google, would, naturally, be flattering.

In response, SemiAnalysis' Patel posted on X that he got data on Google's GPU stores from a supplier of Google — rather than Google itself. He threw in an edited image of Google boss Sundar Pichai force-feeding Altman milk while holding him by the hair.

While the two trade barbs, the biggest internet forums following developments in AI have been busy dissecting the article. At the heart of the debate: Does having more advanced chips — needed to train AI models at scale — mean you end up with better models? The analysts described this as being "GPU rich" versus "GPU poor".

'"I hope someone dethrones OAI, soon. But to say Gemini Smashes GPT-4 by 5x makes it sound like it is 5x better than GPT-4, it's not, its 5x compute. Not necessarily correlated to quality," said an X user called @Teknium1.

"Computational power alone is not the only resource. It is also the training process itself … and, obviously, data and its quality," a Hacker News user wrote. "I will be convinced only after Google demonstrates that Gemini is better than GPT4 (in some, or all, tasks)."

Though OpenAI's ChatGPT has had a head start in terms of consumer awareness, other serious contenders, such as Meta's Llama 2, have entered the LLM arena. With Gemini due later in the year, 2023 may shape up to be even more consequential for progress in large language models than 2022, the year ChatGPT launched.

"Whether you agree with Patel or not, this is a bold and very falsifiable prediction," wrote one r/MachineLearning user. "2024 will be interesting."
