Massive LLMs like Google's forthcoming Gemini could be a rare breed as generative AI enters a downsizing period

Hasan Chowdhury
  • Google is preparing to release a massive LLM called Gemini.
  • But an LLM of its size could become a rare sight.

There's a ton of suspense heading into the fall as Silicon Valley awaits the arrival of a colossal new AI model from Google, which aims to rival the huge model behind OpenAI's ChatGPT.

It's a release that may become an exceedingly rare occurrence since the AI sector is gearing up for a significant downsizing period.

To date, the generative AI boom has been driven by algorithms known as large language models (LLMs). They're described as "large" because they're built to process vast volumes of data from the internet. That scale is what makes the responses of apps like ChatGPT feel so human.

For a sense of how big these models are, consider OpenAI's GPT series. GPT-4, the latest AI model from the ChatGPT creator, is thought to have been trained on more than a trillion units of text known as tokens.

To compete with GPT-4, Google's upcoming Gemini model — which is nearing release as a small group of companies begins to test it out, The Information reported — could be trained on a scale of data well beyond that.

Going ever bigger has looked like an unlikely path forward for some time, with OpenAI CEO Sam Altman suggesting earlier this year that "we're at the end of the era where it's going to be these, like, giant, giant models." It's becoming increasingly clear why.

First, building massive models is an expensive business. In the same remarks, Altman suggested the cost of training GPT-4 was above $100 million, WIRED reported.

Second, these models have been plagued with issues such as biases, factual errors, and hallucinations. These have made the models a point of regulatory concern for lawmakers who worry about the destabilizing effects they may have on the web as a source of accurate information.

Third, companies seeking to tap the benefits of generative AI may harbor concerns about how well-protected their sensitive data might be if it's fed into a model that is processing data from everywhere else. It's why several companies have issued bans on ChatGPT usage.

For Dr Ebtesam Almazrouei, acting chief researcher and executive director at the Technology Innovation Institute's AI Cross Center Unit, the focus in the future won't necessarily be about the quantity of data processed by an AI model. "What matters is quality," she told Insider.

Though Almazrouei's Abu Dhabi-based institute released a new LLM known as Falcon 180B this month — one that is 2.5 times the size of Meta's Llama 2 — she recognized the importance of fine-tuning LLMs to meet customer needs for something more "specialized."

"Specialized LLMs is where we are going if we are concerned about gaining the benefit that we are aiming for from these large language models in different domains and industries," she said.

Generalized, massive LLMs like GPT-4 and Gemini will likely continue to have their place, but we can still expect a significant downsizing to begin as companies demand AI that's tailor-made for them.
