A perspective quietly taking hold in Silicon Valley could have significant consequences: the gains from large AI models, the kind expected to deliver human-level artificial intelligence in the near future, may be slowing down.
Since ChatGPT's explosive debut two years ago, AI proponents have maintained that improvements in generative AI would accelerate exponentially as major technology companies kept pouring in training data and computing power.
The underlying belief was that fulfilling the technology's potential was simply a matter of resources: with enough computing power and data, artificial general intelligence (AGI) would emerge, capable of matching or surpassing human-level performance.
The pace of progress in artificial intelligence has been so swift that prominent figures in the industry, including Elon Musk, have advocated for a temporary halt on AI research.
Nevertheless, major technology firms, including Musk’s own company, have continued to advance, investing tens of billions of dollars to maintain their competitive edge.
OpenAI, the organization behind ChatGPT and supported by Microsoft, has recently secured $6.6 billion to further its developments.
Meanwhile, Musk's AI venture, xAI, is reportedly in the process of raising $6 billion to acquire 100,000 Nvidia chips, the cutting-edge components that power advanced AI models, according to CNBC.
However, challenges are emerging on the path to achieving artificial general intelligence (AGI).
Industry experts are starting to recognize that large language models (LLMs) are not indefinitely scalable, even with increased power and data input.
Despite substantial financial investments, enhancements in performance are beginning to show signs of stagnation.
“Exorbitant valuations of companies such as OpenAI and Microsoft are primarily predicated on the belief that LLMs will evolve into artificial general intelligence through continued scaling,” remarked AI specialist and critic Gary Marcus. “As I have consistently cautioned, that notion is merely a fantasy.”
One significant obstacle is the limited availability of language-based data for training AI systems.
Scott Stevenson, CEO of Spellbook, a legal-AI firm that works with OpenAI and other providers, says that relying solely on language data for scaling is destined to hit a wall.
“Some laboratories have been overly focused on simply increasing the volume of language input, mistakenly believing it would lead to greater intelligence,” Stevenson noted.
Sasha Luccioni, a researcher and AI lead at the startup Hugging Face, contends that the slowdown in progress was foreseeable, given the companies’ emphasis on size rather than the purpose of model development.
“The quest for AGI has always been unrealistic, and the ‘bigger is better’ mentality in AI was destined to reach a limit eventually — and I believe this is what we are witnessing now,” she stated to AFP.
The AI sector disputes these interpretations, arguing that progress toward human-level AI is inherently hard to predict.
“There is no barrier,” stated OpenAI CEO Sam Altman in a post on X on Thursday, without providing further details.
Dario Amodei, CEO of Anthropic, which collaborates with Amazon to develop the Claude chatbot, expresses optimism: “If you observe the pace at which these capabilities are advancing, it suggests that we could reach that milestone by 2026 or 2027.”
- Time for contemplation -
However, OpenAI has delayed the launch of the highly anticipated successor to GPT-4, the model that underpins ChatGPT, because its performance has fallen short of expectations, according to sources cited by The Information.
Currently, the organization is concentrating on optimizing the use of its existing capabilities.
This strategic pivot is evident in their recent o1 model, which aims to deliver more precise responses through enhanced reasoning rather than relying solely on increased training data.
Stevenson remarked that OpenAI’s transition towards instructing its model to “allocate more time to thinking instead of merely responding” has resulted in “significant advancements.”
He compared the emergence of AI to the discovery of fire, suggesting that instead of merely adding more data and computational power, it is essential to leverage this innovation for targeted applications.
Stanford University professor Walter De Brouwer compares advanced large language models to students progressing from high school to university: “The AI infant was a chatbot that engaged in a lot of improvisation and was often error-prone,” he observed.
“The approach of homo sapiens, which involves careful consideration before action, is on the horizon,” he concluded.