AGI is on the radar but far from reality, says Gartner • The Register

Gartner warns that artificial general intelligence (AGI) is at least ten years away and may never become a reality. It may not even be a worthwhile endeavor, the analyst firm says.

AGI has become a controversial topic in recent years, with developers of large language models (LLMs) such as OpenAI making bold claims that they have created a near-term path to human-like intelligence. At the same time, others from the discipline of cognitive science have scorned the idea, arguing that the concept of AGI is poorly understood and the LLM approach is inadequate.

Gartner says its Hype Cycle for Emerging Technologies 2024 distills “key insights” from more than 2,000 technologies into a concise list of the most important, must-know emerging technologies with the potential to deliver benefits over the next two to ten years.

The consulting firm notes that GenAI – the subject of enormous industry hype and billions of dollars in investment – is about to slide into the dreaded “trough of disillusionment.” Arun Chandrasekaran, distinguished VP analyst at Gartner, told The Register:

“The expectations and hype around GenAI are enormously high. So it’s not that the technology itself is bad, but it cannot meet the high expectations of companies due to the enormous hype that has been created in the market over the last 12 to 18 months.”

In the long term, however, GenAI is likely to justify the investment, Chandrasekaran said. “I remain convinced that GenAI will have a significant impact in the long term, but we may have overestimated in some ways what it can do in the short term.”

And as for the short-term outlook? There will inevitably be some twists and turns and bumps along the way. AI expert Gary Marcus wrote an article earlier this month claiming that the “collapse of the generative AI bubble in a financial sense could be imminent.”

“Certainly generative AI itself is not going to disappear. But it could well be that investors stop spending as much money as they have been, enthusiasm could wane, and many people could lose their shirts. Companies currently worth billions could be sold or broken up into pieces.”

This is based on his view that there is “no robust solution to hallucinations,” that enterprise adoption of the technology remains tentative, and that it is generating only “modest profits.”

Previous Gartner research suggests that AI in office productivity software is still two years away from mainstream adoption. As of March, Microsoft was still trying to convince its customers of the productivity benefits.

Also included in Gartner’s hype cycle for emerging technologies is AGI, which the consultancy says is climbing toward the “peak of inflated expectations” and is more than ten years away from delivering impact.

Chandrasekaran told us this is the first time AGI has appeared in the hype cycle. “Users were asking for it, so we needed an opinion. We’re not going to get to AGI anytime soon. That’s not what we’re seeing here at all. All we’re seeing essentially is that AGI is a goal for a lot of these AI research labs, but it’s going to take a tremendous amount of effort.”

It remains unclear whether the LLM research labs are taking the right approach. “There is a belief that as the models get bigger and bigger, we’ll eventually get to AGI, but I don’t think that’s going to be the case,” Chandrasekaran said. “We need to think about how we bring some of these concepts, like reasoning, into the models. We also need to get the models to learn about the world the way humans learn about the world, which is through our senses.”

He argued that there is no clear consensus within the research community on whether AGI is a worthwhile goal. “Even the timeline for achieving that goal, or even what AGI means, is uncertain. I believe that machines are good at certain things and humans are good at certain things, and I don’t know whether trying to create a machine that thinks and acts like a human is the most desirable or optimal goal.” ®
