Andrew Ng Joins Ongoing Row on AGI, Calls it ‘Limited’
Ng believes that AGI is a distant possibility, unlike other leading voices in technology who feel it will emerge in the coming years.
[Image source: Pankaj Kirdatt /MITSMR Middle East]
On the sidelines of the AI Developers Conference held last month in New York, British-American computer scientist Andrew Ng shared his views on what artificial general intelligence (AGI) is and how it can be achieved. “The tricky thing about AI is that it is amazing and it is also highly limited,” he told NBC News, adding that “understanding that balance of how amazing and how limited it is—that’s difficult.”
While largely bullish about AI's trajectory, he expressed doubts that the technology will broadly displace the human workforce and said AGI remains a distant possibility, in contrast to other leading voices in the technology sector who expect it to take shape in the coming years.
“I look at how complex the training recipes are and how manual AI training and development is today, and there’s no way this is going to take us all the way to AGI just by itself,” Ng said.
The conversation around AGI is not new. During a recent episode of Scientific Controversies, Meta's former chief AI scientist Yann LeCun noted that today's advanced models still lack a genuine understanding of how the world works, making the prospect of AI thinking like humans a distant one.
He further explained that AI tasks rely primarily on symbol manipulation, in which models search through different combinations of symbols to find the correct output. As a result, language models perform much better in organised environments. “We think of ourselves as being general, but it’s simply an illusion because all of the problems that we can apprehend are the ones that we can think of,” LeCun said.
The former Meta AI leader added that no aspect of human intelligence can be considered general; rather, it is ‘super-specialized.’ Human intelligence, he said, is shaped by its distinctive efficiency at handling the physical world and social interaction.
Meanwhile, Google DeepMind CEO and Nobel laureate Demis Hassabis publicly disagreed with LeCun’s views, calling him “plain incorrect.”
Hassabis said that LeCun was conflating general intelligence with universal intelligence.
“Obviously, one can’t circumvent the no free lunch theorem, so in a practical and finite system, there always has to be some degree of specialisation around the target distribution that is being learnt… But the point about generality is that in theory, in the Turing Machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory (and data), and the human brain (and AI foundation models) are approximate Turing Machines,” the DeepMind head said, pointing out that brains were indeed “extremely general.”
Sharing his two cents, Tesla and SpaceX CEO Elon Musk sided with Hassabis, writing simply, “Demis is right.”
LeCun later responded to Hassabis’ criticism, calling the disagreement largely semantic. “I object to the use of ‘general’ to designate ‘human level’ because humans are extremely specialized,” he replied.
“You may disagree that the human mind is specialized, but it really is. It’s not just a question of theoretical power but also a question of practical efficiency. Clearly, a properly trained human brain with an infinite supply of pens and paper is Turing complete. But for the vast majority of computational problems, it’s horribly inefficient, which makes it highly suboptimal under bounded resources (like playing a chess game),” he added.



