Meta’s AI boss says LLMs not enough: ‘Human-level AI is not just around the corner’

Meta's chief AI scientist, Yann LeCun, said AI systems built on large language models weren't the path toward artificial general intelligence and downplayed notions that AI could be used to harm humans at scale.

Large language models (LLMs) such as ChatGPT and Claude aren’t going to lead to human-level artificial intelligence (AI) any time soon. At least not according to Yann LeCun, Meta’s chief AI scientist.

LeCun recently spoke with Time magazine about artificial general intelligence (AGI), a nebulous term for a theoretical AI system capable of performing any task, given the right resources.

While there’s no scientific consensus on what an AI system would need to do to qualify as AGI, LeCun’s boss, Meta CEO and founder Mark Zuckerberg, recently made waves by announcing that Meta was pivoting to AGI development.

“We’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence,” said Zuckerberg in a recent interview with The Verge.

Zuckerberg’s AGI, LeCun’s human-level AI

LeCun appears to disagree with Zuckerberg, at least semantically. Speaking to Time, LeCun said that he hated the term “AGI.” He prefers to call it “human-level AI,” pointing out that humans aren’t general intelligences either.

On the topic of LLMs, a class of AI that includes Meta’s Llama-2, OpenAI’s ChatGPT and Google’s Gemini, LeCun believes they haven’t even reached a cat’s level of intelligence, putting them nowhere near human intelligence.

“Things that we completely take for granted turn out to be extremely complicated for computers to reproduce,” he said. “So AGI, or human-level AI, is not just around the corner, it’s going to require some pretty deep perceptual changes.”

Unbridled AI optimism

LeCun also waxed philosophical about the ongoing debate over whether open-source AI systems, such as Meta’s Llama-2, pose a threat to humanity.

He outright dismissed the idea that AI posed an outsized threat. When asked, “What if a human, who has the urge to dominate, programs that goal into the AI?” LeCun submitted that if such a “bad AI” existed, then “you’ll have smarter, good AIs taking them down.”