While artificial intelligence is rapidly reshaping the tech landscape, Meta’s top AI scientist, Yann LeCun, believes the current wave of AI is still missing something fundamental: true intelligence.
During the AI Action Summit in Paris earlier this year, LeCun shared some critical insights into the state of AI development and why he believes today’s models fall short of what he considers “intelligent behavior.” According to LeCun, human-level intelligence isn’t just about generating fluent language — it’s about understanding, remembering, reasoning, and planning. These capabilities, he said, are largely absent from the large language models (LLMs) powering today’s most popular AI tools.
“There are four essential elements to intelligence that every smart animal — and certainly humans — possess,” LeCun explained. These include: grasping the physical world, having persistent memory, the ability to reason logically, and the capacity to plan complex, multi-step actions.
The implication is clear: while LLMs like ChatGPT or Meta’s own LLaMA models can generate impressive-sounding answers, they lack the cognitive architecture to truly understand or think in the way humans do. LeCun is advocating for a major shift in how AI is trained — away from the current model of endless text prediction and toward systems that are grounded in real-world experience.
At Meta, one solution being explored is the concept of “world models.” These would be AI systems that aren’t just mimicking language patterns but are learning to predict real-world outcomes. In other words, instead of simply guessing the next best word, these models would simulate possible futures, imagining how an action might change the world around them. That kind of ability, LeCun argues, is a cornerstone of human intelligence.
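The core idea can be sketched in a few lines. This is an illustrative toy, not Meta’s implementation: the `transition` dynamics, the 1-D grid world, and the greedy planner are all invented for the example. The point is the shift in what gets predicted: a world model predicts the next *state* given an action, and a planner searches over those simulated futures.

```python
# Toy "world model" sketch (hypothetical example, not Meta's system):
# the model predicts how an action changes the state, and a planner
# chooses actions by imagining their outcomes one step ahead.

def transition(state: int, action: int) -> int:
    """Stand-in for learned dynamics: move left/right on a bounded line."""
    return max(0, min(10, state + action))

def plan(start: int, goal: int, horizon: int = 12) -> list[int]:
    """Greedy planning: simulate each candidate action, keep the best."""
    state, actions = start, []
    for _ in range(horizon):
        if state == goal:
            break
        # Imagine each action's predicted outcome; pick the one
        # that lands closest to the goal.
        best = min((-1, 0, 1), key=lambda a: abs(transition(state, a) - goal))
        actions.append(best)
        state = transition(state, best)
    return actions

print(plan(2, 5))  # three steps to the right
```

A real world model would learn `transition` from sensory data and search much deeper, but the contrast with next-word prediction is the same: the objective is anticipating consequences, not completing text.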
He also pointed to Meta’s research into tools like Retrieval-Augmented Generation (RAG), which grounds AI outputs by retrieving relevant passages from external knowledge bases, and V-JEPA, a non-generative model trained to predict masked portions of video in an abstract representation space rather than reconstructing raw pixels. These approaches, LeCun believes, are small but meaningful steps toward more grounded, contextual AI.
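The RAG idea itself is simple to illustrate. The sketch below is a minimal, assumption-laden version: the tiny document list, the bag-of-words similarity, and the function names are all invented for the example, and a production pipeline would use learned dense embeddings and a vector index. What it shows is the shape of the technique: retrieve the most relevant passage, then hand it to the generator as context.

```python
# Minimal RAG sketch (illustrative only; real systems use learned
# embeddings and a vector database rather than bag-of-words overlap).

from collections import Counter
import math

# Hypothetical knowledge base the generator can draw on.
DOCS = [
    "The Eiffel Tower is located in Paris, France.",
    "V-JEPA is a non-generative video model from Meta AI.",
    "Mistral is a Paris-based AI company.",
]

def bow(text: str) -> Counter:
    """Bag-of-words vector with basic punctuation stripped."""
    return Counter(w.strip("?.,").lower() for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the document most similar to the query."""
    return max(DOCS, key=lambda d: cosine(bow(query), bow(d)))

def augmented_prompt(query: str) -> str:
    """Prepend retrieved evidence so the model answers from context."""
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(augmented_prompt("What is V-JEPA?"))
```

The design point is that the generator no longer has to rely solely on what it memorized during training; the retrieved context supplies fresh, checkable evidence at answer time.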
Yet even as Meta works to reimagine AI’s foundations, it’s facing internal turbulence. According to Business Insider, the company is dealing with a significant “brain drain” from its AI research division. Of the 14 researchers who authored the first LLaMA model back in 2023, only three remain at Meta. Many of the departed have jumped ship to promising upstarts like Mistral, a Paris-based AI firm founded by ex-Meta researchers, now seen as one of the rising stars in the European AI scene.
That exodus may partially explain why Meta’s most recent release, Llama 4, hasn’t made the splash the company hoped for. Developers are increasingly drawn to faster-evolving competitors like OpenAI’s GPT-4o, Google’s Gemini 2.5 Pro, and Anthropic’s Claude Sonnet 4, all of which are touted for their advanced reasoning capabilities and user experience improvements.
To make matters more complicated, The Wall Street Journal reported on May 15 that Meta is holding back the public launch of its Llama 4 “Behemoth” model. Whether that delay is due to strategic recalibration, technical bottlenecks, or team shakeups remains unclear.
Still, LeCun remains focused on the long game. His call for building more human-like AI — models that don’t just speak fluently but reason, remember, and plan — highlights a deeper debate within the industry. Is it better to race ahead with powerful but limited models? Or slow down and aim for something more cognitively aligned with how humans interact with the world?
For Meta, the answer could determine whether it remains a dominant player in the AI race — or gets overtaken by leaner, more innovative rivals. Either way, LeCun’s vision makes one thing clear: in the future of AI, brains might matter just as much as data.