The AI industry just witnessed a seismic shift. Yann LeCun—Turing Award laureate, godfather of convolutional neural networks, and former Chief AI Scientist at Meta—has launched Advanced Machine Intelligence Labs (AMI Labs) with $1.03 billion in seed funding. That’s the largest seed round in European history.
But here’s what makes this genuinely significant: LeCun isn’t building another chatbot. He’s building AI that understands the actual, physical world.
Let me break down what’s happening, why it matters, and what it tells us about where artificial intelligence is actually headed.
This analysis draws from multiple sources covering the AMI Labs announcement, including reporting from Sifted, The French Tech Journal, Crunchbase News, and official investor statements.
The Departure That Shook Meta
LeCun spent over a decade at Meta, overseeing the development of PyTorch and the Llama model series. He built FAIR (Fundamental AI Research) into one of the most respected AI research divisions in the world. Then, in late 2025, he walked away.
The reason? A fundamental disagreement about the path to artificial general intelligence.
While the rest of Silicon Valley was throwing billions at scaling language models, LeCun was increasingly vocal that this approach was a dead end. His argument was blunt: large language models lack the reasoning capacity of even a common house cat. They’re impressive pattern matchers trained on text, but text is a narrow slice of human knowledge.
Think about it this way—a child absorbs more data about the physical world in the first few years of life than an LLM sees in the entire corpus of internet text. The child experiences continuous sensory data from a three-dimensional environment. The LLM gets tokens.
LeCun reportedly told Meta CEO Mark Zuckerberg that reaching human-level intelligence through scaling LLMs was “complete nonsense” and that he could achieve the necessary breakthroughs “faster, cheaper, and better” outside the company.
So he left. And he took a significant portion of FAIR’s talent with him.
What AMI Labs Is Actually Building
The technical heart of AMI Labs is something called the Joint Embedding Predictive Architecture, or JEPA. If that sounds academic, let me explain why it matters.
Current generative AI systems—the ones powering ChatGPT, Midjourney, and their competitors—work by predicting the next piece of data. For language models, that’s the next word. For image generators, it’s reconstructing pixels. The problem is they’re trying to predict everything, including irrelevant noise.
JEPA takes a fundamentally different approach. Instead of predicting raw data, it predicts representations of future states. An encoder maps an input (like a video frame) into a compressed representation. The predictive model then calculates what the next state’s representation should look like, given the current state and an action.
Why does this matter? Because it allows the system to ignore unpredictable details—the specific texture of rain, the random movement of grass in the wind—while accurately forecasting what actually matters: the trajectory of a pedestrian about to cross the street, the motion of a robotic arm picking up an object.
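The encoder/predictor loop can be sketched in a few lines of Python. This is a deliberately toy illustration of the JEPA idea, not AMI Labs' actual architecture: the "observation" is a hypothetical (position, noise) pair, the encoder keeps only the predictable part, and the loss is computed in representation space—so the noise channel never enters it.

```python
import random

def encode(observation):
    """Hypothetical encoder: keep the predictable part of the input
    (a position) and discard the unpredictable part (texture noise)."""
    position, _noise = observation
    return position

def predict(representation, action):
    """Hypothetical predictor: forecast the next state's representation
    from the current representation and an action."""
    return representation + action

def jepa_loss(obs_t, action, obs_next):
    """Score the prediction in representation space, not pixel space."""
    predicted = predict(encode(obs_t), action)
    target = encode(obs_next)
    return (predicted - target) ** 2

# A pedestrian at position 3.0 steps forward by 1.0. The second channel is
# pure noise that a pixel-level loss would waste capacity trying to model;
# here it never reaches the loss at all.
obs_t = (3.0, random.random())
obs_next = (4.0, random.random())
print(jepa_loss(obs_t, 1.0, obs_next))  # 0.0 regardless of the noise
```

A generative model would be penalized for failing to reproduce the noise channel; a JEPA-style loss is blind to it by construction.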
This is the difference between a system that can describe a warehouse and a system that can actually work in one.
The World Model vs. Language Model Divide
The distinction between world models and language models isn’t just technical—it’s philosophical.
Language is discrete and low-dimensional. It’s composed of a finite set of tokens organized according to grammatical rules. Predicting the next token is a powerful shortcut for mimicking intelligence, but it doesn’t require understanding the underlying reality.
The physical world is continuous, high-dimensional, and noisy. Sensory data from cameras and LIDAR don’t come with pre-defined tokens. They’re a flood of raw signals that must be interpreted through a model of physics and causality.
Consider the implications for different industries:
| Feature | Large Language Models | AMI World Models |
|---|---|---|
| Training Data | Text, code, books (symbolic) | Video, sensors, interactions (physical) |
| Predictive Unit | Next token (discrete) | Future state in representation space (continuous) |
| Reasoning | Probabilistic pattern matching | Simulation-based cause and effect |
| Operational Field | Screens, digital agents | Robotics, physical agents |
In creative writing, a “hallucination” might be viewed as artistic license. In a hospital or a factory, it’s a catastrophic failure. AMI Labs is building systems where reliability and controllability are architectural features, not afterthoughts bolted on through post-training filters.
Who’s Backing This Bet
The $1.03 billion seed round reads like a who’s who of global strategic capital:
Financial leads: Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Jeff Bezos’s personal investment vehicle.
Semiconductor and infrastructure: NVIDIA—which is particularly noteworthy. While NVIDIA’s recent growth was fueled by LLM training demand, their investment in AMI Labs signals a strategic pivot toward “Physical AI.” World models processing continuous sensor data require different compute profiles than text-based transformers.
Industrial and consumer tech: Samsung, Toyota Ventures, Sea, and Dassault. These backers gain early access to world models for robotics and automotive applications.
Sovereign and institutional: Temasek, Bpifrance (France’s sovereign investment arm), SBVA, and Shorooq. For these investors, AMI Labs represents strategic autonomy in AI—an alternative to US-dominated LLM infrastructure.
Private luminaries: Jeff Bezos, Eric Schmidt, Mark Cuban, Xavier Niel, and Tim Berners-Lee.
The geographic distribution—spanning France, Germany, the UK, the US, Singapore, South Korea, and the UAE—highlights a global consensus that world models represent a significant technological frontier.
The Leadership Team
AMI Labs isn’t a research vanity project. The leadership team is built for execution:
Alexandre LeBrun (CEO): Serial entrepreneur with deep NLP and clinical AI expertise. Founded Wit.ai (acquired by Meta) and Nabla, a healthcare AI company. He’s transitioning from Nabla CEO to AMI CEO while remaining Nabla’s Chairman—signaling immediate focus on high-stakes, real-world deployment.
Yann LeCun (Executive Chairman): The scientific vision. He retains his academic post at NYU, maintaining the bridge between fundamental research and corporate application.
Saining Xie (Chief Science Officer): NYU professor and former Meta (FAIR) research scientist, with expertise in visual representation learning. Key contributor to the ResNeXt and Diffusion Transformer (DiT) models.
Pascale Fung (Chief Research and Innovation Officer): Former Meta senior director and HKUST professor, specializing in human-centered AI and multimodal systems.
Mike Rabbat (VP of World Models): Former Meta research science director, tasked with implementing JEPA at scale.
The team started at roughly 12 researchers with plans to scale to 50 within six months. Lean but elite.
Where World Models Will Land First
AMI Labs has identified clear industry verticals where LLM limitations are most consequential:
Robotics and Manufacturing
Current industrial robots often run on pre-programmed scripts or simple reinforcement learning models that struggle in dynamic environments. World models could give robots “common sense”—understanding of object permanence, gravity, and the likely consequences of their movements.
In manufacturing, this means adaptive industrial process control and automated assembly where machines adjust to variations without manual re-programming. Toyota and Samsung are particularly interested in how this applies to domestic robots that can navigate homes as effectively as humans.
Healthcare
AMI Labs has established an exclusive partnership with Nabla. The vision goes beyond ambient AI documentation to what they’re calling “Agentic Healthcare AI”—systems that don’t just generate text but autonomously execute multi-step workflows across fragmented hospital infrastructures.
The ability to process continuous multimodal data—physiological monitoring, imaging, audio—makes world models uniquely suited for clinical environments, where a hallucination isn't just embarrassing; it's dangerous.
Automotive and Aerospace
Level 5 autonomous driving requires a vehicle to not only see the road but understand the intent of other drivers and pedestrians. A world model can predict future positions of objects in representation space, providing more robust planning than current computer vision systems.
The European AI Moment
AMI Labs being headquartered in Paris isn’t accidental. For years, European investors and policymakers have worried about the continent’s dependence on American AI platforms. LeCun is positioning Europe to lead what he calls the “second wave” of AI—moving from digital assistants to physical automation.
This plays to European strengths: industrial production, high-precision engineering, and robotics. The commitment to open source is both scientific conviction and strategic positioning—establishing JEPA as the “Linux of AI” and building a global ecosystem that could offset the compute and data advantages of larger incumbents.
What This Means Going Forward
LeCun and LeBrun have been clear that AMI Labs isn’t a typical applied AI startup. The first year is dedicated to fundamental R&D and scaling their foundational model (tentatively named “AMI Video”). Corporate partnerships and industrial deployment come in 2027 and beyond.
The two biggest technical hurdles they’re tackling:
Persistent memory: Most LLMs have a context window that functions as short-term memory. Once a conversation exceeds it, the model “forgets.” AMI Labs is designing systems with persistent latent memory that accumulates knowledge over time—like humans and animals do.
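As a rough sketch of that idea (the class and update rule below are my assumptions, not AMI Labs' design), persistent latent memory can be thought of as a state vector that blends in every new observation embedding and is never reset—unlike a context window, nothing falls off the end:

```python
# Minimal sketch of persistent latent memory: a state that accumulates
# knowledge across observations via an exponential moving average.
class LatentMemory:
    def __init__(self, size, decay=0.9):
        self.state = [0.0] * size   # accumulated knowledge, never reset
        self.decay = decay          # how slowly old knowledge fades

    def update(self, embedding):
        """Blend a new observation embedding into the persistent state."""
        self.state = [self.decay * s + (1 - self.decay) * e
                      for s, e in zip(self.state, embedding)]

    def recall(self):
        return self.state

mem = LatentMemory(size=2)
for _ in range(100):                # knowledge accumulates across episodes,
    mem.update([1.0, 0.0])          # not within a single context window
print(mem.recall())                 # state converges toward [1.0, 0.0]
```

Real systems would learn what to store rather than use a fixed decay, but the contrast with a context window holds: the state persists across interactions instead of being discarded when a conversation ends.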
Hierarchical planning: The ability to break a complex goal into sequences of sub-tasks, simulating them in representation space and adjusting in real-time as the environment changes. This is the prerequisite for true autonomy in robotics and long-range agentic behavior.
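A minimal sketch of that loop, with an assumed one-dimensional latent "world" and toy function names: decompose the gap to the goal into bounded sub-steps, and check each one against a forward model before committing to it.

```python
def simulate(state, action):
    """Forward model: predict the next latent state given an action."""
    return state + action

def plan(state, goal, max_step=1.0):
    """Break the distance to the goal into bounded sub-tasks, simulating
    each step in representation space before adding it to the plan."""
    actions = []
    while abs(goal - state) > 1e-9:
        step = max(-max_step, min(max_step, goal - state))
        # Verify in simulation that the sub-task makes progress toward the goal.
        assert abs(simulate(state, step) - goal) <= abs(state - goal)
        actions.append(step)
        state = simulate(state, step)
    return actions

print(plan(0.0, 3.0))  # [1.0, 1.0, 1.0]
```

Because each step is validated against the forward model, re-planning when the environment changes is just calling `plan` again from the newly observed state—the "adjusting in real-time" part of the description above.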
The long-term economic implication is what they call the transition from “Software as a Service” to “Service as a Software”—activities traditionally performed by humans (coordinating clinical trials, managing supply chains, performing data center maintenance) automated by AI agents with robust understanding of real-world constraints.
The Bottom Line
The launch of AMI Labs marks the beginning of the “Autonomous AI” era. Whether LeCun’s thesis is correct—that the path to AGI runs through physical-world understanding rather than text prediction—remains to be proven. But with a billion dollars in backing from strategic investors who aren’t in the business of charity, the market is clearly betting on world models as the foundation for the next industrial revolution.
If you’re in robotics, healthcare, manufacturing, or automotive, this is worth watching closely. The chatbot era may have been the warm-up act.
Want to stay ahead of AI developments that actually matter for your business? Subscribe to the Sola Fide newsletter for strategic technology insights delivered weekly.