World Models AI: The $1 Billion Bet That LLMs Are Wrong

Yann LeCun raised $1.03 billion for AMI Labs to build world models AI. His claim: LLMs are a dead end. Here is why this matters for every builder.



World models AI could change every tool you use in the next five years. That is the big bet behind AMI Labs. Yann LeCun left Meta, raised $1.03 billion, and said the whole LLM setup is broken at its core. He might be right. And the smart money agrees.

In late 2025, the Turing Award winner said this on stage: “The path to super smart AI via LLMs is total nonsense. It will never work.” Four months later, he got a billion dollars to prove it. This is the biggest split in AI right now.

The Core Case Against Large Language Models

LeCun’s point is not about model size or training data. It goes much deeper. Language models learn patterns in text. Text is a compressed description of the real world. But it is not the real world. That gap is the whole problem.

Think of a simple case. An LLM reads millions of physics books. It can recite Newton’s laws by heart. But it does not “get” that pushing a table moves the cup on it. It has stored words, not real links between things.

LLMs learn what goes together, not what causes what. They guess the next word based on patterns. This looks like thinking. In truth, the model has no inner map of how the real world works at all.

As a result, LLMs fail at tasks that need real logic. They make up facts because they have no tie to the real world. They cannot plan well because plans need cause-and-effect thinking. These are not bugs. They are design limits baked into the setup.

World Models AI Uses a Totally New Setup

AMI Labs is building world models AI based on JEPA. That stands for Joint Embedding Predictive Architecture. LeCun first pitched this idea in 2022. Now he has the cash to build it at full scale. The goal is bold: build AI that gets the real world.

Instead of guessing the next word, world models guess how real life will change. They learn compact maps of how physical spaces work. Then they predict what comes next based on deep structure, not just surface-level patterns.

Here is the key split. Give an LLM the phrase “I dropped the glass and it…” The model guesses the next word: “broke.” That is pattern matching at work. Give a world model a 3D scene of a glass falling. It maps the path, the impact force, and the result based on physics.
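The glass example above can be sketched in a few lines. This is a toy illustration, not either system’s real implementation: the next-word lookup stands in for an LLM’s learned probabilities, and the free-fall formula plus a made-up break-speed threshold stand in for a learned simulator.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def llm_style_guess(prompt: str) -> str:
    """Pattern matching: return the statistically likely next word."""
    # A toy lookup standing in for next-token probabilities.
    patterns = {"I dropped the glass and it": "broke"}
    return patterns.get(prompt, "?")

def world_model_style(height_m: float, break_speed: float = 2.0) -> str:
    """Simulate the physics: impact speed from free fall, then outcome."""
    impact_speed = math.sqrt(2 * G * height_m)  # v = sqrt(2gh)
    return "breaks" if impact_speed >= break_speed else "survives"

print(llm_style_guess("I dropped the glass and it"))  # -> broke
print(world_model_style(1.0))    # ~4.4 m/s impact -> breaks
print(world_model_style(0.05))   # ~1.0 m/s impact -> survives
```

The point of the contrast: the first function can only replay what it has seen, while the second answers questions it was never shown, like a drop from five centimeters, because it models the cause.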

JEPA models skip raw pixels and raw text. Instead, they predict what things mean at a higher level. This matters because raw guessing wastes compute. Predicting meaning is far leaner than predicting every tiny detail in a frame.
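A minimal numerical sketch of that idea, with toy random weights in place of trained networks: both frames pass through a shared encoder, and the loss compares an 8-number embedding, not the 64-number raw frame. The dimensions and the single linear layers are illustrative assumptions, not the actual JEPA design.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(obs: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map a raw observation to a compact embedding (one linear layer here)."""
    return np.tanh(W @ obs)

def predictor(z: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Predict the NEXT embedding from the current one."""
    return P @ z

# Toy dimensions: 64-dim "frames", 8-dim embeddings.
obs_dim, emb_dim = 64, 8
W = rng.normal(size=(emb_dim, obs_dim)) * 0.1  # shared encoder weights
P = rng.normal(size=(emb_dim, emb_dim)) * 0.1  # predictor weights

frame_t = rng.normal(size=obs_dim)      # current frame
frame_next = rng.normal(size=obs_dim)   # next frame

z_next = encoder(frame_next, W)         # target embedding
z_pred = predictor(encoder(frame_t, W), P)  # predicted embedding

# JEPA-style loss: distance in embedding space (8 numbers),
# never a pixel-by-pixel reconstruction of the frame (64 numbers).
loss = float(np.mean((z_pred - z_next) ** 2))
print(f"latent prediction error: {loss:.4f}")
```

The design choice the sketch captures: the model is only graded on what the next frame means, so it can ignore unpredictable detail like leaf textures or sensor noise.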

In simple terms, world models learn the rules of the real world. They skip the words we use to talk about it. A two-year-old gets cause and effect after a few years of play. LLMs never reach this level no matter how much data they eat.

Inside the Record $1.03 Billion Seed Round

AMI Labs raised its seed round at a $3.5 billion pre-money valuation. This is the largest seed round in European startup history. The firm is based in Paris. LeCun is the executive chair. He also still teaches at NYU.

The investor list shows where the smart cash is heading. Nvidia put in money as a strategic partner. Samsung and Toyota Ventures joined too. On the personal side, Eric Schmidt, Mark Cuban, and Tim Berners-Lee all wrote checks. These are not trend chasers.

Bezos Expeditions also got in on the deal. When Jeff Bezos bets on a firm that wants to kill the whole LLM model, it means something. These backers are not chasing hype. They are guarding against the chance that LLMs hit a wall soon.

The CEO is Alex LeBrun. He used to work at Meta. Before that, he started Nabla, a health AI firm. LeBrun has been very honest about the risks. He says “world models” will turn into the next hot buzzword in six months. Every AI firm will claim to use them.

Why This Fight Matters Even If LeCun Loses

There is a real chance that world models AI does not work at big scale. JEPA has not been tested in the real world yet. Going from a lab paper to a real product is very hard. LeCun himself says so. No one knows if this will pan out.

But the attempt alone creates huge value. First, it makes the LLM camp deal with known flaws. Making up facts, bad logic, and no grip on the real world are real issues. Pressure from a well-funded rival speeds up fixes.

Second, mixed approaches will likely show up. Picture AI that joins LLM language skills with world model logic. In fact, some teams are testing this blend right now. The best AI of 2028 might use both setups at once.

Third, this deal proves that investors will fund new AI ideas. For years, LLM scaling took all the money. Other good paths starved for cash. AMI Labs shows there is room for big bets on new ways of doing things.

Fourth, it shifts the whole debate. For the past three years, AI meant LLMs. That was it. Now we have a credible other path with a billion dollars behind it. That opens doors for every researcher with a non-LLM idea.

What This Means for AI Builders and Founders

If you build on LLMs today, stay calm. LLMs will still be useful for years. They are great at text tasks, code writing, and creative work. Nothing AMI Labs ships will replace ChatGPT next quarter or even next year.

But pay attention to the big picture. LLMs have known caps. Making up facts is not a data issue. Logic failures are not a scale issue. These are design limits that more compute cannot fix. That is LeCun’s whole point.

So, smart builders should stay open to new setups. Do not tie your whole firm to one AI model type. Build systems that can swap the brain inside. Design around what the tool can do, not which model family it comes from.
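One way to “build systems that can swap the brain inside” is to code against a capability interface rather than a model family. A minimal sketch, with hypothetical names and stubbed backends in place of real API calls:

```python
from abc import ABC, abstractmethod

class ReasoningBackend(ABC):
    """Capability-oriented interface: callers ask for an outcome,
    not for a specific model family."""

    @abstractmethod
    def predict_outcome(self, situation: str) -> str: ...

class LLMBackend(ReasoningBackend):
    def predict_outcome(self, situation: str) -> str:
        # In practice this would call an LLM API; stubbed for the sketch.
        return f"[llm] likely continuation of: {situation}"

class WorldModelBackend(ReasoningBackend):
    def predict_outcome(self, situation: str) -> str:
        # In practice this would roll a learned simulator forward; stubbed.
        return f"[world-model] simulated result of: {situation}"

def plan(backend: ReasoningBackend, situation: str) -> str:
    """Application code depends only on the interface, so the
    'brain' can be swapped without touching any callers."""
    return backend.predict_outcome(situation)

print(plan(LLMBackend(), "glass slides off the table"))
print(plan(WorldModelBackend(), "glass slides off the table"))
```

If world models pan out, only the backend class changes; the planning code, prompts aside, stays put.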

Similarly, watch the robotics and self-driving car worlds closely. World models AI matters most where real-world grasp is key. If JEPA delivers, robots will be the first to gain. That market alone makes the billion-dollar bet worth it.

Also, think about your own data moat. Whether LLMs or world models win, the firms with the best data will come out on top. Custom data sets, user feedback loops, and field-specific tuning all hold up no matter which model sits at the core.

The Biggest Schism in AI Is Here

In the end, this is about two visions for AI’s future. One camp thinks scaling LLMs will lead to truly smart AI. The other camp thinks we need a whole new design. LeCun leads the second group with a billion dollars and a Turing Award behind him.

Both sides have good points. LLM scaling has made amazing things happen. But gains in logic and truth have slowed down a lot. Meanwhile, world model research shows early promise. Neither side has the final proof yet.

This rivalry helps all of us. More types of AI research lead to better tools. The LLM bubble was getting risky for the whole field. Now we have a well-funded other path pushing in a fresh way.

The next two years will show which path wins. Or more likely, both will find key roles to play. Either way, a Turing Award winner just bet his whole name on killing the LLM model. That is worth your time to think about.
