
The gap between today’s AI and genuine reasoning might come down to something every good teacher already knows: you can’t just tell someone something and expect them to understand it. They have to figure it out themselves.
The Old Idea That’s New Again
Almost 100 years ago, a Swiss psychologist named Jean Piaget watched children learn. His big insight was simple but radical:
Kids don’t learn by being told things. They learn by doing things.
A child doesn’t understand “things fall down” because you explained gravity. They understand it because they dropped their sippy cup 47 times and watched it hit the floor every single time.
The understanding comes from the *experience*, not the explanation.
Teachers have known this forever. It’s why we have labs, not just lectures. Group projects, not just textbooks. Hands-on learning, not just PowerPoints.
So why am I bringing this up in an AI article?
Because this might be the key to the next leap in AI—from systems that complete sentences to systems that actually reason.
What Today’s AI Actually Does
Current AI (ChatGPT, Claude, etc.) learned by reading the internet. Trillions of words. It got very good at one thing:
Predicting what word comes next.
That’s it. That’s the whole trick. “The cat sat on the ___” → “mat” (probably).
It turns out that if you’re really good at predicting the next word, you can do a lot of impressive things. Answer questions. Write code. Summarize documents.
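If “predict the next word” sounds abstract, here is a deliberately tiny sketch in Python. It just counts which word followed which in a scrap of text; real models use neural networks trained on vastly more data, but the job has the same shape.

```python
# A toy next-word predictor (nothing like a real LLM): for each word,
# count which words followed it in the training text, then predict the
# most frequent follower.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug .".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("sat"))  # -> "on", because "sat" was always followed by "on"
print(predict_next("the"))  # -> a tie between "cat", "mat", "dog", "rug"; one wins
```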
But here’s the problem. Richard Sutton—one of the founders of reinforcement learning and a Turing Award winner—put it bluntly in a recent interview:
"These systems predict what a person would *say*. They don't predict what will actually *happen*."¹
That’s the gap between language completion and reasoning.
The AI knows that humans write “water flows downhill” in text. But it doesn’t understand why water flows downhill. It’s never seen water. It’s never been wet. It can’t reason about water in a new situation—it can only recall what’s been written about water before.
The New Direction: World Models
The big AI labs are now working on something different. Instead of teaching AI to predict words, they want to teach it to predict reality.
Imagine a video: a young boy chases his ball across a grassy field. He’s not paying attention. The ball is rolling fast—toward a cliff edge.
What does a world model need to predict?
The ball will fall. Simple physics.
What does a human feel watching this?
Dread. Urgency. The impulse to shout “STOP!”
Current AI can learn the first part—predicting that balls fall off cliffs. Researchers call these systems “world models”: AI with an internal picture of how physics works.
But the second part—the emotional response that makes you care, that makes you act, that tells you this matters—that’s nowhere in the architecture.
This is the bridge to reasoning. If you can predict what will happen, you can plan. You can anticipate consequences. You can think through “if I do X, then Y will happen.” That’s not word completion anymore—that’s logic.
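To make that concrete, here is a minimal sketch in Python with a made-up toy “world” (a ball rolling toward a cliff edge). Nothing here reflects any real lab’s system; it only shows the “if I do X, then Y will happen” structure that a world model makes possible.

```python
CLIFF_EDGE = 10  # in this toy world, positions past 10 are over the edge

def world_model(state, action):
    """Toy world model: predict the next (ball_position, ball_speed)."""
    position, speed = state
    if action == "grab the ball":
        return (position, 0)          # the ball stops where it is
    return (position + speed, speed)  # otherwise it keeps rolling

def score(outcome):
    """How good a predicted outcome is: over the edge is very bad."""
    position, _ = outcome
    return -1000 if position >= CLIFF_EDGE else -position

def plan(state, actions):
    """Choose the action with the best *predicted* consequence."""
    return max(actions, key=lambda action: score(world_model(state, action)))

print(plan(state=(8, 3), actions=["do nothing", "grab the ball"]))
# -> "grab the ball": the model predicts that doing nothing ends over the edge
```

No words are being completed anywhere in that loop; the plan comes from imagining consequences before acting.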
But humans don’t just reason about consequences. We feel them before they happen. That feeling is what makes reasoning matter.
Three Ways People Are Trying to Build This
Approach 1: Simulate Everything
Some companies are building giant physics simulators. Train AI on millions of hours of simulated robots walking, cars driving, objects falling. The pitch: if we simulate physics accurately enough, the AI will learn how reality works.
The catch: The AI is still just watching. It’s not doing anything. It’s like learning to cook by watching 10,000 cooking shows but never touching a pan.
Approach 2: Better Prediction
Researchers like Yann LeCun—Meta’s Chief AI Scientist and another Turing Award winner—are building systems that predict what happens next, not in words, but in abstract “meaning space.”² The AI watches video and learns to anticipate what’s coming.
LeCun has been blunt: "In the first four years of life, a child has seen 50 times more data than the largest LLMs." But that data isn't text—it's *experience*.
The catch: Still watching, not doing. Better than reading text, but still passive.
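Very roughly, “meaning space” prediction looks like this: encode each video frame into a small abstract vector, then learn to predict the next vector rather than the next word or the next pixel. The sketch below is a crude, hypothetical stand-in (a two-number “embedding” and a linear predictor), nothing like the architectures LeCun’s team actually uses.

```python
import numpy as np

def encode(frame):
    """Hypothetical encoder: squash a frame into a short 'meaning' vector."""
    return np.array([frame.mean(), frame.std()])

rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) * t for t in range(1, 6)]  # a fake five-frame "video"

# Encode every frame, then fit a linear predictor: embedding[t+1] ~ embedding[t] @ W
Z = np.stack([encode(f) for f in frames])
W, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

predicted_next = Z[-1] @ W  # the model's guess at the *next* frame's embedding
print("predicted next embedding:", predicted_next)
```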
Approach 3: Let It Try Things
Some robotics labs are putting AI in bodies (real or simulated) and letting it learn from trial and error. Reach for an object, miss, adjust, try again. A 2025 paper makes the connection explicit: “World Models in Artificial Intelligence: Sensing, Learning, and Reasoning Like a Child.”³
This is closest to how kids actually learn. It’s also the hardest and most expensive to do. But it might be the only path to genuine reasoning—because reasoning requires understanding consequences, and consequences require action.
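Here is what “reach, miss, adjust, try again” looks like in its barest form, as a toy Python loop. It is not a real robotics or reinforcement-learning setup; it only shows the feedback pattern: act, observe the miss, correct, repeat.

```python
TARGET = 0.73  # where the object actually is (the learner never sees this number)
guess = 0.0    # where the learner reaches on its first attempt

for attempt in range(1, 21):
    error = TARGET - guess           # feedback from the world: how far off the reach was
    if abs(error) < 0.01:
        print(f"attempt {attempt}: close enough at {guess:.2f}")
        break
    guess += 0.5 * error             # adjust partway toward what the feedback suggests
    print(f"attempt {attempt}: reached {guess:.2f}, missed by {abs(TARGET - guess):.2f}")
```

The knowledge of where the object is lives nowhere except in the history of misses. That is the sense in which consequences require action.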
What’s Missing from All of These
Remember the boy chasing his ball toward the cliff? An AI with a perfect world model could predict: “The ball will fall. The boy might fall too.”
But it wouldn’t feel the urgency. It wouldn’t prioritize this prediction over a thousand other predictions it could make. It would process “boy falls off cliff” with the same flat attention as “leaf blows across grass.”
Here’s what struck me: none of these approaches are building the motivational part.
When a kid learns, they’re not just processing information. They’re:
- Curious about things that seem surprising
- Frustrated when something doesn’t work
- Satisfied when they figure it out
- Scared of things that hurt them before
These feelings aren’t distractions from learning. They are the learning. They tell the kid’s brain what’s important, what to pay attention to, what to remember.
Current AI doesn’t have this. It processes everything with the same flat attention. No curiosity. No satisfaction. No “oh, that’s interesting!”
The biggest labs are building better eyes for AI. Maybe they should also be building better gut feelings.
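Just to make the missing piece concrete: one crude way to imagine a “gut feeling” is a surprise signal that decides how much weight an experience gets. The toy sketch below is purely illustrative and not how any production model works.

```python
experiences = [
    # (what the system predicted, what actually happened)
    ("leaf blows across grass", "leaf blows across grass"),
    ("ball rolls and stops",    "ball rolls off the cliff"),
    ("boy keeps running",       "boy stops at the edge"),
]

def surprise(predicted, actual):
    """Toy surprise score: 0 when the prediction matched, 1 when it didn't."""
    return 0.0 if predicted == actual else 1.0

# Surprising events get priority: more learning weight, first in line to be remembered.
for predicted, actual in sorted(experiences, key=lambda e: surprise(*e), reverse=True):
    print(f"surprise={surprise(predicted, actual):.0f}  remember: {actual}")
```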
Why This Matters If You Work in Learning & Development
1. AI is getting closer to how humans actually learn—slowly.
The tools we’ll see in 2-3 years will be less “search engine that talks” and more “system that reasons through problems.” That’s useful for training, simulation, and coaching applications.
2. Your expertise is more relevant than you think.
The AI researchers are reinventing ideas that L&D professionals have used for decades. Active learning. Scaffolding. Learning by doing. If you understand why those things work, you understand the frontier of AI research—you just use different words for it.
3. The hybrid approach still wins—for now.
AI is great at delivering content and answering questions. Humans are still better at the messy, adaptive, motivational parts of learning. The best training programs will use both, each for what they’re good at.
The Bottom Line: The Path to Reasoning
The gap between today’s AI and something like AGI isn’t just about more data or bigger models. It’s about a fundamental shift in what AI learns to predict.
- LLMs predict language → Good at completing sentences, retrieving information, mimicking expertise
- World models predict reality → Foundation for reasoning, planning, anticipating consequences
The AI labs spent years and billions teaching machines to read. Now they’re realizing that reading isn’t enough.
Understanding comes from doing—from interacting with the world and learning from what happens. That’s how you get from “what would a person say?” to “what will actually happen?”—which is another way of describing the leap from language completion to logical reasoning.
Good teachers have known this for a century. The AI researchers are catching up.
What’s your experience? Are you seeing AI tools that actually help people reason through problems, or mostly just help them find information faster?
References
1. Sutton, R. (2024). Interview with Dwarkesh Patel. The Bitter Lesson.
2. LeCun, Y. (2022). “A Path Towards Autonomous Machine Intelligence.” OpenReview preprint.
3. Del Ser, J., Lobo, J. L., Müller, H., & Holzinger, A. (2025). “World Models in Artificial Intelligence: Sensing, Learning, and Reasoning Like a Child.” arXiv:2503.15168.
Allen Partridge, PhD | Director of Product Evangelism, Adobe Digital Learning Solutions