ChatGPT can write poetry, solve complex mathematical proofs, and engage in surprisingly sophisticated conversations about philosophy. Yet it can’t figure out how to walk across an unfamiliar room without detailed instructions, navigate to the kitchen when it’s hungry, or adapt when someone moves the furniture. Meanwhile, a mouse—with a brain smaller than your thumbnail—can explore a completely new environment, remember where it found food, adapt its strategy when the rules change, and do it all while using less power than a lightbulb.
This isn’t just an amusing contrast. It reveals something profound about the current state of artificial intelligence and where we need to go next. As we celebrate Geoffrey Hinton and John Hopfield’s recent Nobel Prize in Physics for their foundational work on neural networks, it’s the perfect time to ask: what’s the next chapter in the AI revolution?
The Great AI Paradox
We’re living through what I call the “Great AI Paradox.” Our most advanced AI systems can master chess, Go, and even protein folding—tasks that require incredible computational sophistication. But they’re surprisingly brittle when faced with the kind of flexible, real-world intelligence that any animal takes for granted.
Consider this: no machine can build a nest, forage for berries, or care for young. Today’s AI systems cannot match the sensorimotor capabilities of a four-year-old child, or even of simple animals. The reason isn’t that we lack computational power—it’s that we’ve been approaching intelligence from the wrong angle.
As AI pioneer Hans Moravec put it, abstract thought “is a new trick, perhaps less than 100 thousand years old… effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge.” In other words, we’ve been trying to build the penthouse of intelligence without first constructing the foundation.
Enter NeuroAI: Learning from 500 Million Years of R&D
This realization has sparked the emergence of NeuroAI—a field that recognizes something remarkable: evolution has already solved many of the problems we’re struggling with in AI. The connection between neuroscience and computing can be traced to the very foundations of modern computer science, with John von Neumann’s 1945 computer architecture report dedicating an entire chapter to discussing whether the proposed system was sufficiently brain-like.
But now we need to take this symbiosis much further. Rather than focusing on those capabilities that are especially well-developed or uniquely human, we should focus on those capabilities—inherited from over 500 million years of evolution—that are shared with all animals.
This is where Tony Zador and his colleagues propose something brilliant: the “embodied Turing test.” Instead of asking whether AI can fool us in conversation, we should ask whether an artificial beaver can build a dam as skillfully as a real one, or whether an artificial squirrel can navigate through trees with the same agility.
Why Animals Are the Ultimate AI Teachers
Animals possess three crucial capabilities that current AI systems lack:
1. They Engage Their Environment
The defining feature of animals is their ability to move around and interact with their environment in purposeful ways. This isn’t just about locomotion—it’s about understanding how actions affect the world and using that understanding to achieve goals. When you watch a cat stalking prey or a bird building a nest, you’re witnessing sophisticated real-time problem-solving that puts our best robots to shame.
2. They Behave Flexibly
Animals are born with most of the skills needed to thrive or can rapidly acquire them from limited experience, thanks to their strong foundation in real-world interaction, courtesy of evolution and development. Unlike AI systems that catastrophically fail when encountering scenarios outside their training data, animals excel at handling novel situations by drawing on their general understanding of how the world works.
3. They Compute Efficiently
Here’s a staggering comparison: training a large language model such as GPT-3 requires over 1,000 megawatt-hours of electricity—enough to power a small town for a day. The human brain runs on about 20 watts. This isn’t just about saving electricity—it points to fundamentally different computational principles. Biological circuits operate in a regime where spikes are sparse and energy-efficient, using asynchronous communication protocols that could inspire more efficient AI architectures.
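To make the gap concrete, here is a back-of-the-envelope calculation using the two rough figures quoted above (order-of-magnitude estimates, not precise measurements):

```python
# Back-of-the-envelope comparison: GPT-3 training energy vs. brain power draw.
# Both figures are the rough estimates quoted in the text, not exact measurements.
GPT3_TRAINING_MWH = 1_000        # ~1,000 MWh to train GPT-3 (order of magnitude)
BRAIN_POWER_W = 20               # human brain draws roughly 20 watts

training_wh = GPT3_TRAINING_MWH * 1_000_000   # megawatt-hours -> watt-hours
brain_hours = training_wh / BRAIN_POWER_W     # hours a brain could run on that energy
brain_years = brain_hours / (24 * 365)

print(f"A 20 W brain could run for ~{brain_years:,.0f} years "
      f"on GPT-3's training energy budget.")
```

On these assumptions, a single training run corresponds to several millennia of continuous brain operation—a gap too large to close by hardware tweaks alone.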
The Physics of Next-Generation AI
Building on Hinton and Hopfield’s Nobel Prize-winning work, Fenglei Fan and Ge Wang have outlined an exciting vision for the future of AI. Their groundbreaking research suggests that new foundational architectures can be strengthened by incorporating links and loops within networks, aligning with the growing interest in surpassing Transformer architectures to enable next-generation foundational models.
This isn’t just about making AI systems bigger or faster—it’s about making them fundamentally more intelligent by incorporating what they call “dimensionality and dynamics.” Intra-layer links add new dimensions, such as network “height,” alongside the traditional width and depth, enhancing learning capability; feedback loops add dynamics, with loops entangled across scales inducing emergent behaviors akin to phase transitions in physics.
Think of it this way: current AI architectures are like highways—efficient for moving information in straight lines, but limited in their ability to create the complex, interconnected patterns of activity that characterize biological intelligence. The next generation will be more like cities, with intricate networks of connections that enable emergent, flexible behaviors.
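The “links and loops” idea can be sketched in a few lines. The toy network below (my own illustration, not Fan and Wang’s actual model) adds intra-layer lateral connections to an ordinary feedforward layer and iterates the resulting dynamics until the activity settles, so the final answer reflects recurrent interaction rather than a single straight-line pass:

```python
import numpy as np

# Toy sketch of "dimensionality and dynamics": a hidden layer with intra-layer
# (lateral) links whose recurrent dynamics are iterated to a settled state,
# contrasted with a single feedforward pass. Weights and sizes are arbitrary.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8

W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # feedforward weights
W_lat = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # intra-layer links
np.fill_diagonal(W_lat, 0.0)                              # no self-connections

def settle(x, steps=20):
    """Iterate the recurrent dynamics so lateral feedback shapes the state."""
    h = np.tanh(W_in @ x)                  # step 0: pure feedforward pass
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_lat @ h)  # feed activity back through lateral links
    return h

x = rng.normal(size=n_in)
h_ff = np.tanh(W_in @ x)   # highway: one straight-line pass
h_dyn = settle(x)          # city: settled recurrent activity
print("change introduced by recurrence:", np.linalg.norm(h_dyn - h_ff))
```

With small lateral weights the dynamics contract to a fixed point; the difference between the two answers is exactly the contribution of the “loops” that a purely feedforward architecture cannot express.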
Our Efforts in This Domain
This broader NeuroAI vision connects directly to collaborative research efforts my colleagues and I have been pursuing through the Thalamus Conte Center at Princeton. Working alongside Sabine Kastner at Princeton, Martin Usrey at UC Davis, and other talented investigators, we’ve been studying how thalamic circuits—particularly how the mediodorsal thalamus regulates uncertainty and cognitive flexibility—might reveal fundamental principles of biological intelligence.
The thalamus, we’ve found, acts as the brain’s “uncertainty computer,” helping to calibrate confidence in our beliefs and deciding when to explore new possibilities versus exploit current knowledge. This isn’t just academic curiosity—it’s a fundamental computational problem that every intelligent system must solve. When should you stick with what’s working? When should you try something new? How do you balance confidence with curiosity?
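The explore-versus-exploit tension described above has a classic textbook formulation. The sketch below uses the standard UCB1 bandit rule (a generic algorithm, not our thalamic model): act on current reward estimates, but add a bonus for options you are still uncertain about, so the agent keeps sampling alternatives until its confidence is earned. The payoff probabilities are made up for illustration:

```python
import math
import random

# Explore/exploit with the textbook UCB1 rule: pick the arm with the best
# "mean estimate + uncertainty bonus". Hidden payoffs below are hypothetical.
random.seed(1)
true_payoff = [0.3, 0.5, 0.8]   # hidden reward probabilities (assumed)
counts = [0, 0, 0]              # pulls per arm
means = [0.0, 0.0, 0.0]         # running reward estimates

def ucb_choice(t):
    # Try every arm once first; then balance confidence against curiosity.
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(3),
               key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 2001):
    arm = ucb_choice(t)
    reward = 1.0 if random.random() < true_payoff[arm] else 0.0
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # incremental average

print("pulls per arm:", counts)  # the best arm (index 2) should dominate
```

An agent whose uncertainty bonus is knocked out simply repeats whichever arm looked best early on—a crude analogue of the rigid, overconfident behavior we see when thalamic circuits are disrupted.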
Our recent collaborative work shows that when these thalamic circuits are disrupted, people become overconfident in their decisions and lose the ability to seek out information that could improve their choices. They get “stuck” in rigid patterns of behavior—exactly the kind of brittleness that plagues current AI systems.
Understanding how evolution solved the uncertainty problem in biological brains, through this collaborative effort spanning multiple institutions and approaches, could be the key to creating AI systems that are truly adaptive and robust in the face of novel situations.
The Road Ahead: From Lab to Life
The implications of this NeuroAI approach extend far beyond academic laboratories. Imagine AI systems that could:
- Adapt like animals: Robots that learn to navigate new environments with the flexibility of a mouse exploring a new territory
- Learn efficiently: AI that acquires new skills from limited examples, like how animals quickly adapt to new food sources or threats
- Handle uncertainty gracefully: Systems that know when they don’t know, actively seeking information to improve their decisions rather than confidently making wrong choices
- Integrate seamlessly: AI that works alongside humans as naturally as animals coordinate in flocks or herds
This isn’t science fiction—it’s the natural next step in AI development, guided by the world’s most successful intelligence systems: biological brains.
The Challenge and the Opportunity
Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI. But this requires more than just tweaking existing algorithms. We need what Zador and colleagues call a “large-scale effort to identify and understand the principles of biological intelligence and abstract those for application in computer and robotic systems.”
This is exactly the kind of challenge that excites me most as a researcher—and it’s why I’m organizing a NeuroAI workshop at CNS*2025 this July in Florence, Italy. The annual Conference on Computational Neuroscience (July 5-9, 2025) brings together leading minds from neuroscience, AI, and robotics, and our workshop will tackle these fundamental questions: How do we bridge the gap between biological and artificial intelligence? What can 500 million years of evolution teach us about building better AI? And how do we translate insights from animal cognition into practical AI applications?
The convergence of powerful new neuroscience techniques, advanced AI methods, and increasing computational resources creates an unprecedented opportunity. As artificial neural network models become more sophisticated, they provide insights into brain function; and as our understanding of brain function increases, it inspires new algorithms that push the boundaries of what artificial systems are capable of.
A New Kind of Intelligence
We stand at an inflection point in the history of artificial intelligence. The path forward isn’t just about scaling up current approaches—it’s about fundamentally rethinking what intelligence means and how to achieve it.
The basic ingredients of intelligence—adaptability, flexibility, and the ability to make general inferences from sparse observations—are already present in some form in basic sensorimotor circuits, which have been evolving for hundreds of millions of years. By learning from animals—not just humans—we can build AI systems that are not only more capable but more aligned with the biological principles that have proven successful over evolutionary time.
The future of AI isn’t about creating systems that think exactly like humans. It’s about creating systems that embody the fundamental principles of intelligence that have been tested and refined by evolution itself. In doing so, we may not only solve some of AI’s most persistent challenges but also gain deeper insights into the nature of intelligence itself—both artificial and biological.
The mouse exploring a new maze isn’t just finding cheese. It’s demonstrating principles of adaptive intelligence that could transform how we build AI systems. And that’s a lesson worth learning from.
Want to dive deeper into these ideas? Join us at CNS*2025 in Florence, Italy (July 5-9, 2025) for our NeuroAI workshop, where we’ll explore how the convergence of neuroscience and artificial intelligence is shaping the future of both fields. More details at cnsorg.org/cns-2025.
References
Fan, F., & Wang, G. (2025). Dimensionality and dynamics for next-generation artificial neural networks. Patterns, 6(1). https://doi.org/10.1016/j.patter.2024.101079
Zador, A., Escola, S., Richards, B., Ölveczky, B., Bengio, Y., Boahen, K., Botvinick, M., Chklovskii, D., Collins, A., Doya, K., Hassabis, D., Kording, K., Konidaris, G., Marblestone, A., Olshausen, B., Pouget, A., Sejnowski, T., Simoncelli, E., Solla, S., Sussillo, D., Tsao, D., & Tsodyks, M. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. Nature Communications, 14, 1597. https://doi.org/10.1038/s41467-023-37180-x
Zador, A. (2024). NeuroAI: A field born from the symbiosis between neuroscience, AI. The Transmitter. https://www.thetransmitter.org/neuroai/neuroai-a-field-born-from-the-symbiosis-between-neuroscience-ai/
Mackenzie, G., et al. (2025). When the brain’s uncertainty computer goes offline: New human evidence for thalamic regulation of decision-making. michaelhalassa.net. https://michaelhalassa.net/when-the-brains-uncertainty-computer-goes-offline-new-human-evidence-for-thalamic-regulation-of-decision-making/
Thalamus Conte Center. (2024). Princeton University. https://conte.thalamus.princeton.edu/
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160(1), 106-154.
von Neumann, J. (1945). First Draft of a Report on the EDVAC. University of Pennsylvania.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4), 115-133.
Moravec, H. (1988). Mind Children: The Future of Robot and Human Intelligence. Harvard University Press.