A mouse can explore a new environment, find food, and adapt when the rules change, all while using less energy than a light bulb. Meanwhile, our most powerful computers can beat grandmasters at chess and master protein folding, but still can’t walk across a messy room without crashing into a chair.
This contrast reveals something profound about intelligence itself and where we need to go next. As we celebrate Geoffrey Hinton and John Hopfield’s recent Nobel Prize in Physics for their foundational work on neural networks, it’s the perfect time to ask: what’s the next chapter in understanding intelligence?
The Great Intelligence Paradox
We’re living through what some call the “Great Intelligence Paradox.” Our most advanced computational systems can master protein folding and beat world champions at Go, tasks that require incredible sophistication. But they’re surprisingly brittle when faced with the kind of flexible, real-world intelligence that any animal takes for granted.
Consider this: no machine can build a nest, forage for berries, or care for young. Today’s computational systems cannot match the sensorimotor capabilities of a four-year-old child, or even of simple animals. The reason isn’t that we lack computational power; it’s that we’ve been approaching intelligence from the wrong end, starting with abstract reasoning rather than embodied competence.
As researcher Hans Moravec put it, abstract thought “is a new trick, perhaps less than 100 thousand years old, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge.” In other words, when trying to capture natural intelligence, we’ve been focusing on the penthouse without first understanding the foundation.
The Deep History of NeuroAI: A 70-Year Symbiosis
This realization has sparked the emergence of NeuroAI, a field that recognizes something remarkable: evolution has already solved many of the problems we’re struggling with in artificial intelligence. But the connection between neuroscience and computing isn’t new. It can be traced to the very foundations of modern computer science itself.
John von Neumann’s seminal 1945 report outlining the first computer architecture (EDVAC) dedicated an entire chapter to discussing whether the proposed system was sufficiently brain-like. Remarkably, the only citation in this foundational document was to Warren McCulloch and Walter Pitts’ 1943 paper, widely considered the first work on neural networks. This early cross-pollination between neuroscience and computer science set the stage for decades of mutual inspiration.
The relationship deepened with Frank Rosenblatt’s introduction of the perceptron in 1958. The revolutionary idea here wasn’t just that machines could learn, but that they should learn from data rather than being explicitly programmed. Rosenblatt established synaptic connections as the primary locus of learning in artificial neural networks, a concept heavily influenced by Donald Hebb’s 1949 work highlighting the importance of the synapse as the physical basis of learning and memory.
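To see how little machinery this idea requires, here is a minimal sketch of the perceptron learning rule in Python (the toy AND task and parameter choices are illustrative, not taken from Rosenblatt’s paper). Notice that the only quantities that change during learning are the connection weights and bias, the artificial “synapses”:

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Train a perceptron; the weights (artificial 'synapses') are all that learns."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Error-driven weight change at the 'synapses' (Rosenblatt's rule)
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Toy task (illustrative): learn logical AND from four labeled examples
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```

The behavior is learned from labeled examples rather than programmed: nothing in the code mentions AND, yet the weights come to implement it.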
This neuroscience-inspired principle that synapses are the plastic elements of neural networks has remained absolutely central to modern computation. Even when Marvin Minsky and Seymour Papert’s 1969 critique of perceptrons triggered the first “neural network winter,” the core insight persisted.
The symbiosis between artificial and biological neural network research has produced numerous breakthrough success stories. Perhaps the most celebrated is the convolutional neural network (CNN), which powers many of today’s most successful artificial vision systems. CNNs were directly inspired by David Hubel and Torsten Wiesel’s model of the visual cortex, work that earned them a Nobel Prize more than four decades ago.
Another home run is reinforcement learning, which has driven groundbreaking achievements including Google DeepMind’s AlphaGo and AlphaZero. The computational principles underlying these systems mirror the dopamine-mediated learning circuits in biological brains. When a monkey receives a larger reward than it expected, its dopamine neurons fire in a pattern that closely matches the reward prediction error computed by the temporal difference learning algorithms used in these game-playing systems.
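For readers who want the mechanism spelled out, here is a minimal temporal-difference (TD) learning sketch (the toy state chain and parameter values are my own illustrative choices, not anything from AlphaGo). The prediction error `delta` is the quantity whose sign and timing dopamine firing is thought to track:

```python
import numpy as np

n_states, alpha, gamma = 4, 0.1, 0.9
V = np.zeros(n_states + 1)   # value estimate per state; extra entry = terminal state

def td_update(s, r, s_next):
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error ("dopamine signal")
    V[s] += alpha * delta                   # nudge the estimate toward reality
    return delta

for episode in range(300):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0   # reward arrives only at the final step
        td_update(s, r, s + 1)

print(np.round(V[:-1], 2))   # value propagates backward: roughly [0.73 0.81 0.9 1.0]
```

Early in training the reward is “more than expected” and `delta` is large and positive; once the reward is fully predicted, `delta` falls to zero, just as dopamine responses do for well-learned rewards.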
More recently, the concept of “dropout” has gained prominence in artificial neural networks. This technique, in which individual neurons are randomly deactivated during training to prevent overfitting, draws inspiration from the brain’s use of stochastic processes. By mimicking the occasional misfiring of neurons, dropout encourages networks to develop more robust and resilient representations.
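In code, dropout is strikingly simple. Below is a minimal sketch of the standard “inverted dropout” formulation (the array values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_drop=0.5, training=True):
    """Inverted dropout: randomly silence units during training, rescale the rest."""
    if not training:
        return h                               # inference: the layer is deterministic
    mask = rng.random(h.shape) >= p_drop       # each unit 'misfires' with prob p_drop
    return h * mask / (1.0 - p_drop)           # rescale so expected activation is unchanged

h = np.array([0.2, 1.5, 0.7, 0.9])
print(dropout(h))   # a different random subset is zeroed on every call
```

Because a different random subset of units is silenced on every training step, no single neuron can become indispensable, which is exactly the robustness the technique is after.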
Critically, this relationship is truly mutualistic, not parasitic. Computational advances have revolutionized neuroscience as much as neuroscience has inspired computation. Artificial neural networks now form the backbone of state-of-the-art models of the visual cortex. The success of these models in solving complex perceptual tasks has generated new hypotheses about how biological brains might perform similar computations.
Why Animals Are the Ultimate Intelligence Teachers
Instead of trying to replicate what makes humans special, we should look at what makes all animals successful. These are the capabilities that have been tested and refined over 500 million years of evolution.
This is where Tony Zador and his colleagues propose the “embodied Turing test.” The idea is straightforward but profound: instead of asking whether a machine can fool us in conversation, we should ask whether an artificial beaver can build a dam as skillfully as a real one, or whether an artificial squirrel can navigate through trees with the same agility.
This shift in perspective reveals three crucial capabilities that current computational systems lack:
They Engage Their Environment
The defining feature of animals is their ability to move through and act on their environment in purposeful ways. This is more than locomotion: it means understanding how actions affect the world and using that understanding to achieve goals.
Consider the computational challenge this represents. When you watch a cat stalking prey, you’re witnessing real-time integration of visual tracking, motor prediction, uncertainty estimation, and action selection. The cat must predict the prey’s trajectory, estimate the optimal interception point, account for its own motor delays, and continuously update its strategy as the situation evolves. This requires what computational scientists call forward models, inverse models, and optimal control, all running simultaneously in a brain that weighs 30 grams.
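To make “forward model” less abstract, here is a stripped-down sketch of the core computation (the constant-velocity prey and the 50-millisecond motor delay are illustrative assumptions): predict where the target will be once your own motor lag has elapsed, and aim there rather than at its current position.

```python
import numpy as np

MOTOR_DELAY = 0.05   # assumed seconds between issuing a command and the limb moving

def forward_model(prey_pos, prey_vel, dt):
    """Predict the prey's future position under an assumed constant-velocity model."""
    return prey_pos + prey_vel * dt

def select_aim(prey_pos, prey_vel):
    # Aim at where the prey WILL be once our own motor delay has elapsed,
    # not where it is now -- compensating for self-generated lag.
    return forward_model(prey_pos, prey_vel, MOTOR_DELAY)

pos, vel = np.array([1.0, 0.5]), np.array([-2.0, 0.0])
print(select_aim(pos, vel))   # [0.9 0.5]: lead the moving target
```

A real predator is solving a far harder version of this, with noisy sensing, nonlinear dynamics, and continuous replanning, but the logic of predicting forward to cancel one’s own delays is the same.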
Or take nest building in birds. A Baltimore oriole weaves together hundreds of individual grass fibers, each requiring precise motor control and spatial reasoning. The bird must estimate structural integrity in real-time, adapt to varying material properties, and maintain a global architectural plan while executing thousands of local actions. No current robotic system can approach this level of sensorimotor sophistication.
They Behave Flexibly
Thanks to a foundation in real-world interaction laid down by evolution and development, animals are born with most of the skills they need to thrive, or can rapidly acquire them from limited experience. Unlike computational systems that fail catastrophically when they encounter scenarios outside their training data, animals excel at handling novel situations by drawing on a general understanding of how the world works.
This flexibility emerges from what neuroscientists call compositional representation. Rather than memorizing specific stimulus-response patterns, animals build internal models of causal structure that can be recombined in novel ways. A squirrel encountering an unfamiliar tree can still navigate it by applying general principles of branch mechanics, gravity, and momentum.
Recent work by Rajalingham and colleagues has provided a striking demonstration of this principle. They trained monkeys to play “mental Pong,” where a ball disappeared behind a barrier and the animal had to predict where it would emerge. Neural recordings from the monkeys’ frontal cortex revealed that the brain was running a mental physics engine, maintaining an internal trajectory that matched physical reality even when the ball was invisible.
Even more remarkably, when artificial networks were trained on the same task and required to infer the ball’s hidden path, they produced patterns of activity that mirrored those in the monkeys’ frontal cortex. This suggests that biological and artificial systems converge on similar computational solutions to similar problems, though biological systems achieve them with far greater efficiency and flexibility.
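A toy version of such a “mental physics engine” is easy to write down (the geometry, time step, and bounce rule below are illustrative, not the actual task parameters from the study): the state estimate keeps evolving under an internal model even while no visual input arrives.

```python
import numpy as np

def simulate_occluded_ball(pos, vel, dt, steps, y_min=0.0, y_max=1.0):
    """Extrapolate a ball's path while it is invisible, bouncing off walls.

    This mimics the internal simulation suggested by the frontal-cortex
    recordings: the position estimate is updated by the physics model alone,
    with no sensory input.
    """
    pos, vel = np.array(pos, float), np.array(vel, float)
    trajectory = []
    for _ in range(steps):
        pos = pos + vel * dt
        if pos[1] < y_min or pos[1] > y_max:     # bounce off top/bottom walls
            vel[1] = -vel[1]
            pos[1] = np.clip(pos[1], y_min, y_max)
        trajectory.append(pos.copy())
    return np.array(trajectory)

path = simulate_occluded_ball(pos=[0.0, 0.2], vel=[0.5, 0.8], dt=0.05, steps=40)
print(path[-1])   # predicted emergence point behind the occluder
```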
They Compute Efficiently
Here’s a staggering comparison that reveals the depth of the efficiency gap: training a large language model such as GPT-3 is estimated to consume over 1,000 megawatt-hours of electricity, enough to power a small town for a day. The human brain runs on about 20 watts, roughly the power draw of a household light bulb.
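A quick back-of-the-envelope calculation makes the gap tangible (both figures are rough published estimates):

```python
# Back-of-the-envelope version of the gap (both numbers are rough estimates):
gpt3_training_wh = 1_000 * 1_000_000      # ~1,000 MWh, expressed in watt-hours
brain_power_w = 20                         # continuous power draw of a human brain
brain_day_wh = brain_power_w * 24          # 480 Wh per brain per day

brain_days = gpt3_training_wh / brain_day_wh
print(f"{brain_days:,.0f} brain-days (~{brain_days / 365:,.0f} brain-years)")
# -> about 2 million brain-days, i.e. several thousand brain-years
```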
This efficiency gap points to fundamentally different computational principles. Biological circuits operate in a regime where spikes are sparse and energy-efficient, using asynchronous communication protocols that bear little resemblance to the synchronous, dense matrix operations that characterize current computational systems.
The brain achieves this efficiency through several key innovations. First, it uses event-driven computation, where neurons only consume energy when they have something important to communicate. Second, it employs local learning rules that don’t require global coordination or backpropagation of error signals. Third, it multiplexes different types of information in the same circuits, allowing the same neural hardware to support multiple functions depending on context.
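The first of these principles, event-driven computation, can be illustrated in a few lines of code. Here is a minimal leaky integrate-and-fire neuron (the parameter values are illustrative, not drawn from any particular chip): the output is a handful of discrete spike events, and silence, the common case, costs essentially nothing.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: output is sparse, event-driven spikes."""
    v, spikes = 0.0, []
    for t, I in enumerate(input_current):
        v += dt * (-v + I) / tau       # leaky integration of the input current
        if v >= v_thresh:              # threshold crossing -> emit a spike event
            spikes.append(t)
            v = v_reset                # reset and start integrating again
    return spikes

# 200 ms of input: silence, a 100 ms current step, silence again
I = np.concatenate([np.zeros(50), 1.5 * np.ones(100), np.zeros(50)])
print(lif_neuron(I))   # a handful of spike times; most time steps produce no event
```

Contrast this with a dense matrix multiply, where every unit produces a value at every step regardless of whether it has anything to say.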
Recent advances in neuromorphic engineering are beginning to capture some of these principles. Intel’s Loihi chip and IBM’s TrueNorth processor implement spiking neural networks that dramatically reduce power consumption for certain tasks. But we’re still far from achieving the full computational elegance of biological systems.
Our Research: Natural Architectures for Cognitive Flexibility
This broader NeuroAI vision connects directly to collaborative research efforts my colleagues and I have been pursuing through the Thalamus Conte Center at Princeton. Working alongside talented investigators, we’ve been studying how thalamic circuits, particularly the mediodorsal thalamus, regulate uncertainty and cognitive flexibility.
The thalamus has long been thought of as a simple relay station, passively transferring information between brain regions. Our work reveals a far more sophisticated picture: the thalamus acts as a coordinator of cortical representations, actively gating the flow of information based on context, confidence, and computational demands.
Recent findings show that the mediodorsal thalamus exhibits distinct coding properties from prefrontal cortex. While prefrontal areas represent information in high-dimensional, mixed formats that can support many different behaviors, the thalamus compresses this information into lower-dimensional representations focused on key contextual variables like task rules and uncertainty estimates.
This architectural arrangement resembles what computational scientists call “regularization,” where a system constrains its processing to focus on the most relevant dimensions of a problem. The thalamus appears to provide this kind of regularization to prefrontal networks, helping them avoid getting lost in irrelevant details while maintaining the flexibility to handle novel situations.
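To be clear, the following is not a model of the thalamus, just a toy illustration of the computational idea: projecting a high-dimensional “cortical” representation onto a few dominant dimensions discards idiosyncratic detail while keeping the variables that carry most of the structure. All sizes and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a high-dimensional "cortical" representation that secretly depends on
# a few latent task variables (e.g. rule, uncertainty, context), plus noise.
n_trials, n_neurons, n_latent = 500, 100, 3
latents = rng.normal(size=(n_trials, n_latent))
mixing = rng.normal(size=(n_latent, n_neurons))       # each neuron mixes all latents
cortical = latents @ mixing + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Compress: keep only the top principal components of the population activity,
# a stand-in for a low-dimensional, context-focused code.
centered = cortical - cortical.mean(axis=0)
_, s, Vt = np.linalg.svd(centered, full_matrices=False)
compressed = centered @ Vt[:n_latent].T               # (n_trials, 3)

print(cortical.shape, "->", compressed.shape)         # (500, 100) -> (500, 3)
print(np.round(s[:5] / s.sum(), 2))                   # variance concentrates in 3 dims
```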
This has direct implications for understanding psychiatric disorders. Schizophrenia, for instance, involves difficulties with cognitive flexibility and context processing. Our work suggests that these may reflect specific disruptions in thalamic computation rather than global deficits in learning or reasoning.
Understanding how evolution solved the uncertainty problem in biological brains could be the key to creating computational systems that are truly adaptive and robust in the face of novel situations. Current systems struggle precisely because they lack principled ways to handle uncertainty and adjust their confidence based on context.
The Road Ahead: From Lab to Life
The implications of this NeuroAI approach extend far beyond academic laboratories. The convergence of insights from biological intelligence and computational innovation points toward systems that could:
Adapt like animals: Robots that learn to navigate new environments with the flexibility of a mouse exploring new territory. Imagine search and rescue robots that can adapt to novel disaster scenarios, or autonomous vehicles that can handle completely unprecedented road conditions by drawing on fundamental principles of navigation and obstacle avoidance rather than memorized patterns.
Learn efficiently: Systems that acquire new skills from limited examples, like how animals quickly adapt to new food sources or threats. A key insight from biological learning is the importance of strong inductive biases, the built-in assumptions that help guide learning in the right direction. Animals don’t start from scratch; they leverage millions of years of evolutionary optimization.
Handle uncertainty gracefully: Systems that know when they don’t know, actively seeking information to improve their decisions rather than confidently making wrong choices. This requires implementing something like the thalamic uncertainty computation we’ve been studying: a principled way to calibrate confidence and adjust exploration strategies based on the current state of knowledge (a minimal sketch follows after this list).
Integrate seamlessly: Computation that works alongside humans as naturally as animals coordinate in flocks or herds. This requires understanding not just individual intelligence but collective intelligence, how multiple agents can share information and coordinate actions without centralized control.
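As a concrete sketch of “knowing when you don’t know,” here is a minimal Bayesian two-armed bandit using Thompson sampling (the reward probabilities and horizon are illustrative, and this is a generic uncertainty-driven agent, not our thalamic model). The width of each posterior directly controls how much the agent explores.

```python
import numpy as np

rng = np.random.default_rng(1)

true_p = [0.4, 0.6]                       # hidden reward probability of each arm
a_cnt = np.ones(2); b_cnt = np.ones(2)    # Beta(1,1) priors: maximal uncertainty

for t in range(1000):
    samples = rng.beta(a_cnt, b_cnt)      # sample one plausible world per arm
    arm = int(np.argmax(samples))         # act greedily in the sampled world
    reward = rng.random() < true_p[arm]   # observe a stochastic reward
    a_cnt[arm] += reward                  # posterior update: as confidence grows,
    b_cnt[arm] += 1 - reward              # sampling (and thus exploration) narrows

print(a_cnt / (a_cnt + b_cnt))            # posterior mean reward estimate per arm
```

Early on, wide posteriors make the sampled values volatile and the agent explores; as evidence accumulates, the posteriors sharpen and behavior converges on the better arm, with no hand-tuned exploration schedule.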
Recent experimental work provides concrete examples of how these principles might be implemented. Researchers at DeepMind have developed systems that can learn to play multiple Atari games using the same general-purpose algorithm, rather than requiring game-specific training. Their success comes from incorporating biological principles like replay (reactivating and reorganizing memories during rest) and curiosity-driven exploration.
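The replay idea in particular translates directly into code. Here is a minimal experience-replay buffer of the kind used in DQN-style agents (capacity and batch size are illustrative): transitions are stored and revisited out of order, which decorrelates learning updates and lets the agent “rehearse” past experience offline.

```python
import random
from collections import deque

class ReplayBuffer:
    """Store transitions and replay them in random order, DQN-style."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # old memories fall out when full

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Random sampling breaks the temporal correlation between updates
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer()
buf.add(state=0, action=1, reward=0.0, next_state=1, done=False)
print(buf.sample(1))
```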
Similarly, researchers at OpenAI have shown that large language models can exhibit emergent reasoning capabilities when scaled up, suggesting that some aspects of flexible intelligence might emerge from sufficient computational scale combined with appropriate architectural principles.
But perhaps the most promising developments come from robotics, where researchers are beginning to implement embodied learning principles. Boston Dynamics’ robots can navigate complex terrain and recover from perturbations in ways that would have been impossible just a few years ago. Their success comes from combining traditional control theory with machine learning approaches that can adapt to novel situations.
A New Kind of Intelligence
Building models that can pass the embodied Turing test requires more than tweaking existing algorithms. As Zador and colleagues argue, we need a “large-scale effort to identify and understand the principles of biological intelligence and abstract those for application in computer and robotic systems.”
Two key insights emerge from this challenge. First, intelligence isn’t only about building internal representations; it’s also about discovering affordances, the opportunities for action that emerge from the interaction between an agent and its environment. Second, animals don’t just learn; they develop, with their learning capabilities changing over time. Understanding how biological systems bootstrap from simple reflexes to sophisticated reasoning could transform how we build adaptive computational systems.
The convergence of neuroscience and computation offers concrete opportunities for progress. Animals solve computational problems that current systems struggle with, using principles refined over hundreds of millions of years of evolution. The mouse exploring a maze demonstrates flexible navigation, efficient learning from limited experience, and robust generalization. These capabilities emerge from biological circuits that balance exploration with exploitation, build and update internal maps, and adapt to novel situations.
Progress will require sustained collaboration between neuroscientists, computer scientists, and engineers. The questions are concrete: How do biological systems achieve such efficiency? What computational principles underlie adaptive behavior? How can we implement these in artificial systems?
Want to dive deeper into these ideas? Join us at CNS2025 in Florence, Italy (July 5-9, 2025) for our NeuroAI workshop, where we’ll explore how the convergence of neuroscience and computation is shaping the future of both fields. More details at cnsorg.org/cns-2025.
References
Zador, A., Escola, S., Richards, B., Ölveczky, B., Bengio, Y., Boahen, K., Botvinick, M., Chklovskii, D., Collins, A., Doya, K., Hassabis, D., Kording, K., Konidaris, G., Marblestone, A., Olshausen, B., Pouget, A., Sejnowski, T., Simoncelli, E., Solla, S., Sussillo, D., Tsao, D., & Tsodyks, M. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. Nature Communications, 14, 1597. https://doi.org/10.1038/s41467-023-37180-x
Zador, A. (2024). NeuroAI: A field born from the symbiosis between neuroscience and computation. The Transmitter. https://www.thetransmitter.org/neuroai/neuroai-a-field-born-from-the-symbiosis-between-neuroscience-ai/
Rajalingham, R., Sohn, H., & Jazayeri, M. (2025). Dynamic tracking of objects in the macaque dorsomedial frontal cortex. Nature Communications, 16, 346. https://doi.org/10.1038/s41467-024-54688-y
Thalamus Conte Center. (2024). Princeton University. https://conte.thalamus.princeton.edu/
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160(1), 106-154.
von Neumann, J. (1945). First Draft of a Report on the EDVAC. University of Pennsylvania.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4), 115-133.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408.
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. Wiley.
Minsky, M., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press.
Moravec, H. (1988). Mind Children: The Future of Robot and Human Intelligence. Harvard University Press.