<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Neuroscience &#8211; Michael Halassa | Science</title>
	<atom:link href="https://michaelhalassa.net/neuroscience/feed/" rel="self" type="application/rss+xml" />
	<link>https://michaelhalassa.net</link>
	<description>Just another Darin Hardy Sites site</description>
	<lastBuildDate>Mon, 01 Sep 2025 12:08:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://michaelhalassa.net/wp-content/uploads/michaelhalassa-net/2024/07/cropped-Michael-Halassa-Logo-32x32.jpg</url>
	<title>Neuroscience &#8211; Michael Halassa | Science</title>
	<link>https://michaelhalassa.net</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Time is Memory</title>
		<link>https://michaelhalassa.net/time-is-memory/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Mon, 01 Sep 2025 12:08:25 +0000</pubDate>
				<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Working memory]]></category>
		<category><![CDATA[Cognitive Research]]></category>
		<category><![CDATA[Cognitive Science]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Temporal Memory]]></category>
		<category><![CDATA[Time]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=789</guid>

					<description><![CDATA[Michael Halassa discusses how the brain may construct the sense of time from memory, and why temporal distortions occur in experience]]></description>
										<content:encoded><![CDATA[<p>Over the past year, I’ve found a new favorite running trail. It winds through woods, follows riverbanks, and slips through an old industrial complex. The scenery shifts constantly, broken into short, distinct segments.</p>
<p>I was surprised to discover that the run takes about an hour, almost exactly the same as my old trail from the year before. The distances are nearly identical too, which makes sense given that my pace hasn’t changed. And yet, the new trail <em>feels</em> much longer. How come?</p>
<p>The old route was simpler. It had three long, straight stretches where I could see the end from the beginning. Easy to remember, easy to chunk. The new one is nothing like that: shorter segments, sharper turns, and ever-changing backdrops. Every few minutes you’re in a completely new setting, never quite sure what’s around the bend.</p>
<p>That difference got me thinking about how we perceive time. We’ve all had those strange distortions: a memory from years ago that feels recent, or something from last week that feels impossibly distant. Time in the brain is slippery.</p>
<p>So how do we actually track it? Is there an internal clock ticking away? Probably not: decades of searching haven’t turned one up. A more likely explanation is that time is tied to how memories are organized and indexed. Let’s dig into what we actually know.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!EcUq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1456w" type="image/webp" sizes="100vw" /><img fetchpriority="high" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!EcUq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 
1272w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f 3907 4569 a2f0" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2934757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 1"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<h2 class="header-anchor-post"><strong>How Memory Creates Time</strong></h2>
<p>The first clue comes from studying what happens when we remember. In a clever set of experiments, Olivier Jeunehomme and Arnaud D’Argembeau asked people to wear small automatic cameras while walking around a university campus. The cameras snapped photos every few seconds, creating an objective record of the experience. Later, participants were asked to verbally recall their walks while being audio-recorded.</p>
<p>The campus walks lasted around 40 minutes, but when participants recalled them aloud, the descriptions took only about 5 minutes on average. That is roughly an eightfold compression of time.</p>
<p>The compression, however, was uneven. The researchers compared the recall transcripts to the time-stamped camera sequences and divided the narratives into what they called “experience units.” These were discrete remembered moments, such as buying a coffee, turning into a courtyard, or chatting with a classmate. Each unit was mapped back to the original footage so they could calculate how much real-world time it spanned.</p>
<p>The pattern was striking. Short, bounded activities with a clear goal, like making a purchase or opening a door, tended to be preserved in relatively high detail, replayed at about four to five times compression. In contrast, transitional stretches of locomotion, like walking from one building to the next, were compressed far more, sometimes by a factor of twenty or more. Long, uneventful stretches collapsed into a single unit, while activity-rich episodes retained much finer granularity.</p>
<p>These experience units appear to be the basic building blocks of episodic memory. The density of such units determines how long an episode feels in retrospect. More units per minute of clock time make for a richer memory trace and an expanded sense of duration. Fewer units create a thinner trace and a contracted sense of time.</p>
<p>Follow-up studies have highlighted the special role of event boundaries. Jeunehomme and D’Argembeau found that moments marking a change in context, such as entering a building, turning a corner, or meeting a person, were about five times more likely to be recalled than stretches in between. Boundaries act like bookmarks, segmenting the stream of experience and anchoring the flow of time in memory. These anchors not only determine what is remembered, but also shape how long the remembered experience feels.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!cw5r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!cw5r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9 ec2f 4660 b5d9" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3375647,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 2"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<h2 class="header-anchor-post"><strong>The Paradox of Event Boundaries</strong></h2>
<p>Experience units and event boundaries create a fundamental paradox in how we perceive time. Bangert and colleagues (2019, 2020) ran a series of experiments in which participants watched short films of everyday activities while making timing judgments. The films were paused at different points, and participants were asked to estimate whether a brief interval, usually around five seconds, had just passed. The twist was that sometimes the interval contained an event boundary, such as finishing washing dishes and beginning to dry them, and sometimes it did not. Intervals that contained a boundary were consistently judged as shorter than otherwise identical spans without one.</p>
<p>The mechanism behind this compression becomes clearer when we consider what&#8217;s happening in working memory. Swallow and colleagues (2009) tracked this directly by having participants watch movie clips while objects appeared on screen: a knife during sandwich-making, a towel during dishwashing. Five seconds later, the movie would pause for a recognition test. Objects present at event boundaries were recognized significantly better than those at non-boundaries. But this enhancement came with a cost: memory for objects from just before a boundary dropped dramatically. The boundary created a barrier, making it harder to retrieve information from the previous event even though it had occurred mere seconds earlier.</p>
<p>Event Segmentation Theory, developed by Jeffrey Zacks and colleagues in 2007, provides the framework. According to their theory, event boundaries are when the brain discards its current &#8220;event model&#8221; from working memory and uploads a new one. This updating process requires attention, which leaves fewer resources available for keeping track of time. As Bangert and colleagues (2020) demonstrated using dual-task paradigms, devoting attention to updating perceptual and conceptual features of the activity left fewer attentional resources for accumulating temporal information. It&#8217;s like trying to count seconds while also solving a puzzle &#8211; each boundary forces you to solve a new puzzle, and your counting falters.</p>
<p>The paradox is that the very same boundaries that compress time during experience expand it in memory. They serve as landmarks that structure recall, making events feel more spacious in retrospect. This dual effect helps explain a familiar puzzle: why the drive home from a new place usually feels longer than the drive there. On the outbound trip, the brain is constantly updating its models: pass the gas station (boundary), turn at the intersection (boundary), merge onto the highway (boundary). Each update reduces attention for tracking duration, so the drive feels shorter while you are in it. Yet those boundaries also create anchors that expand the memory of the trip. On the return drive the route is familiar, there are fewer surprises, and the brain needs fewer updates. With less attention diverted, duration is tracked more faithfully, so the drive feels longer in the moment but compresses more in memory.</p>
<p>Bangert and colleagues (2019) also tested temporal proximity, asking participants to judge how far apart two moments in the film felt. Boundaries made items seem further apart in time, even when the objective duration was identical. In this sense, boundaries insert psychological distance between moments. They stretch the remembered timeline even while compressing the lived experience of duration.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!REqt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!REqt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!REqt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!REqt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824 837f 43ed ad2a" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3109330,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 3"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<h2 class="header-anchor-post"><strong>The Implications</strong></h2>
<p>This framework explains a wide range of everyday paradoxes. Vacations, filled with novelty, fly by while they happen but expand richly in memory. Daily routines, stripped of boundaries, drag while we live them but collapse into nothing when recalled. Clewett and Davachi (2017) argued that the ebb and flow of experience itself determines the temporal structure of memory. Lositsky and colleagues (2016) showed that the greater the number and diversity of boundaries, the more time expands in recall.</p>
<p>It explains my running puzzle. My old trail was made up of long, predictable stretches, so it generated relatively few event boundaries. My new trail, by contrast, forced segmentation at every turn: woods to riverbank, riverbank to industrial ruins, sharp corner, sudden hill, unexpected vista. Each transition became a boundary, a new chunk in memory. The clock says both trails take about an hour, but memory disagrees. The old one collapses into a few coarse segments, while the new one expands into a much longer-feeling journey.</p>
<p>The principle is simple: if you want something to feel substantial in memory, add boundaries. Change contexts, vary activities, create moments that require updates. If you want time to flow by quickly, keep it continuous and predictable.</p>
<p>But the implications go deeper than personal experience design. This mechanism may explain why time seems to accelerate as we age. Childhood is packed with firsts, each creating boundaries: first day of school, first sleepover, first kiss. Adult life, especially in stable careers and relationships, can become a series of similar days bleeding into each other. The years feel shorter not because our metabolism changes or because of some cosmic injustice, but because we&#8217;re creating fewer distinct memory segments.</p>
<p>The brain doesn&#8217;t keep time like a clock. It builds time from its internal dynamics. The elasticity of time isn&#8217;t an illusion; it&#8217;s how the mind constructs a temporal dimension from the boundaries of experience.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490 94cf 4fd0 9d0f" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3086532,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 4"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<div>
<hr />
</div>
<p><em>If you enjoyed this piece, let me know. I’d love to hear how you’ve experienced time stretching or compressing in your own life. I’ll also be following up with another post that digs into the neural substrates of time perception, exploring how brain circuits generate these distortions.</em></p>
<p><em>If you’d like to read that when it comes out, consider subscribing or sharing this piece with someone who might find it interesting.</em></p>
<div>
<hr />
</div>
<h2 class="header-anchor-post"><strong>Bibliography</strong></h2>
<p>Bangert, A. S., Kurby, C. A., Hughes, A. S., &amp; Carrasco, O. (2019). Crossing event boundaries changes prospective perceptions of temporal length and proximity. <em>Attention, Perception, &amp; Psychophysics</em>, 81(8), 2459-2472.</p>
<p>Block, R. A., &amp; Zakay, D. (1997). Prospective and retrospective duration judgments: A meta-analytic review. <em>Psychonomic Bulletin &amp; Review</em>, 4(2), 184-197.</p>
<p>Clewett, D., &amp; Davachi, L. (2017). The ebb and flow of experience determines the temporal structure of memory. <em>Current Opinion in Behavioral Sciences</em>, 17, 186-193.</p>
<p>Jeunehomme, O., &amp; D&#8217;Argembeau, A. (2020). Event segmentation and the temporal compression of experience in episodic memory. <em>Psychological Research</em>, 84(2), 481-490.</p>
<p>Lositsky, O., Chen, J., Toker, D., Honey, C. J., Shvartsman, M., Poppenk, J. L., &#8230; &amp; Norman, K. A. (2016). Neural pattern change during encoding of a narrative predicts retrospective duration estimates. <em>eLife</em>, 5, e16070.</p>
<p>Swallow, K. M., Zacks, J. M., &amp; Abrams, R. A. (2009). Event boundaries in perception affect memory encoding and updating. <em>Journal of Experimental Psychology: General</em>, 138(2), 236-257.</p>
<p>Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., &amp; Reynolds, J. R. (2007). Event perception: A mind-brain perspective. <em>Psychological Bulletin</em>, 133(2), 273-293.</p>
<p>Zacks, J. M., Kurby, C. A., Eisenberg, M. L., &amp; Haroutunian, N. (2011). Prediction error associated with the perceptual segmentation of naturalistic events. <em>Journal of Cognitive Neuroscience</em>, 23(12), 4057-4066.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Brain&#8217;s &#8220;What If&#8221; Engine: Why Counterfactuals Are Key to Human Intelligence</title>
		<link>https://michaelhalassa.net/counterfactuals-human-intelligence/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Sun, 03 Aug 2025 23:04:06 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Prefrontal cortex]]></category>
		<category><![CDATA[Working memory]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<category><![CDATA[research paper]]></category>
		<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=785</guid>

					<description><![CDATA[Michael Halassa discusses recent work on counterfactual reasoning and its contribution to human cognition]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve always been fascinated by the kinds of thoughts we <em>don&#8217;t</em> act on. In psychiatry, they shape regret, resilience, and rumination. In neuroscience, they reveal a deep truth about how the brain handles uncertainty. Every morning when I&#8217;m running late, I catch myself thinking: &#8220;If only I&#8217;d left five minutes earlier.&#8221; It&#8217;s a fleeting thought, but it represents one of the most computationally sophisticated processes our brains perform: imagining alternative realities that never happened.</p>
<p>Every day, your brain performs millions of &#8220;what if&#8221; calculations without you even noticing. What if I had taken the other route to work? What if I hadn&#8217;t said that in the meeting? What if the ball bounces differently than expected? This capacity for <strong>counterfactual reasoning</strong>, imagining alternative realities that never actually occurred, represents one of the most sophisticated computational achievements of biological intelligence.</p>
<p>A groundbreaking new study published in <em>Nature Human Behaviour</em> by Ramadan, Tang, Watters, and Jazayeri has shed new light on why humans rely on these mentally expensive &#8220;what if&#8221; simulations, revealing computational constraints that force our brains into remarkably clever problem-solving strategies. Their findings illuminate human cognition and change how we understand intelligence itself.</p>
<h2>The Computational Mystery: Why Do We Think in &#8220;What Ifs&#8221;?</h2>
<p>From a purely computational standpoint, counterfactual reasoning seems inefficient. When facing complex decisions, optimal algorithms should simply compute the joint probability of all possible outcomes and pick the best option. So why do humans constantly engage in the seemingly wasteful exercise of imagining alternatives?</p>
<p>The answer, as Ramadan and colleagues discovered, lies in the fundamental constraints that shape how our brains process information. Using an ingenious H-maze task where participants had to track an invisible ball through branching pathways, they uncovered three critical computational bottlenecks that force human cognition into hierarchical and counterfactual strategies:</p>
<p><strong>1. Parallel Processing Bottleneck</strong>: Our brains cannot track all possible trajectories simultaneously. We must break complex problems into sequential, hierarchical steps.</p>
<p><strong>2. Counterfactual Processing Noise</strong>: When we engage in &#8220;what if&#8221; thinking, our working memory introduces noise that degrades the fidelity of these mental simulations.</p>
<p><strong>3. Rational Resource Allocation</strong>: Humans adaptively adjust their reliance on counterfactuals based on how much these mental simulations cost them.</p>
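<p>To see how these bottlenecks interact, here is a deliberately simplified Python sketch (my own illustration, not the authors&#8217; model): a toy version of the H-maze in which the agent tracks one branch at a time and, when its factual guess fails, falls back on a counterfactual memory corrupted by working-memory noise.</p>

```python
import random

def h_maze_trial(cf_noise, rng):
    """One toy H-maze trial.

    The parallel-processing bottleneck forces the agent to track a single
    branch (assumed here to succeed 70% of the time); on failure it falls
    back on a counterfactual memory of the alternative branch, which is
    corrupted with probability cf_noise.
    """
    if rng.random() < 0.7:            # factual tracking succeeds
        return True
    return rng.random() >= cf_noise   # noisy counterfactual fallback

def accuracy(cf_noise, n=20000, seed=0):
    rng = random.Random(seed)
    return sum(h_maze_trial(cf_noise, rng) for _ in range(n)) / n
```

<p>As <code>cf_noise</code> grows, accuracy falls from near-perfect toward the factual-only baseline of 0.7, which is exactly why a rational agent should lean on counterfactuals less when they cost more.</p>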
<h2>A Clever Use of Recurrent Neural Networks to Model the Human Mind</h2>
<p>The research reveals profound insights about intelligence itself. When Ramadan et al. created artificial neural networks and subjected them to the same computational constraints humans face, something remarkable happened: only the networks constrained by all three bottlenecks reproduced human-like behavior.</p>
<p>This finding demonstrates the power of using recurrent neural networks to model human cognition. By constraining artificial networks with the same limitations that shape human thinking, Ramadan et al. created systems that behave remarkably like people. The key insight is that RNNs can capture mental processes like hierarchical and counterfactual reasoning when they face the same computational bottlenecks humans do.</p>
<h3>Neural Architecture of Counterfactual Reasoning</h3>
<p>The neural implementation of counterfactual reasoning tells a more complex story beyond frontal control. Van Hoeck and colleagues&#8217; landmark fMRI study revealed that counterfactual thinking engages a distributed network that hijacks the brain&#8217;s episodic memory system.</p>
<p>When participants imagined &#8220;upward counterfactuals&#8221; (better outcomes for negative past events), their brains activated the same core memory network used for remembering the past and imagining the future: hippocampus, posterior cingulate, inferior parietal lobule, lateral temporal cortices, and medial prefrontal cortex.</p>
<p>This neural architecture makes clear why counterfactual reasoning is computationally expensive. Counterfactual thinking recruited these memory regions more extensively than episodic past or future thinking, and additionally engaged bilateral inferior parietal lobe and posterior medial frontal cortex.</p>
<p>The extra brain activity reflects just how demanding this kind of mental juggling really is: counterfactual reasoning requires simultaneously maintaining factual and counterfactual representations while actively inhibiting the dominant factual reality.</p>
<p>The brain has evolved specialized circuitry for tracking &#8220;what might have been.&#8221; Boorman and colleagues discovered that lateral frontopolar cortex, dorsomedial frontal cortex, and posteromedial cortex form a dedicated network for encoding counterfactual choice values: tracking not just what happened, but whether alternative options might be worth choosing in the future.</p>
<p>This network operates in parallel to the ventromedial prefrontal system that tracks the value of chosen options, suggesting that the brain maintains separate computational channels for factual and counterfactual value processing.</p>
<p>Perhaps most remarkably, recent work has shown that counterfactual information fundamentally transforms how the brain codes value itself. When counterfactual outcomes are available, medial prefrontal and cingulate cortex shift from absolute to relative value coding.</p>
<p>Think of it this way: losing $10 feels terrible if you could have won $50, but feels great if you could have lost $100. The same neural outcome is processed as positive in a loss context (absence of punishment) but negative in a gain context (absence of reward).</p>
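<p>In computational terms, a relative code is just a subtraction; a minimal sketch using the illustrative dollar amounts from above:</p>

```python
def relative_value(outcome, counterfactual):
    """Relative value code: evaluate an outcome against what could have
    happened rather than on an absolute scale."""
    return outcome - counterfactual

# Losing $10 when you could have lost $100 registers as relief (+90)...
relief = relative_value(-10, -100)
# ...while losing $10 when you could have won $50 registers as a sting (-60).
sting = relative_value(-10, 50)
```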
<p>This neural flexibility mirrors the adaptive computational strategies revealed in behavioral studies: the brain dynamically reconfigures its representational schemes based on available information and processing constraints.</p>
<p>These findings illuminate why counterfactual reasoning is both computationally expensive and evolutionarily preserved. The enhanced neural demands reflect genuine computational costs: maintaining multiple alternative representations, binding novel scenario elements, and managing conflict between factual and counterfactual worlds. Yet this system enables the kind of flexible, context-sensitive reasoning that allows humans to learn from paths not taken and adapt behavior based on imagined alternatives.</p>
<h2>The Bounded Rationality Renaissance</h2>
<p>These discoveries are part of a broader renaissance in understanding <strong>bounded rationality</strong>, the idea that intelligent behavior emerges not from perfect optimization, but from smart adaptations to computational limitations.</p>
<p>Herbert Simon&#8217;s revolutionary concept of bounded rationality challenged the assumptions of perfect rationality in classical economic theory, proposing instead that individuals &#8220;satisfice&#8221; (seeking good enough solutions rather than optimal ones) due to limitations in computation, time, information, and cognitive resources.</p>
<p>Simon&#8217;s work recognized that &#8220;perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them.&#8221; This insight has profound implications for both understanding human cognition and designing artificial intelligence systems.</p>
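<p>Satisficing has a natural algorithmic reading; here is a sketch of the idea (my own illustration, with made-up option values):</p>

```python
def satisfice(options, aspiration):
    """Simon-style satisficing: accept the first option whose value clears
    the aspiration level instead of exhaustively searching for the best."""
    best = None
    for name, value in options:
        if value >= aspiration:
            return name               # good enough: stop searching
        if best is None or value > best[1]:
            best = (name, value)
    return best[0]                    # nothing cleared the bar: best seen

choice = satisfice([("route A", 3), ("route B", 7), ("route C", 9)], aspiration=6)
```

<p>The search stops at <code>"route B"</code> even though <code>"route C"</code> is better, trading optimality for computation, which is precisely Simon&#8217;s point.</p>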
<h3>The Bigger Picture</h3>
<p>The Ramadan study reveals something profound: the cognitive strategies we think of as distinct (hierarchical reasoning, counterfactual thinking, simple optimization) actually lie along a continuum. Human intelligence dynamically shifts between these approaches based on available mental resources and task demands.</p>
<p>This has implications beyond neuroscience. If counterfactual reasoning emerges from computational constraints rather than being hardwired, it suggests these &#8220;what if&#8221; processes might be fundamental to any sufficiently complex intelligence, biological or artificial.</p>
<h2>Clinical Frontiers: When Counterfactuals Break Down</h2>
<p>From a clinical perspective, this research offers new windows into psychiatric and neurological conditions. Counterfactual reasoning depends on integrative networks for affective processing, mental simulation, and cognitive control, systems that are systematically altered in psychiatric illness and neurological disease.</p>
<p>Consider a patient with OCD who gets trapped in endless loops of &#8220;what if I didn&#8217;t check the door?&#8221; or someone with depression whose counterfactual thinking spirals into &#8220;if only I were different, everything would be better.&#8221; Understanding the computational basis of these patterns could lead to more targeted therapeutic approaches.</p>
<p>Patients with schizophrenia show specific deficits in counterfactual reasoning when complex non-factual elements are needed to understand social environments. By mapping how these computational processes break down, we&#8217;re gaining new tools for both diagnosis and treatment.</p>
<h2>The Bottom Line: Constraints as Features</h2>
<p>The story of counterfactual reasoning is a story about the power of constraints. What initially appears to be a computational limitation (our inability to process all information in parallel) turns out to be the very foundation of human cognitive flexibility.</p>
<p>The human brain&#8217;s &#8220;what if&#8221; engine represents an elegant solution that emerges from the interplay between computational constraints and adaptive intelligence. As we stand on the brink of artificial general intelligence, perhaps the secret lies not in building systems that can process everything at once, but in building systems that can gracefully adapt to the fundamental constraints that shape all intelligence.</p>
<p>The future of AI may not lie in eliminating human limitations, but in understanding why those limitations exist and what remarkable capabilities they make possible.</p>
<hr />
<p><em>This convergence of neuroscience, cognitive science, and AI represents a fundamental shift in how we understand intelligence. Rather than seeing computational constraints as problems to solve, we&#8217;re beginning to recognize them as the very features that make flexible, adaptive intelligence possible. The brain&#8217;s &#8220;what if&#8221; engine may be a blueprint for the next generation of truly intelligent machines.</em></p>
<p>The next time you wonder what might have been, remember: that question may be the very core of what makes you human.</p>
<hr />
<h2>Bibliography</h2>
<p>Boorman, E. D., Behrens, T. E., &amp; Rushworth, M. F. (2011). Counterfactual choice and learning in a neural network centered on human lateral frontopolar cortex. <em>PLoS Biology</em>, 9(6), e1001093.</p>
<p>Pischedda, D., Palminteri, S., &amp; Coricelli, G. (2020). The effect of counterfactual information on outcome value coding in medial prefrontal and cingulate cortex: From an absolute to a relative neural code. <em>Journal of Neuroscience</em>, 40(16), 3268-3277.</p>
<p>Ramadan, M., Tang, C., Watters, N., &amp; Jazayeri, M. (2025). Computational basis of hierarchical and counterfactual information processing. <em>Nature Human Behaviour</em>. doi:10.1038/s41562-025-02232-3.</p>
<p>Simon, H. A. (1955). A behavioral model of rational choice. <em>Quarterly Journal of Economics</em>, 69(1), 99-118.</p>
<p>Van Hoeck, N., Ma, N., Ampe, L., Baetens, K., Vandekerckhove, M., &amp; Van Overwalle, F. (2013). Counterfactual thinking: An fMRI study on changing the past for a better future. <em>Social Cognitive and Affective Neuroscience</em>, 8(5), 556-564.</p>
<p>Van Hoeck, N., Watson, P. D., &amp; Barbey, A. K. (2015). Cognitive neuroscience of human counterfactual reasoning. <em>Frontiers in Human Neuroscience</em>, 9, 420.</p>
<p>Zador, A., Escola, S., Richards, B., et al. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. <em>Nature Communications</em>, 14, 1597.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Next Chapter of AI: Leveraging the Evolutionary Principles Powering Human Intelligence</title>
		<link>https://michaelhalassa.net/neuroai2025/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Thu, 17 Jul 2025 09:04:13 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Cognitive Processing]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Brain scientist]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=772</guid>

					<description><![CDATA[Michael Halassa explores the intersection between Neuroscience and AI (NeuroAI) highlighting research on flexible cognition.]]></description>
					<content:encoded><![CDATA[<p>A mouse can explore a new environment, find food and adapt when the rules change, all using less energy than a lightbulb. Meanwhile, our most powerful computers can master chess and protein folding, but still can’t walk across a messy room without crashing into a chair.</p>
<p>This contrast reveals something profound about intelligence itself and where we need to go next. As we celebrate Geoffrey Hinton and John Hopfield&#8217;s recent Nobel Prize in Physics for their foundational work on neural networks, it&#8217;s the perfect time to ask: what&#8217;s the next chapter in understanding intelligence?</p>
<p><strong>The Great Intelligence Paradox</strong></p>
<p>We&#8217;re living through what some call the &#8220;Great Intelligence Paradox.&#8221; Our most advanced computational systems can master protein folding and beat world champions at Go, tasks that require incredible sophistication. But they&#8217;re surprisingly brittle when faced with the kind of flexible, real-world intelligence that any animal takes for granted.</p>
<p>Consider this: no machine can build a nest, forage for berries, or care for young. Today&#8217;s computational systems cannot compete with the sensorimotor capabilities of a four-year-old child or even simple animals. The reason isn&#8217;t that we lack computational power. It&#8217;s that we&#8217;ve been approaching intelligence from a different angle.</p>
<p>As researcher Hans Moravec put it, abstract thought &#8220;is a new trick, perhaps less than 100 thousand years old, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge.&#8221; In other words, when trying to capture natural intelligence, we&#8217;ve been focusing on the penthouse without first understanding the foundation.</p>
<p><strong>The Deep History of NeuroAI: A 70-Year Symbiosis</strong></p>
<p>This realization has sparked the emergence of NeuroAI, a field that recognizes something remarkable: evolution has already solved many of the problems we&#8217;re struggling with in artificial intelligence. But the connection between neuroscience and computing isn&#8217;t new. It can be traced to the very foundations of modern computer science itself.</p>
<p>John von Neumann&#8217;s seminal 1945 report outlining the first computer architecture (EDVAC) dedicated an entire chapter to discussing whether the proposed system was sufficiently brain-like. Remarkably, the only citation in this foundational document was to Warren McCulloch and Walter Pitts&#8217; 1943 paper, widely considered the first work on neural networks. This early cross-pollination between neuroscience and computer science set the stage for decades of mutual inspiration.</p>
<p>The relationship deepened with Frank Rosenblatt&#8217;s introduction of the perceptron in 1958. The revolutionary idea here wasn&#8217;t just that machines could learn, but that they should learn from data rather than being explicitly programmed. Rosenblatt established synaptic connections as the primary locus of learning in artificial neural networks, a concept heavily influenced by Donald Hebb&#8217;s 1949 work highlighting the importance of the synapse as the physical basis of learning and memory.</p>
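<p>Rosenblatt&#8217;s learning rule fits in a few lines. As a sketch (illustrative modern Python, not period code), here is a single perceptron learning the logical AND function by adjusting its &#8220;synaptic&#8221; weights only when it errs:</p>

```python
def perceptron_step(w, b, x, target, lr=1.0):
    """One perceptron update: nudge weights in proportion to the error."""
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    err = target - pred
    return [wi + lr * err * xi for wi, xi in zip(w, x)], b + lr * err

# Learn logical AND from labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(10):                   # a few epochs suffice for AND
    for x, t in data:
        w, b = perceptron_step(w, b, x, t)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in data]            # -> [0, 0, 0, 1]
```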
<p>This neuroscience-inspired principle that synapses are the plastic elements of neural networks has remained absolutely central to modern computation. Even when Marvin Minsky and Seymour Papert&#8217;s 1969 critique of perceptrons triggered the first &#8220;neural network winter,&#8221; the core insight persisted.</p>
<p>The symbiosis between artificial and biological neural network research has produced numerous breakthrough success stories. Perhaps the most celebrated is the convolutional neural network (CNN), which powers many of today&#8217;s most successful artificial vision systems. CNNs were directly inspired by David Hubel and Torsten Wiesel&#8217;s model of the visual cortex, work that earned them a Nobel Prize more than four decades ago.</p>
<p>Another home run is reinforcement learning, which has driven groundbreaking achievements including Google DeepMind&#8217;s AlphaZero and AlphaGo. The computational principles underlying these systems mirror the dopamine-mediated learning circuits in biological brains. When a monkey reaches for a reward and receives more than expected, dopamine neurons fire in patterns that precisely match the temporal difference learning algorithms used in these game-playing systems.</p>
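<p>The core of that correspondence is the temporal-difference prediction error. A minimal sketch (tabular values, made-up numbers):</p>

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference update; the error delta = r + gamma*V(s') - V(s)
    is the quantity phasic dopamine responses are thought to track."""
    delta = reward + gamma * next_value - value
    return value + alpha * delta, delta

# A larger-than-expected reward gives a positive error (a dopamine burst)...
v1, burst = td_update(value=0.5, reward=1.0, next_value=0.0)
# ...an expected-but-omitted reward gives a negative one (a dopamine dip).
v2, dip = td_update(value=0.5, reward=0.0, next_value=0.0)
```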
<p>More recently, the concept of &#8220;dropout&#8221; has gained prominence in artificial neural networks. This technique, in which individual neurons are randomly deactivated during training to prevent overfitting, draws inspiration from the brain&#8217;s use of stochastic processes. By mimicking the occasional misfiring of neurons, dropout encourages networks to develop more robust and resilient representations.</p>
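<p>As a sketch of the mechanism (the &#8220;inverted&#8221; variant common in modern frameworks):</p>

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    """Inverted dropout: during training, silence each unit with probability p
    and rescale survivors by 1/(1-p) so the expected activation is unchanged;
    at test time, activations pass through untouched."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

<p>The rescaling by <code>1/(1-p)</code> is what lets the same network run deterministically at test time without changing its expected input statistics.</p>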
<p>Critically, this relationship is truly mutualistic, not parasitic. Computational advances have revolutionized neuroscience as much as neuroscience has inspired computation. Artificial neural networks now form the backbone of state-of-the-art models of the visual cortex. The success of these models in solving complex perceptual tasks has generated new hypotheses about how biological brains might perform similar computations.</p>
<p><strong>Why Animals Are the Ultimate Intelligence Teachers</strong></p>
<p>Instead of trying to replicate what makes humans special, we should look at what makes all animals successful. These are the capabilities that have been tested and refined over 500 million years of evolution.</p>
<p>This is where Tony Zador and his colleagues propose the &#8220;embodied Turing test.&#8221; The idea is straightforward but profound: instead of asking whether a machine can fool us in conversation, we should ask whether an artificial beaver can build a dam as skillfully as a real one, or whether an artificial squirrel can navigate through trees with the same agility.</p>
<p>This shift in perspective reveals three crucial capabilities that current computational systems lack:</p>
<p><strong>They Engage Their Environment</strong></p>
<p>The defining feature of animals is their ability to move around and interact with their environment in purposeful ways. It&#8217;s about understanding how actions affect the world and using that understanding to achieve goals.</p>
<p>Consider the computational challenge this represents. When you watch a cat stalking prey, you&#8217;re witnessing real-time integration of visual tracking, motor prediction, uncertainty estimation, and action selection. The cat must predict the prey&#8217;s trajectory, estimate the optimal interception point, account for its own motor delays, and continuously update its strategy as the situation evolves. This requires what computational scientists call forward models, inverse models, and optimal control, all running simultaneously in a brain that weighs 30 grams.</p>
<p>Or take nest building in birds. A Baltimore oriole weaves together hundreds of individual grass fibers, each requiring precise motor control and spatial reasoning. The bird must estimate structural integrity in real-time, adapt to varying material properties, and maintain a global architectural plan while executing thousands of local actions. No current robotic system can approach this level of sensorimotor sophistication.</p>
<p><strong>They Behave Flexibly</strong></p>
<p>Animals are born with most of the skills needed to thrive or can rapidly acquire them from limited experience, thanks to their strong foundation in real-world interaction, courtesy of evolution and development. Unlike computational systems that catastrophically fail when encountering scenarios outside their training data, animals excel at handling novel situations by drawing on their general understanding of how the world works.</p>
<p>This flexibility emerges from what neuroscientists call compositional representation. Rather than memorizing specific stimulus-response patterns, animals build internal models of causal structure that can be recombined in novel ways. A squirrel encountering an unfamiliar tree can still navigate it by applying general principles of branch mechanics, gravity, and momentum.</p>
<p>Recent work by Rajalingham and colleagues has provided a striking demonstration of this principle. They trained monkeys to play &#8220;mental Pong,&#8221; where a ball disappeared behind a barrier and the animal had to predict where it would emerge. Neural recordings from the monkeys&#8217; frontal cortex revealed that the brain was running a mental physics engine, maintaining an internal trajectory that matched physical reality even when the ball was invisible.</p>
<p>Even more remarkably, when computational systems were trained on the same task but required to infer the ball&#8217;s hidden path, they produced patterns of activity that mirrored the monkey frontal cortex. This suggests that both biological and artificial systems converge on similar computational solutions when solving similar problems, but biological systems achieve this with far greater efficiency and flexibility.</p>
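<p>The computation at the heart of &#8220;mental Pong&#8221; can be caricatured in one dimension: once the ball is occluded, keep integrating its last-seen velocity and reflect off the walls. A toy sketch (my own, not the study&#8217;s model):</p>

```python
def extrapolate(x, vx, dt=0.05, steps=40, lo=0.0, hi=1.0):
    """Internal-model tracking of an occluded ball: integrate the last
    observed velocity and bounce, Pong-style, off the court walls."""
    for _ in range(steps):
        x += vx * dt
        if x > hi:                    # reflect off the far wall
            x, vx = 2 * hi - x, -vx
        elif x < lo:                  # reflect off the near wall
            x, vx = 2 * lo - x, -vx
    return x

# Ball vanishes at x=0 moving at +1 unit/s; predict its position 2 s later
# (it travels to the far wall, bounces, and returns to the start).
predicted = extrapolate(0.0, 1.0, dt=0.05, steps=40)
```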
<p><strong>They Compute Efficiently</strong></p>
<p>Here&#8217;s a staggering comparison that reveals the depth of the efficiency gap: training a large language model such as GPT-3 requires over 1000 megawatt-hours, enough electricity to power a small town for a day. The human brain uses about 20 watts, roughly the same as a bright light bulb.</p>
<p>This efficiency gap points to fundamentally different computational principles. Biological circuits operate in a regime where spikes are sparse and energy-efficient, using asynchronous communication protocols that bear little resemblance to the synchronous, dense matrix operations that characterize current computational systems.</p>
<p>The brain achieves this efficiency through several key innovations. First, it uses event-driven computation, where neurons only consume energy when they have something important to communicate. Second, it employs local learning rules that don&#8217;t require global coordination or backpropagation of error signals. Third, it multiplexes different types of information in the same circuits, allowing the same neural hardware to support multiple functions depending on context.</p>
<p>Recent advances in neuromorphic engineering are beginning to capture some of these principles. Intel&#8217;s Loihi chip and IBM&#8217;s TrueNorth processor implement spiking neural networks that dramatically reduce power consumption for certain tasks. But we&#8217;re still far from achieving the full computational elegance of biological systems.</p>
<p><strong>Our Research: Natural Architectures for Cognitive Flexibility</strong></p>
<p>This broader NeuroAI vision connects directly to collaborative research efforts my colleagues and I have been pursuing through the Thalamus Conte Center at Princeton. Working alongside talented investigators, we&#8217;ve been studying how thalamic circuits, particularly the mediodorsal thalamus, regulate uncertainty and cognitive flexibility.</p>
<p>The thalamus has long been thought of as a simple relay station, passively transferring information between brain regions. Our work reveals a far more sophisticated picture: the thalamus acts as a regulator of cortical representations, actively gating the flow of information based on context, confidence, and computational demands.</p>
<p>Recent findings show that the mediodorsal thalamus exhibits distinct coding properties from prefrontal cortex. While prefrontal areas represent information in high-dimensional, mixed formats that can support many different behaviors, the thalamus compresses this information into lower-dimensional representations focused on key contextual variables like task rules and uncertainty estimates.</p>
<p>This architectural arrangement resembles what computational scientists call &#8220;regularization,&#8221; where a system constrains its processing to focus on the most relevant dimensions of a problem. The thalamus appears to provide this kind of regularization to prefrontal networks, helping them avoid getting lost in irrelevant details while maintaining the flexibility to handle novel situations.</p>
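<p>One way to picture this arrangement (a caricature, with invented numbers and axes) is a fixed low-rank readout that collapses a high-dimensional, mixed-selectivity cortical vector onto a couple of task variables:</p>

```python
def thalamic_readout(cortical, projection):
    """Toy linear bottleneck: compress high-dimensional 'cortical' activity
    into a few context variables via a fixed low-rank projection."""
    return [sum(w * a for w, a in zip(row, cortical)) for row in projection]

cortical = [0.2, -0.5, 1.0, 0.3, -0.1, 0.8, 0.0, 0.4]   # 8 mixed-selective units
projection = [
    [1, 0, 1, 0, 1, 0, 1, 0],   # hypothetical 'task rule' axis
    [0, 1, 0, 1, 0, 1, 0, 1],   # hypothetical 'uncertainty' axis
]
context = thalamic_readout(cortical, projection)         # 8-D -> 2-D
```

<p>Downstream circuits that read only <code>context</code> are, in effect, regularized: they cannot overfit to the dimensions the projection discards.</p>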
<p>This has direct implications for understanding psychiatric disorders. Schizophrenia, for instance, involves difficulties with cognitive flexibility and context processing. Our work suggests that these may reflect specific disruptions in thalamic computation rather than global deficits in learning or reasoning.</p>
<p>Understanding how evolution solved the uncertainty problem in biological brains could be the key to creating computational systems that are truly adaptive and robust in the face of novel situations. Current systems struggle precisely because they lack principled ways to handle uncertainty and adjust their confidence based on context.</p>
<p><strong>The Road Ahead: From Lab to Life</strong></p>
<p>The implications of this NeuroAI approach extend far beyond academic laboratories. The convergence of insights from biological intelligence and computational innovation points toward systems that could:</p>
<p><strong>Adapt like animals</strong>: Robots that learn to navigate new environments with the flexibility of a mouse exploring new territory. Imagine search and rescue robots that can adapt to novel disaster scenarios, or autonomous vehicles that can handle completely unprecedented road conditions by drawing on fundamental principles of navigation and obstacle avoidance rather than memorized patterns.</p>
<p><strong>Learn efficiently</strong>: Systems that acquire new skills from limited examples, like how animals quickly adapt to new food sources or threats. A key insight from biological learning is the importance of strong inductive biases, the built-in assumptions that help guide learning in the right direction. Animals don&#8217;t start from scratch; they leverage millions of years of evolutionary optimization.</p>
<p><strong>Handle uncertainty gracefully</strong>: Systems that know when they don&#8217;t know, actively seeking information to improve their decisions rather than confidently making wrong choices. This requires implementing something like the thalamic uncertainty computation we&#8217;ve been studying, a principled way to calibrate confidence and adjust exploration strategies based on current knowledge state.</p>
<p><strong>Integrate seamlessly</strong>: Computation that works alongside humans as naturally as animals coordinate in flocks or herds. This requires understanding not just individual intelligence but collective intelligence, how multiple agents can share information and coordinate actions without centralized control.</p>
<p>Recent experimental work provides concrete examples of how these principles might be implemented. Researchers at DeepMind have developed systems that can learn to play multiple Atari games using the same general-purpose algorithm, rather than requiring game-specific training. Their success comes from incorporating biological principles like replay (reactivating and reorganizing memories during rest) and curiosity-driven exploration.</p>
<p>Similarly, researchers at OpenAI have shown that large language models can exhibit emergent reasoning capabilities when scaled up, suggesting that some aspects of flexible intelligence might emerge from sufficient computational scale combined with appropriate architectural principles.</p>
<p>But perhaps the most promising developments come from robotics, where researchers are beginning to implement embodied learning principles. Boston Dynamics&#8217; robots can navigate complex terrain and recover from perturbations in ways that would have been impossible just a few years ago. Their success comes from combining traditional control theory with machine learning approaches that can adapt to novel situations.</p>
<p><strong>A New Kind of Intelligence</strong></p>
<p>Building models that can pass the embodied Turing test requires more than tweaking existing algorithms. As Zador and colleagues argue, we need a &#8220;large-scale effort to identify and understand the principles of biological intelligence and abstract those for application in computer and robotic systems.&#8221;</p>
<p>Two key insights emerge from this challenge. First, intelligence isn&#8217;t about building internal representations; it&#8217;s about discovering affordances, the opportunities for action that emerge from the interaction between an agent and its environment. Second, animals don&#8217;t just learn; they develop, with their learning capabilities changing over time. Understanding how biological systems bootstrap from simple reflexes to sophisticated reasoning could transform how we build adaptive computational systems.</p>
<p>The convergence of neuroscience and computation offers concrete opportunities for progress. Animals solve computational problems that current systems struggle with, using principles refined over hundreds of millions of years of evolution. The mouse exploring a maze demonstrates flexible navigation, efficient learning from limited experience, and robust generalization. These capabilities emerge from biological circuits that balance exploration with exploitation, build and update internal maps, and adapt to novel situations.</p>
<p>Progress will require sustained collaboration between neuroscientists, computer scientists, and engineers. The questions are concrete: How do biological systems achieve such efficiency? What computational principles underlie adaptive behavior? How can we implement these in artificial systems?</p>
<p>Want to dive deeper into these ideas? Join us at CNS2025 in Florence, Italy (July 5-9, 2025) for our NeuroAI workshop, where we&#8217;ll explore how the convergence of neuroscience and computation is shaping the future of both fields. More details at cnsorg.org/cns-2025.</p>
<p><strong>References</strong></p>
<p>Zador, A., Escola, S., Richards, B., Ölveczky, B., Bengio, Y., Boahen, K., Botvinick, M., Chklovskii, D., Collins, A., Doya, K., Hassabis, D., Kording, K., Konidaris, G., Marblestone, A., Olshausen, B., Pouget, A., Sejnowski, T., Simoncelli, E., Solla, S., Sussillo, D., Tsao, D., &amp; Tsodyks, M. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. <em>Nature Communications</em>, 14, 1597. https://doi.org/10.1038/s41467-023-37180-x</p>
<p>Zador, A. (2024). NeuroAI: A field born from the symbiosis between neuroscience and computation. <em>The Transmitter</em>. https://www.thetransmitter.org/neuroai/neuroai-a-field-born-from-the-symbiosis-between-neuroscience-ai/</p>
<p>Rajalingham, R., Sohn, H. &amp; Jazayeri, M. (2025). Dynamic tracking of objects in the macaque dorsomedial frontal cortex. <em>Nature Communications</em>, 16, 346. https://doi.org/10.1038/s41467-024-54688-y</p>
<p>Thalamus Conte Center. (2024). Princeton University. https://conte.thalamus.princeton.edu/</p>
<p>Hubel, D. H., &amp; Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat&#8217;s visual cortex. <em>The Journal of Physiology</em>, 160(1), 106-154.</p>
<p>von Neumann, J. (1945). First Draft of a Report on the EDVAC. University of Pennsylvania.</p>
<p>McCulloch, W. S., &amp; Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. <em>Bulletin of Mathematical Biophysics</em>, 5(4), 115-133.</p>
<p>Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. <em>Psychological Review</em>, 65(6), 386-408.</p>
<p>Hebb, D. O. (1949). <em>The Organization of Behavior: A Neuropsychological Theory</em>. Wiley.</p>
<p>Minsky, M., &amp; Papert, S. (1969). <em>Perceptrons: An Introduction to Computational Geometry</em>. MIT Press.</p>
<p>Moravec, H. (1988). <em>Mind Children: The Future of Robot and Human Intelligence</em>. Harvard University Press.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence</title>
		<link>https://michaelhalassa.net/machines-that-think-like-us/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Wed, 16 Jul 2025 21:07:04 +0000</pubDate>
				<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Digital Twins]]></category>
		<category><![CDATA[LLMs]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<category><![CDATA[Transformers]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=763</guid>

					<description><![CDATA[Michael Halassa discusses recent insights from the NeuroAI workshop at the OCNS meeting in Florence 2025]]></description>
					<content:encoded><![CDATA[<p>On July 9, 2025, Z. Sage Chen (NYU) and I organized &#8220;The BRAIN 2.0 NeuroAI&#8221; workshop at the Organization for Computational Neurosciences (OCNS) annual meeting in Florence. The workshop brought together several scientists working at the intersection of natural and artificial intelligence research. The energy was high, and one could feel the enthusiasm in the air: for the first time in human history, we have machines that appear to learn, remember, and make decisions in ways that mirror core aspects of human cognition. This creates an unprecedented opportunity: we can now understand our minds by building artificial systems that think and behave like us.</p>
<p>The workshop conversations were bidirectional. In one direction, people asked: what can the strategies and mechanisms of artificial networks tell us about how we function? In the other, we collectively asked: can we leverage what we are constantly learning about neuroscience to build better AI? After all, the energy efficiency and flexibility of animal brains are unmatched by state-of-the-art artificial agents.</p>
<p>This represents a remarkable shift from traditional approaches. Instead of studying brains and machines in isolation, we&#8217;re using them to inform each other. The artificial systems we create serve as hypotheses about how intelligence works, hypotheses we can test, modify, and refine in ways that would be impossible with biological systems alone. Throughout the workshop, a fascinating tension emerged: the most accurate models of neural activity may not be the most interpretable or biologically meaningful ones—a fundamental tradeoff that shapes how we understand minds.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!3V0d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!3V0d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!3V0d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!3V0d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919 3696 4ae2 9f60" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3059275,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 5"></picture>
</div>
</figure>
</div>
<p>Sage (left) and I (right)</p>
<h2 class="header-anchor-post">The Mystery of How the Brain Learns</h2>
<p>The development and application of backpropagation to deep networks created watershed moments that contributed to the modern AI era. While backpropagation itself was developed earlier, Geoff Hinton&#8217;s 2006 work on deep belief networks and especially the 2012 AlexNet breakthrough that won the ImageNet competition marked the real turning points. The backpropagation algorithm allows neural networks to learn by computing how errors in behavioral outputs should change weights throughout the network and has shown remarkable learning efficiency. It works by propagating error signals backward through the network, telling each connection exactly how to adjust to reduce mistakes.</p>
<p>Since then, backpropagation has become the backbone of modern deep learning, powering everything from image recognition to language models. But here&#8217;s the puzzle: this remarkable learning efficiency is likely mirrored by the brain, yet we don&#8217;t know what the analogous biological algorithm is. The brain cannot implement standard backpropagation, as it lacks the requisite backward connectivity and global, vectorized error signals that artificial networks rely on.</p>
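<p>To make the mechanism concrete, here is a minimal sketch of backpropagation in a tiny two-layer network. The toy task, layer sizes, and learning rate are all invented for illustration:</p>

```python
import numpy as np

# Toy binary classification: label is 1 when the features sum above zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))                  # 16 samples, 4 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0

W1 = rng.normal(scale=0.5, size=(4, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights
lr = 0.1

for _ in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1)                       # hidden activity
    p = 1 / (1 + np.exp(-(h @ W2)))           # sigmoid output
    # Backward pass: the output error is propagated layer by layer,
    # telling each weight exactly how to change to reduce the loss.
    dp = (p - y) / len(X)                     # cross-entropy gradient
    dW2 = h.T @ dp
    dh = (dp @ W2.T) * (1 - h**2)             # chain rule through tanh
    dW1 = X.T @ dh
    W2 -= lr * dW2
    W1 -= lr * dW1

loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

<p>The backward pass is just the chain rule applied layer by layer. Note that updating W1 requires knowing W2: exactly the kind of non-local information a biological synapse does not have.</p>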
<p>Traditional Computational Neuroscience has long proposed Hebbian learning (&#8220;cells that fire together, wire together&#8221;) as the brain&#8217;s learning mechanism. While Hebbian learning is biologically plausible and occurs throughout the nervous system, its classical formulations lack the credit assignment specificity of backpropagation. Hebbian learning can strengthen connections between simultaneously active neurons, but it struggles to determine which specific connections are responsible for errors in complex, multilayered networks. This creates a fundamental gap: the brain needs backpropagation-like credit assignment to learn complex behaviors, but it can only implement local plasticity rules.</p>
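<p>For contrast, a classical Hebbian update fits in a few lines; the activities and sizes below are invented for illustration. The rule sees only local coactivity, never an error signal:</p>

```python
import numpy as np

# One postsynaptic neuron (index 0) is made to track presynaptic neuron 0,
# so that pair is strongly correlated; the other activity is noise.
rng = np.random.default_rng(1)
T, n_pre, n_post = 200, 4, 2
pre = rng.random((T, n_pre))                        # presynaptic activity
post = np.column_stack([pre[:, 0], rng.random(T)])  # postsynaptic activity

eta = 0.01
W = np.zeros((n_pre, n_post))
for t in range(T):
    # Purely local rule: dW_ij depends only on pre_i and post_j activity.
    W += eta * np.outer(pre[t], post[t])
```

<p>The correlated pair ends up with the largest weight, but nothing in the update says whether that strengthening actually reduces any downstream error: that is exactly the credit assignment gap described above.</p>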
<p>This gap was a central theme throughout our workshop, with speakers presenting different pieces of what might be a larger puzzle. Rui Ponte Costa&#8217;s work on different learning mechanisms across neural circuits presents a compelling case for partitioning the credit assignment problem across different substrates distributed throughout the brain. For example, the cortex learns via self-supervised learning but can be influenced by fast predictive subcortical machinery to adjust its representations quickly and flexibly. This parallels some of our own work on thalamocortical interactions, including our longstanding collaboration with Sage Chen&#8217;s lab. Gaspard Olivier presented his PhD work with Rafal Bogacz, showing that predictive learning can achieve backpropagation-like performance (under certain conditions) using purely local mechanisms. Nao Uchida demonstrated how distributional reinforcement learning (where the brain represents entire probability distributions of future rewards rather than simple averages) could provide another piece of the credit assignment puzzle. The emerging picture suggests that the brain&#8217;s backpropagation parallel is likely a combination of these mechanisms working together, rather than any single biological algorithm.</p>
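<p>One way to picture distributional reinforcement learning is with asymmetric, expectile-style updates: each value unit has its own optimism level, so the population jointly encodes the reward distribution rather than a single average. This is an illustrative sketch, not the specific model presented at the workshop:</p>

```python
import numpy as np

# Bimodal reward source: 0 with probability 0.3, 1 with probability 0.7.
rng = np.random.default_rng(2)
rewards = rng.choice([0.0, 1.0], size=30000, p=[0.3, 0.7])

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # per-unit optimism levels
V = np.zeros_like(taus)                     # one value estimate per unit
alpha = 0.01

for r in rewards:
    delta = r - V                           # per-unit prediction errors
    # Optimistic units (high tau) weight positive errors more strongly;
    # pessimistic units (low tau) weight negative errors more strongly.
    V += alpha * np.where(delta > 0, taus, 1.0 - taus) * delta
```

<p>After learning, the estimates fan out across the reward distribution: pessimistic units settle low, optimistic units settle high, and the tau = 0.5 unit tracks the mean (0.7 here).</p>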
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!1tvn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 424w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 848w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1272w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="figure 1" src="https://substackcdn.com/image/fetch/$s_!1tvn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!1tvn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 424w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 848w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1272w, 
https://substackcdn.com/image/fetch/$s_!1tvn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1456w" alt="figure 1" width="685" height="294" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:294,&quot;width&quot;:685,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;figure 1&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" /></picture>
</div>
</figure>
</div>
<p>From Lillicrap et al., 2020 Nat Rev Neurosci (Backpropagation and the brain)</p>
<h2 class="header-anchor-post">Rui Costa: Three Pillars of Brain-Like Intelligence</h2>
<p>Rui presented &#8220;three pillars&#8221; of intelligent systems:</p>
<p><strong>1. World Models (Unsupervised Learning)</strong>: The neocortex builds internal models of the world through self-supervised learning. Recent work suggests that local cortical circuits may have evolved specifically to support this kind of learning, with Layer 2/3 neurons predicting future inputs to compensate for processing delays, while Layer 5 neurons integrate predictions from both the thalamus and the cortex.</p>
<p><strong>2. Model Fine-tuning (Reinforcement Learning)</strong>: Dopamine adjusts prefrontal cortex activity, thereby fine-tuning the world model that guides learning throughout the brain. This goes beyond classical RL formulations; it&#8217;s a sophisticated meta-learning system.</p>
<p><strong>3. Flexible Behavior</strong>: The cerebellum and hippocampus work together as predictive systems, with the cerebellum providing high-dimensional, fast predictions while the hippocampus offers more compressed, memory-based guidance. Remarkably, this work shows that combining a cerebellum-inspired system with a fixed RNN performs better on zero-shot learning tasks than purely plastic networks. This connects directly to our work: we have built a series of models (Aditya Gilra 2018, Ali Hummos 2022, Wei-Long Zheng 2024) that all rely on a similar mechanism of fixed PFC RNN and a fast subcortical modulator to enable flexibility (and maybe generalization). Importantly, the cerebellum communicates with prefrontal cortex through the mediodorsal thalamus, creating a pathway for rapid, predictive learning that doesn&#8217;t require extensive retraining of cortical circuits.</p>
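<p>The shared motif across these models can be caricatured in a few lines: a frozen recurrent network whose computation is switched by a fast, low-dimensional modulatory gain rather than by retraining the weights. All sizes and gain profiles here are invented for illustration:</p>

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # frozen recurrent weights

def run(inputs, gain):
    """Run the fixed network under a per-neuron modulatory gain."""
    h = np.zeros(N)
    for x in inputs:
        h = np.tanh(gain * (W @ h) + x)
    return h

inputs = rng.normal(scale=0.1, size=(20, N))          # one input stream
ctx_a = run(inputs, gain=np.ones(N))                  # context A: uniform gain
ctx_b = run(inputs, gain=np.linspace(0.2, 1.8, N))    # context B: modulated

# Same weights, same inputs, different modulatory state: the network lands
# in a different activity pattern, i.e. it performs a different computation.
```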
<p>What makes Costa&#8217;s framework interesting is its grounding in optimization theory. Rather than describing these systems phenomenologically, he&#8217;s showing how they might emerge from the brain&#8217;s need to solve specific computational problems efficiently. His lab has demonstrated that cortical circuits can approximate deep learning algorithms, that the cerebellum enables rapid adaptation through &#8220;feedback decoupling,&#8221; and that cholinergic modulation implements a kind of attention mechanism for learning.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!vVtH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 6" src="https://substackcdn.com/image/fetch/$s_!vVtH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!vVtH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!vVtH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1 4fde 4f9b 8ed3" width="1456" height="975" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:975,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1484751,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec01089e-a0d3-4d3a-84e8-0f323776ca15_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Rui Ponte Costa (Oxford)</p>
<h2 class="header-anchor-post">Tatiana Engel: Digital Twins and Latent Circuit Models</h2>
<p>Tatiana Engel, from Princeton&#8217;s Neuroscience Institute, presented groundbreaking work that spans two critical areas: the challenges of neural &#8220;digital twins&#8221; and her innovative latent circuit model approach.</p>
<p>Her work on &#8220;digital twins&#8221; (RNNs trained to reproduce neural population dynamics) revealed a fundamental limitation in how we think about brain models. When Engel&#8217;s team trained RNNs to match neural activity patterns, they found that these &#8220;twin&#8221; networks could reproduce the data beautifully. But when they tried to use these twins to predict the effects of neural perturbations, the results were not awesome, to put it mildly. Different twin networks that matched the same data equally well made completely different predictions about how the brain would respond to interventions.</p>
<p>This failure isn&#8217;t just a technical problem. It reveals something deep about the nature of biological intelligence. The brain operates in a low-dimensional space of meaningful solutions, while artificial networks explore the full high-dimensional space of possible solutions. Even when they converge on the same behavior, they&#8217;re often implementing completely different computational strategies.</p>
<p>Engel&#8217;s solution is elegant: instead of training twins to match all neural activity, train them to capture the essential low-dimensional structure that actually matters for computation. This &#8220;latent circuit&#8221; approach trades some descriptive accuracy for genuine predictive power. Her latent circuit model is a dimensionality reduction approach in which task variables interact via low-dimensional recurrent connectivity to produce behavioral output. Unlike traditional correlation-based dimensionality reduction methods, the latent circuit model incorporates recurrent interactions among task variables to implement the computations necessary to solve the task.</p>
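<p>A generative caricature of the idea, with all dimensions, weights, and inputs invented for illustration: a handful of task variables interact through a small recurrent circuit, and the recorded high-dimensional population activity is an embedding of that latent circuit:</p>

```python
import numpy as np

rng = np.random.default_rng(4)
n_latent, n_neurons, T = 3, 100, 50

# Latent recurrent connectivity among task variables; the negative entry
# plays the role of, e.g., context suppressing an irrelevant sensory input.
w_rec = np.array([[0.5, -0.4, 0.0],
                  [0.0,  0.5, 0.0],
                  [0.3,  0.0, 0.5]])
Q, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_latent)))  # embedding

u = np.array([1.0, 0.5, 0.0])       # task inputs (context, stimuli)
x = np.zeros(n_latent)
ys = []
for _ in range(T):
    x = x + 0.1 * (-x + np.tanh(w_rec @ x + u))   # latent circuit dynamics
    ys.append(Q @ x)                              # observed population activity
Y = np.array(ys)                                  # (T, n_neurons) "recordings"
```

<p>Fitting such a model runs this logic in reverse: recover the low-dimensional circuit (the recurrent weights and the embedding) from Y, rather than reproducing Y point by point.</p>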
<p>Crucially, Engel demonstrated that when you constrain RNNs to have fewer neurons, forcing them into lower-dimensional regimes, something remarkable happens: they begin to show more structured, interpretable behavior. However, this improvement in interpretability and biological plausibility comes with a tradeoff—there&#8217;s a reduction in their ability to perfectly match the complex, high-dimensional neural activity patterns. This finding highlights a fundamental tension in computational neuroscience: the most accurate models of neural activity may not be the most interpretable or biologically meaningful ones.</p>
<p>When applied to recurrent neural networks trained on context-dependent decision-making tasks, her latent circuit model revealed a suppression mechanism in which contextual representations inhibit irrelevant sensory responses. Most remarkably, when she applied the same method to prefrontal cortex recordings from monkeys performing the same task, she found similar suppression of irrelevant sensory responses—contrasting sharply with previous analyses using correlation-based methods that had found no such suppression.</p>
<p>The key insight is that dimensionality reduction methods that do not incorporate causal interactions among task variables are biased toward uncovering behaviorally irrelevant representations. Engel&#8217;s work demonstrates that incorporating the recurrent interactions that implement task computations is essential for identifying the neural mechanisms that actually drive behavior.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!Cwf7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 7" src="https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!Cwf7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3 c570 4825 8263" width="1456" height="931" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:931,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1929120,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd84a89f3-4e86-428c-a6b7-b414bbb88a2c_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Tatiana Engel (Princeton)</p>
<h2 class="header-anchor-post">Miller&#8217;s Hybrid Models: Bridging AI and Classical Cognition</h2>
<p>Kevin Miller from DeepMind&#8217;s Neuroscience Lab presented work on hybrid neural-cognitive models that bridge classical cognitive frameworks with modern machine learning. Miller&#8217;s approach combines the interpretability of traditional cognitive models with the predictive power of neural networks, creating systems that can both explain and predict behavior.</p>
<p>His work addresses a fundamental challenge in computational cognitive science: classical cognitive models are interpretable but often limited in their predictive accuracy, while neural networks can achieve high performance but remain black boxes. Miller&#8217;s hybrid RNNs and disentangled architectures attempt to get the best of both worlds, maintaining the transparency needed for scientific understanding while achieving the performance necessary for practical applications.</p>
<p>The implications extend beyond just better models. As Miller noted, there may be an inherent tension between complexity and interpretability that reflects something fundamental about how we communicate and reason about intelligent systems. This connects to broader questions about whether the most accurate models of cognition are necessarily the ones we can understand and explain to others.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!zwJ3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!zwJ3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09 1b92 4b6a 984b" width="3965" height="2510" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2510,&quot;width&quot;:3965,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1977782,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68973274-5d15-4141-803c-549a6d4ebe5c_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 8"></picture>
</div>
</figure>
</div>
<p>Kevin Miller (Google DeepMind)</p>
<h2 class="header-anchor-post">Sen Song: Hierarchical Reasoning Models</h2>
<p>Sen Song presented compelling work on the Sapient project&#8217;s Hierarchical Reasoning Model (HRM), a novel recurrent architecture that challenges conventional approaches to AI reasoning. What makes it particularly striking is the demonstration that recurrent transformers can achieve sophisticated reasoning capabilities that current large language models struggle with.</p>
<p>The HRM operates through two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations. This architecture is directly inspired by hierarchical processing in the brain, where different cortical areas operate at distinct timescales (slow theta waves for high-level planning and fast gamma waves for detailed processing).</p>
<p>What&#8217;s remarkable is the efficiency: with only about 1000 training examples, the HRM (~27M parameters) outperforms much larger Chain-of-Thought models on challenging benchmarks like the Abstraction and Reasoning Corpus (ARC), Sudoku-Extreme, and complex maze navigation tasks. The model solves these tasks directly from inputs without requiring explicit chain-of-thought supervision.</p>
<p>This work suggests something profound about the future of AI architecture. By introducing recurrence back into transformers, we might finally achieve what&#8217;s been missing in current LLMs: spontaneous activity and genuine thought-like processes. As Song noted, the recurrent dynamics could enable the kind of internal mental simulation that characterizes real reasoning, rather than just sophisticated pattern matching.</p>
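<p>To make the two-timescale structure concrete, here is a minimal numerical sketch (not the Sapient implementation; the dimensions, random weights, and step counts are all illustrative) of a fast low-level state iterating under a slowly updating high-level context:</p>

```python
# Two nested recurrent loops in the spirit of the HRM: the slow state h
# advances only after the fast state z has taken several detailed steps.
# Everything here (sizes, weights, step counts) is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)
D = 16                              # state dimension for both modules
W_hh = rng.normal(0, 0.3, (D, D))   # high-level recurrence
W_zz = rng.normal(0, 0.3, (D, D))   # low-level recurrence
W_hz = rng.normal(0, 0.3, (D, D))   # high -> low (sets the working context)
W_zh = rng.normal(0, 0.3, (D, D))   # low -> high (reports the result)

def hrm_rollout(x, n_high=4, n_low=8):
    """Run n_high slow cycles, each containing n_low fast updates."""
    h = np.zeros(D)                 # slow, abstract planning state
    z = np.zeros(D)                 # fast, detailed computation state
    for _ in range(n_high):
        for _ in range(n_low):
            # fast module iterates under a fixed high-level context
            z = np.tanh(W_zz @ z + W_hz @ h + x)
        # slow module updates only once the fast module has settled
        h = np.tanh(W_hh @ h + W_zh @ z)
    return h

out = hrm_rollout(rng.normal(size=D))
```

<p>The nesting is the point: one slow update per batch of fast updates mirrors the slow-planning versus fast-computation split described above.</p>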
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!6mf7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6mf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6mf7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!6mf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29 dfaa 47d3 b725" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2461162,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 9"></picture>
</div>
</figure>
</div>
<p>Sen Song (Tsinghua University) presenting Sapient</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!JywD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JywD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JywD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JywD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 10" src="https://substackcdn.com/image/fetch/$s_!JywD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!JywD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JywD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!JywD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JywD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9 001a 4530 9061" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3261676,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Panel Discussion 1 (left to right: Song, Engel, Ponte Costa, Miller)</p>
<h2 class="header-anchor-post">Dan Levenstein: NeuroAI as Theory Development</h2>
<p>Dan Levenstein&#8217;s work may be among the clearest examples of how AI modeling can generate new theories about core brain functions like memory and planning. Dan presented his work with Blake Richards and Adrien Peyrache, which represents concrete progress in understanding how biological neural networks might actually implement sophisticated learning algorithms. Most significant is Dan&#8217;s recent bioRxiv paper with Peyrache and Richards on &#8220;Sequential predictive learning is a unifying theory for hippocampal representation and replay&#8221;. This work addresses one of the most fundamental questions in neuroscience: how does the hippocampus both form cognitive maps and generate the offline &#8220;replay&#8221; sequences that support memory consolidation and planning?</p>
<p>The breakthrough comes from training recurrent neural networks to predict egocentric sensory inputs as an agent moves through simulated environments. Levenstein and colleagues found that spatially tuned cells emerge from all forms of predictive learning, but offline replay only emerges when networks use recurrent connections and head-direction information to predict multi-step observation sequences. This promotes the formation of a continuous attractor that reflects the geometry of the environment (essentially a neural cognitive map).</p>
<p>What&#8217;s remarkable is that these offline trajectories showed wake-like statistics, autonomously replayed recently experienced locations, and could be directed by a virtual head direction signal. Networks trained to make cyclical predictions of future observation sequences were able to rapidly learn cognitive maps and produced sweeping representations of future positions reminiscent of hippocampal theta sweeps.</p>
<p>This work suggests that hippocampal theta sequences reflect a circuit that implements a data-efficient algorithm for sequential predictive learning. The framework provides a unifying theory that connects spatial representation, memory replay, and theta sequences under a single computational principle: the brain&#8217;s drive to predict future sensory experiences.</p>
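<p>The core ingredient, learning to predict upcoming observations along a trajectory, can be illustrated with a deliberately tiny stand-in (a linear predictor on a ring of positions; the paper&#8217;s models are recurrent networks, and every number here is made up):</p>

```python
# An agent steps around a ring; a linear map learns to predict the next
# observation from the current one via a delta rule. Prediction error falls
# as the structure of the environment is absorbed into the weights.
import numpy as np

n_pos = 40                                    # positions on the ring

def obs(p):
    """Simple 2-D observation of position p on the ring."""
    a = 2 * np.pi * p / n_pos
    return np.array([np.sin(a), np.cos(a)])

W = np.zeros((2, 3))                          # next-observation predictor
lr = 0.1
errors = []
p = 0
for _ in range(4000):
    x = np.append(obs(p), 1.0)                # current obs plus a bias unit
    p_next = (p + 1) % n_pos                  # constant heading: one step on
    err = obs(p_next) - W @ x                 # sequential prediction error
    W += lr * np.outer(err, x)                # delta rule: predict the future
    errors.append(float(err @ err))
    p = p_next

early, late = np.mean(errors[:100]), np.mean(errors[-100:])
```

<p>Because the next observation is a fixed rotation of the current one, the predictor effectively internalizes the ring&#8217;s geometry, a cartoon of the continuous attractor described above.</p>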
<p>What makes this work particularly significant is that it doesn&#8217;t require exotic new mechanisms. It leverages well-known properties of dendrites, synapses, and synaptic plasticity that already exist in cortical circuits. A burst-dependent plasticity rule of this kind essentially allows the brain to implement a form of top-down credit assignment that rivals artificial backpropagation algorithms, but using purely local, biologically plausible mechanisms.</p>
<p>This represents exactly the kind of theory development that NeuroAI enables: using optimization principles and machine learning insights to understand how evolution might have solved fundamental computational problems in neural circuits.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!VPBj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 11" src="https://substackcdn.com/image/fetch/$s_!VPBj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!VPBj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VPBj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5 4207 4563 b6b6" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1756874,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Couldn’t find a picture of Dan, but this slide easily wins the coolest-slide award of the workshop (Dan is starting his lab at Yale in August!)</p>
<h2 class="header-anchor-post">Gaspard Olivier: Predictive Learning Frameworks</h2>
<p>Gaspard Olivier presented work from Rafal Bogacz&#8217;s recent Nature Neuroscience paper &#8220;Inferring neural activity before plasticity as a foundation for learning beyond backpropagation.&#8221; Olivier&#8217;s approach centers on the concept of energy machines (physical mechanical analogies that provide an intuitive understanding of how energy-based networks achieve sophisticated learning).</p>
<p>The key insight from Bogacz&#8217;s work, which Olivier built upon, is the principle of &#8220;prospective configuration.&#8221; Unlike backpropagation, which modifies weights first and then observes the resulting change in neural activity, prospective configuration works in reverse: neural activity changes first to match the desired output, and then synaptic weights are modified to consolidate this prospective activity pattern.</p>
<p>Olivier&#8217;s energy machine framework visualizes this process elegantly. In these mechanical systems, neural activity corresponds to the vertical position of nodes sliding on posts, synaptic connections correspond to rods connecting the nodes, and the energy function corresponds to the elastic potential energy of springs. When the system &#8220;relaxes&#8221; by minimizing energy, it naturally settles into the prospective configuration (the neural activity pattern that the network should produce after learning).</p>
<p>This framework solves a fundamental problem in biological learning: how to implement credit assignment without the precise backward information flow that backpropagation requires. As Olivier demonstrated, the relaxation process in energy-based networks inherently &#8220;foresees&#8221; the effects of potential weight changes and compensates for them dynamically, avoiding the catastrophic interference that plagues backpropagation.</p>
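<p>A minimal numerical sketch of the infer-then-learn ordering (linear layers, a single input&#8211;target pair, and invented sizes and learning rates, chosen only to keep the example short; this is not the paper&#8217;s implementation) looks like this:</p>

```python
# Prospective configuration, toy version: relax the hidden activity z to
# minimize the energy given input x and target y_star, THEN update weights
# locally to consolidate that relaxed (prospective) activity pattern.
import numpy as np

rng = np.random.default_rng(2)
nx, nz, ny = 4, 6, 3
W1 = rng.normal(0, 0.3, (nz, nx))   # input -> hidden
W2 = rng.normal(0, 0.3, (ny, nz))   # hidden -> output
x = rng.normal(size=nx)
x /= np.linalg.norm(x)              # unit-norm input keeps the sketch stable
y_star = rng.normal(size=ny)        # desired output

losses = []
for epoch in range(500):
    # 1) inference first: gradient descent on the energy w.r.t. activity
    z = W1 @ x
    for _ in range(100):
        grad_z = 2 * (z - W1 @ x) - 2 * W2.T @ (y_star - W2 @ z)
        z -= 0.05 * grad_z
    # 2) then learning: purely local updates from layer-wise errors
    eps_z = z - W1 @ x              # hidden-layer prediction error
    eps_y = y_star - W2 @ z         # output-layer prediction error
    W1 += 0.1 * np.outer(eps_z, x)
    W2 += 0.1 * np.outer(eps_y, z)
    losses.append(float(np.sum((y_star - W2 @ (W1 @ x)) ** 2)))
```

<p>Note the ordering: activity settles into the pattern the network <em>should</em> produce, and only then do the weights move, which is the reverse of backpropagation&#8217;s weights-first logic.</p>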
<p>The practical implications are profound. Olivier&#8217;s work shows that energy-based learning can outperform backpropagation in biologically relevant scenarios like online learning, continual learning across multiple tasks, and learning with limited data (precisely the challenges that biological systems face). The energy machine framework provides both the theoretical foundation and the intuitive understanding for why evolution might have favored such learning mechanisms over more direct optimization approaches.</p>
<p>This represents a crucial piece of the credit assignment puzzle, demonstrating how the brain might implement sophisticated learning algorithms through local, energy-based computations that are both biologically plausible and computationally superior to artificial alternatives.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!DBon!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 424w, https://substackcdn.com/image/fetch/$s_!DBon!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 848w, https://substackcdn.com/image/fetch/$s_!DBon!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1272w, https://substackcdn.com/image/fetch/$s_!DBon!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Inferring neural activity before plasticity as a foundation for learning beyond backpropagation | Nature Neuroscience" src="https://substackcdn.com/image/fetch/$s_!DBon!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!DBon!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 424w, https://substackcdn.com/image/fetch/$s_!DBon!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 848w, 
https://substackcdn.com/image/fetch/$s_!DBon!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1272w, https://substackcdn.com/image/fetch/$s_!DBon!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1456w" alt="Inferring neural activity before plasticity as a foundation for learning beyond backpropagation | Nature Neuroscience" width="1456" height="631" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:631,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Inferring neural activity before plasticity as a foundation for learning beyond backpropagation | Nature Neuroscience&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" /></picture>
</div>
</figure>
</div>
<p>Couldn’t find a picture of Gaspard, so here is the key figure from the paper and the framework he discussed</p>
<h2 class="header-anchor-post">Nao Uchida: Distributional Reinforcement Learning</h2>
<p>Nao Uchida&#8217;s presentation began with a fascinating origin story about how DeepMind reached out to him following the groundbreaking success of Deep Q-Networks (DQN). The 2015 Nature paper &#8220;Human-level control through deep reinforcement learning&#8221; had demonstrated that artificial agents could learn to play Atari games directly from pixel inputs, achieving superhuman performance across dozens of games. This breakthrough sparked intense interest in understanding whether similar computational principles might be operating in biological brains.</p>
<p>DeepMind&#8217;s collaboration with Uchida led to the landmark 2020 Nature paper &#8220;A distributional code for value in dopamine-based reinforcement learning&#8221; by Dabney, Kurth-Nelson, Uchida, and colleagues. This work revolutionized our understanding of how the brain represents value and reward. Rather than encoding just the mean expected reward (as traditional reinforcement learning theory suggested), Uchida&#8217;s team discovered that dopamine neurons encode entire probability distributions of future rewards.</p>
<p>The key insight was that different dopamine neurons have different &#8220;expectile codes&#8221;: some neurons are optimistic (responding more to positive prediction errors), others are pessimistic (responding more to negative prediction errors), and still others fall somewhere in between. This diversity in dopamine neuron responses, which had long puzzled neuroscientists, suddenly made computational sense: the brain wasn&#8217;t just learning average rewards, but was representing the full uncertainty and variability of future outcomes.</p>
<p>This distributional approach explains why dopamine neurons show such heterogeneous responses to the same stimuli. Rather than being noise or biological messiness, this diversity serves a crucial computational function. It allows the brain to represent not just &#8220;how much reward am I likely to get?&#8221; but &#8220;what&#8217;s the full range of possible rewards, and how likely is each outcome?&#8221;</p>
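<p>The expectile idea is easy to see in a toy simulation (the asymmetry levels, learning rate, and bimodal reward below are invented for illustration, not taken from the paper):</p>

```python
# A population of value units, each updating with its own asymmetry between
# positive and negative prediction errors: "optimists" weight gains more,
# "pessimists" weight losses more. Rewards are 0 or 10 with equal odds.
import numpy as np

rng = np.random.default_rng(3)
taus = np.linspace(0.1, 0.9, 9)     # per-unit asymmetry (expectile level)
V = np.zeros_like(taus)             # learned value, one entry per unit
lr = 0.02
for _ in range(20000):
    r = 10.0 if rng.random() < 0.5 else 0.0
    delta = r - V                               # prediction error per unit
    scale = np.where(delta > 0, taus, 1 - taus) # asymmetric learning rates
    V += lr * scale * delta
```

<p>Optimistic units settle above the mean reward of 5 and pessimistic ones below it, so the population as a whole carries the shape of the reward distribution rather than a single average.</p>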
<p>Uchida then pivoted to discuss his most recent Nature paper with Alexandre Pouget: &#8220;Multi-timescale reinforcement learning in the brain.&#8221; This work revealed another fundamental aspect of dopamine diversity: different dopamine neurons operate with different discount factors, in the temporal difference (TD) learning sense. Some neurons focus on imminent rewards (steep temporal discounting), while others weigh longer-term consequences (shallow discounting).</p>
<p>This discovery provides a neurobiological foundation for why humans and animals can balance immediate gratification with long-term planning. Instead of having a single, universal discount factor, the brain maintains a population of neurons with different temporal horizons, allowing for more flexible and adaptive decision-making.</p>
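<p>In TD terms, this amounts to maintaining many value estimates of the same reward stream, each discounting the future at its own rate; a tabular toy version (all numbers illustrative, not from the paper):</p>

```python
# Four TD learners share one trial (reward only at the final step), differing
# only in their discount factor gamma. Myopic learners (small gamma) value
# only the moments just before reward; far-sighted ones value the trial's
# start as well.
import numpy as np

gammas = np.array([0.0, 0.5, 0.9, 0.99])
T = 20                                   # trial length; reward at step T-1
V = np.zeros((len(gammas), T))           # one value function per timescale
lr = 0.1
for _ in range(2000):
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0
        v_next = V[:, t + 1] if t < T - 1 else 0.0
        delta = r + gammas * v_next - V[:, t]   # TD error, per timescale
        V[:, t] += lr * delta

start_values = V[:, 0]                   # value assigned at trial onset
```

<p>At the trial&#8217;s start only the far-sighted learners carry appreciable value (the tabular fixed point is gamma raised to the number of steps until reward), so a mixed population covers both immediate and delayed outcomes at once.</p>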
<p>What makes Uchida&#8217;s work particularly compelling in the NeuroAI context is that it represents systems neuroscience at its most computationally rigorous. Rather than simply describing what neurons do, his approach tests specific algorithmic hypotheses about how neural circuits implement learning and decision-making. This is precisely what systems neuroscience in the AI age should be: using computational theories to generate testable predictions about neural mechanisms, then using the results to refine both our understanding of the brain and our artificial intelligence algorithms.</p>
<p>The broader implications are profound: if the brain implements distributional reinforcement learning with multiple timescales, this suggests that current AI systems (which typically use single discount factors and mean-based value representations) are missing crucial computational advantages that biological systems have evolved. Understanding these biological algorithms could lead to more robust, adaptive, and efficient artificial intelligence systems.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!6x9k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6x9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6x9k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!6x9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7 0788 4817 97df" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2720742,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 12"></picture>
</div>
</figure>
</div>
<p>Nao Uchida (Harvard)</p>
<h2 class="header-anchor-post">Looking Forward: Intelligence as Optimization</h2>
<p>What made Florence special wasn&#8217;t just the individual insights, but their convergence around a central theme: machines are becoming our best models for understanding minds not because they copy neural architecture, but because they solve the same optimization problems that evolution has been working on for millions of years.</p>
<p>Perhaps the most profound insight was recognizing a fundamental trade-off that shapes both artificial intelligence and human cognition: the tension between predictive power and interpretability. Recent work has shown that tiny recurrent neural networks, sometimes with just 1-4 units, can outperform much larger networks at predicting animal behavior. Yet the models that best predict behavior are often the hardest to understand, while the models we can understand often predict behavior poorly.</p>
<p>This &#8220;interpretability paradox&#8221; might be fundamental to how minds work. When you explain your decision-making process to a friend, you&#8217;re not giving them access to your neural network weights. You&#8217;re constructing a simplified, interpretable model that captures the essential logic while losing the messy details. Evolution may have equipped us with simple, communicable heuristics precisely because they&#8217;re interpretable, even though more complex processes actually drive our behavior.</p>
<p>Whether it&#8217;s Uchida&#8217;s work revealing how dopamine neurons encode probability distributions of rewards, Dan&#8217;s demonstrations that predictive learning can unify spatial representation and replay, or Costa&#8217;s three-pillar architecture showing how world models emerge from optimization principles, the common thread is that intelligence arises from solving computational problems efficiently under biological constraints.</p>
<p>This represents a profound shift in how we think about the relationship between brains and machines. We&#8217;re not trying to build artificial brains. We&#8217;re trying to understand the computational principles that both biological and artificial systems must discover to be intelligent. Instead of asking &#8220;How does the brain work?&#8221; researchers are asking &#8220;What computational problems does intelligence solve, and what are the optimal solutions?&#8221;</p>
<p>The implications extend beyond academic understanding. If biological systems have evolved superior learning algorithms like distributional reinforcement learning, prospective configuration, or hierarchical reasoning models, then incorporating these insights could lead to more sample-efficient, robust, and adaptable artificial intelligence systems.</p>
<p>As the workshop concluded, the participants seemed to recognize they were grappling with fundamental questions about intelligence that will likely shape the next decade of research. The conversation has only just begun, but the direction is becoming clearer: understanding intelligence requires understanding the optimization problems that both biological and artificial systems must solve.</p>
<p>Organizing this workshop with Sage was one of the most intellectually energizing experiences I&#8217;ve had. It reinforced the idea that if we want to understand intelligence—biological or artificial—we need both neurons and networks, both brains and machines.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!P2Zb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!P2Zb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e fc95 497d 9d00" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3144041,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 13"></picture>
</div>
</figure>
</div>
<p>Panel discussion 2 (left to right): honorary panelist Ken Miller (Columbia), Kevin Miller (DeepMind), Gaspard Oliviers (Oxford), Dan Levenstein (Montreal&#8594;Yale), Nao Uchida (Harvard), Sen Song (Tsinghua).</p>
<div>
<hr />
</div>
<p><em>The OCNS 2025 NeuroAI workshop took place July 9th, 2025 in Florence, Italy. The insights presented here represent the collective wisdom of dozens of researchers pushing the boundaries of our understanding of intelligence.</em></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Two Prefrontal Streams: Evidence for Homology Across Species</title>
		<link>https://michaelhalassa.net/the-two-prefrontal-streams-evidence-for-homology-across-species/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Thu, 12 Sep 2024 03:56:16 +0000</pubDate>
				<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Brain scientist]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[research paper]]></category>
		<category><![CDATA[studies]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=722</guid>

					<description><![CDATA[I have recently had the privilege of writing a book chapter with Bob Vertes and Nicola Palomero-Gallagher on the evolution of the frontal cortex. It was an amazing intellectual journey, where I learned a lot from our interactions. The product was a new hypothesis for how this amazing part of our brain has evolved based [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I have recently had the privilege of writing a book chapter with Bob Vertes and Nicola Palomero-Gallagher on the evolution of the frontal cortex. It was an amazing intellectual journey, where I learned a lot from our interactions. The product was a new hypothesis for how this amazing part of our brain has evolved based on comparative anatomy and function. Please enjoy and reach out with any questions. Any and all input is welcome.</p>
<p><strong>Abstract</strong></p>
<p>The prefrontal cortex (PFC) plays a critical role in human cognition, but the precise mechanisms by which its circuitry accomplishes its proposed functions are unclear. Nonhuman animals are indispensable in revealing such mechanisms, as the ability to monitor and manipulate their circuitry provides necessary insights. A major impediment to linking the growing progress in animal research to insights for human cognition and applications to human health is the lack of consensus on how the PFC is homologous across species. In this perspective, we follow the classification of human PFC into medial and lateral streams, with the medial being primarily evaluative and the lateral being executive. Based on anatomy, physiology and function, we advance the proposal that the rodent medial prefrontal cortex contains elements of both streams, with functional parallels between primate ventromedial and dorsolateral PFC and rodent infralimbic and prelimbic areas, respectively. To support this argument, we highlight the granular nature of the prelimbic cortex in Tupaia belangeri, a basal primate whose PFC macrostructure is rodent-like. Our perspective may help provide additional input to the debate on PFC homology and lead to new testable hypotheses.</p>
<p><strong>Introduction </strong></p>
<p>The prefrontal cortex (PFC) is a complex and highly interconnected region that engages in a wide variety of cognitive functions, including attention, working memory, decision making, and social behavior (Miller and Cohen 2001; Soltani and Koechlin 2022). In the human brain, the PFC has shown great expansion compared to even the closest primate relatives (Preuss and Wise 2022), a process thought to be key to the unparalleled cognitive expansion seen in our species. However, both the principles by which PFC circuits contribute to cognition as well as their origin/emergence are poorly understood.</p>
<p>Nonhuman animal research is poised to help fill this knowledge gap because, in addition to its basic scientific value, it offers important insights into human health given the involvement of PFC dysfunction in several neurological and psychiatric illnesses (Liston et al. 2011; Smucny et al. 2022). Given the mechanistic accessibility afforded by newer monitoring (Tian and Looger 2008; Wu et al. 2022; Xu et al. 2017) and causal tools (Kim et al. 2017; Rabut et al. 2020; Roth 2016), there has been an explosion of PFC animal research over the last decade, focused largely on rodent PFC. Yet despite this progress, it is considerably challenging to translate these advances into insights applicable to understanding the human (and nonhuman primate) PFC given the considerable differences in macro- and microarchitecture. Specifically, while the human PFC has a large number of well-differentiated areas (Haber and Robbins 2022)—von Economo and Koskinas (1925), for example, identified 39 cytoarchitectonically distinct areas on the cortex covering the lateral, medial, and orbital portions of the frontal lobe—the rodent PFC is far less differentiated, thus making homology assignments very challenging.</p>
<p>Here, we follow the general two-stream human PFC classification (Domenech and Koechlin 2015) as a starting point. Specifically, this functional classification suggests that the lateral stream, which is largely composed of the lateral PFC (lPFC), is involved in executive control and rule-based behavior (Friedman and Robbins 2022). In contrast, the medial stream, which is composed of the ventromedial PFC (vmPFC) and dorsomedial PFC (dmPFC), is involved in adjusting behavioral strategies based on reinforcement and self-monitoring (Domenech and Koechlin 2015). According to the definition of Domenech and Koechlin (2015), the lPFC encompasses Brodmann’s (1909) areas 44 and 45, as well as the lateral portion of areas 8, 9, and 10 (although those authors do not mention areas 46 or 47, which are commonly included in the lateral stream). Their vmPFC covers Brodmann’s areas 11, 12, 14, 25, the medial part of 10, the rostral part of 24, and the ventral portion of 32, whereas the dmPFC encompasses the caudal and dorsal parts of 24 and 32, respectively, as well as the medial portion of areas 6, 8, and 9.</p>
<p>We present evidence that the rodent medial prefrontal cortex (mPFC) exhibits homology to both streams. Specifically, our thesis is that the rodent infralimbic cortex (i.e., area IL) is most closely related to the primate vmPFC based on both connectivity and function. On the other hand, the rodent prelimbic cortex (i.e., area PL) exhibits gradients of connectivity that make it a likely precursor of several regions found in the primate PFC. Specifically, the evidence reviewed here supports the view that PL is a precursor of areas belonging to the primate medial and lateral stream regions, such as dmPFC area 32 and dorsolateral PFC (dlPFC) areas 10, 9, and 8. The notion of a single rodent-like precursor of several primate cortical areas is not new and has been utilized to explain evolutionary expansion and differentiation in the sensorimotor system (Kaas 2004). Here, we extend the notion of an evolutionary precursor to prefrontal circuitry, providing a clearer context for relating rodent functional data to primate cognition. Consistent with our proposal, we point to T. belangeri, an evolutionary intermediate whose prelimbic cortex contains an area that is granular, a microcircuit feature that establishes its correspondence to primate dlPFC.</p>
<p><strong>The Prelimbic Cortex As a Precursor of Dorsomedial and Dorsolateral Prefrontal Cortex</strong></p>
<p>The cerebral cortex has undergone significant changes and differentiations throughout evolution, providing space for the development of distinct cortical areas with specialized functions. The evolution of somatomotor control, for example, from simple reflexive movements to highly coordinated and precise voluntary actions, is associated with a significant cortical expansion and segregation as well as neuronal specialization. Indeed, the Bauplan of the brain of opossums resembles that of small-brained placental mammals in all but one aspect: it contains a “somatosensory-motor amalgam,” with a complete overlap of somatosensory representation and motor control maps (Dooley et al. 2014; Karlen and Krubitzer 2007; Wong and Kaas 2009a). Since marsupials diverged from placental mammals around 130 million years ago, Kaas (2004) proposed that this somatosensory-motor amalgam could be considered a “precursor area” of the architectonically distinct sensory and motor areas found in the brains of the latter infraclass. Small placental mammals, including tenrecs (Krubitzer et al. 1997), hedgehogs (Catania et al. 2000), or rats (Haghir et al. 2023), present a distinct primary motor cortex (M1), and in most cases their somatosensory region encompasses four areas: a primary (S1) and a secondary (S2) somatosensory area as well as rostral and caudal somatosensory belt areas. A secondary motor area has also been described in the rat brain, and some of these species present a further somatosensory area located ventrocaudally to S2 (for a comprehensive review, see Kaas 2004). In addition to these two motor and five somatosensory areas, the brain of tree shrews (the closest relatives of primates) presents a rudimentary somatosensory posterior parietal area (Wong and Kaas 2009a). A further differentiation occurs in the brains of small primates such as galagos (Wu and Kaas 2003) and slow lorises (Carlson and Fitzpatrick 1982), which display additional somatosensory areas located in the lateral fissure. In macaque monkeys, but not in marmosets, the caudal somatosensory belt area developed further into areas 1 and 2 (Kaas 2004), and three subfields can be identified within M1 (Rapan et al. 2023). This cortical segregation reaches its apex in humans, where both the motor and somatosensory cortex have expanded significantly in size and complexity to enable finer control of movements, including intricate finger and hand movements as well as the production of speech, and to enhance the individual’s capacity for motor planning and decision making. The gradual changes in cytoarchitecture associated with the phylogenetically related emergence of multiple areas from the marsupial somatosensory-motor amalgam are in line with the “gradation theory” postulated by Sanides (1962) to explain cortical differentiation in the human PFC. Specifically, his systematic analysis revealed that segregation in the human PFC is associated with discontinuous, step-wise changes in cytoarchitectonic features, which follow phylogenetically related cortical expansion not only when moving medio-laterally from allocortical through mesocortical to neocortical areas, but also when moving in a poleward direction throughout the prefrontal neocortex (Sanides 1962). Below, we present both structural and functional evidence in support of the framework that rodent area PL could be considered a precursor of primate dmPFC area 32 and of areas belonging to the primate dlPFC.</p>
<p><strong>Structural Studies</strong></p>
<p>The prelimbic cortex occupies a very large area of the prefrontal cortex in rodents. In rats, PL extends rostro-caudally for about 3 mm, from the anterior pole of the PFC, where it sits above the medial orbital cortex, to a caudal position dorsal to IL (Swanson 2004). While PL has generally been regarded as a single entity, recent evidence leads us to propose that PL may anatomically and functionally consist of two major divisions: a rostrodorsal and a caudoventral division. Specifically, there are notable anatomical differences between these two parts of PL with respect to both their inputs and outputs. For instance, in an early examination of PFC projections to the striatum, Berendse et al. (1992) reported that the dorsal part of PL projected to mid-regions of the dorsal striatum, whereas the ventral part distributed selectively to the nucleus accumbens (ACB), and we could confirm this distinction.</p>
<p>As is well established, the mediodorsal nucleus (MD) of the thalamus is strongly and reciprocally connected with the mPFC. However, the caudoventral PL distributes specifically to the medial segment of MD, whereas the rostrodorsal PL projects selectively to the lateral segment of MD (Groenewegen 1988; Vertes 2004). Taken together, this pattern indicates that the rostrodorsal PL communicates primarily with action/premotor-associated structures and may therefore serve a role in executive control, similar to areas of the primate dlPFC. On the other hand, the caudoventral PL is strongly interconnected with limbic structures and may accordingly be involved primarily in affective behaviors, comparable to those of area 32 of primates. With respect to limbic connections, the caudoventral PL receives pronounced projections from the hippocampus, mainly originating from CA1 and the subiculum of the ventral hippocampus. Thalamic afferents to this division of PL arise predominantly from medial/central regions of the thalamus including MD (as mentioned above), the rostral intralaminar nuclei, and the midline nuclei: the paraventricular, paratenial, rhomboid, and reuniens (RE) nuclei (Hoover and Vertes 2007; Vertes 2004, 2006). Finally, the caudoventral PL receives significant projections from the basal nuclei of the amygdala as well as from monoaminergic (e.g., dopaminergic, noradrenergic, and serotonergic) nuclei of the brainstem. It is well recognized that the monoaminergic nuclei exert pronounced modulatory effects on PL in affective and cognitive functions (Friedman and Robbins 2022).</p>
<p>With some exceptions, the output of caudoventral PL parallels its input (Hoover and Vertes 2007; Vertes 2004). Cortically, this caudoventral PL strongly targets other prefrontal cortical regions, including the medial orbital cortex, the dorsal and ventral agranular insular cortex, the anterior piriform cortex, and the entorhinal cortex. Subcortically, caudoventral PL distributes heavily to (a) the ACB, olfactory tubercle, and claustrum of the basal forebrain; (b) the central and basal nuclei of the amygdala; (c) the MD, intermediodorsal, paraventricular, paratenial, reuniens, and centromedial thalamic nuclei; and (d) the substantia nigra, pars compacta, ventral tegmental area, and dorsal and median raphe nuclei of the midbrain. In summary, the inputs and outputs of the caudoventral PL largely mirror those of area 32 of primates.</p>
<p><strong>Functional Studies</strong></p>
<p>While the debate on the rodent homologue of the dlPFC of primates may never be resolved to everyone’s satisfaction, primates (especially humans) possess abilities that undeniably exceed those of rodents, and this undoubtedly is tied to cortical evolution, including that of the dlPFC. Still, it must be acknowledged that rodents exhibit executive functions that are classically attributed to primate dlPFC. In addition to the anatomical evidence discussed above, behavioral evidence suggests that rostrodorsal PL is a “functional homologue” of primate dlPFC.</p>
<p>Granon and Poucet (2000) were among the first to make this proposal. Specifically, they reviewed evidence showing that alterations of PL in rodents (but not other mPFC regions) produced severe impairments on various spatial and nonspatial delay tasks. This indicated a profound working memory deficit—a hallmark of damage to the dlPFC. The working memory deficits were part of a constellation of cognitive impairments produced by alterations of PL that included attentional deficits. In addition, Granon and Poucet pointed out that rostrodorsal PL is reciprocally connected to the lateral subdivision of the MD, paralleling primate dlPFC projections to the lateral MD (Granon and Poucet 2000). Several other studies described similar reciprocal connections between PL and lateral MD in rodents (Bolkan et al. 2017; Mukherjee et al. 2020; Schmitt et al. 2017; Wolff et al. 2008). Granon and Poucet (2000:235) concluded that “in both species [rodents and primates], the prefrontal cortex, seems to share some common function in those aspects of cognitive processing that, in humans, are usually referred to as executive functions. Within the rat prefrontal cortex, the prelimbic area appears to play a central role in such processes.”</p>
<p>Several subsequent reports have confirmed the role of PL of rodents in working memory and in several additional cognitive functions including attentional processes, set shifting behavior, reversal learning, and decision making (for reviews, see Chudasama 2011; Friedman and Robbins 2022). Specifically, these are all functions that in primates are associated with activation of the dlPFC.</p>
<p>Physiological evidence also supports the idea that the rostrodorsal PL and dlPFC are homologous. Classical work by Fuster, Goldman-Rakic, and others (Funahashi et al. 1993b; Fuster and Alexander 1971) has shown that neurons in the dlPFC exhibit a persistent increase in spike rates in the context of working memory, which has been considered a cellular correlate of this cognitive process (Fuster and Alexander 1971). Newer studies have corroborated these observations, albeit emphasizing a persistent network activity pattern (rather than individual neurons) and perhaps temporally sparser working memory correlates at the level of single neurons (Lundqvist et al. 2016). Consistent with these latter observations, and with the PL homology, multiple studies have found evidence for persistent network activity patterns in the context of working memory tasks. For example, Bolkan et al. (2017) found evidence for a sequential PL activity pattern in the context of a spatial working memory task. Interestingly, this activity pattern was not spatially specific, potentially reflective of the PL’s function in the generation of abstract rules, which are a known attribute of dlPFC. This was corroborated by data from Schmitt et al. (2017), who trained mice on a cross-modal attentional control task where mice selected between visual and auditory target stimuli based on a cue that varied on a trial-by-trial basis. Out of several cortical areas inactivated in the PFC, including orbitofrontal cortex, anterior cingulate cortex, and premotor cortex, only the PL showed a delay-period-specific effect (Wimmer et al. 2015). Recordings from the PL showed a persistent network activity pattern over the delay, in which single neurons exhibited a temporally precise increase in firing rate tiling the delay period (a sequential activity pattern). These network patterns were “rule specific” (Rikhye et al. 2018; Schmitt et al. 2017), consistent with findings from primate dlPFC, which showed the highest proportion of neurons encoding abstract rules in working memory tasks (Wallis et al. 2001). Perhaps the most compelling link to the specificity of these observations to the rostrodorsal PL is the work by Nakajima et al. (2019), which showed that neurons in this particular region project to the dorsal striatum (Figure 3.2a) and exhibit activity patterns consistent with attentional modulation (Figure 3.2b, c).</p>
<p>Lastly, in studying the architectonic subdivisions of the neocortex of the tree shrew, T. belangeri, a close relative of primates, Wong and Kaas (2009a) found that the PL of that species (which they designated as area MF) contained a well-developed layer 4, densely populated with granule cells. This suggests that area PL of rodents, which occupies the same relative position as area MF of tree shrews, dorsally on the medial wall of the PFC, could be the antecedent of the granular frontal cortex of primates. Consistent with this notion, we show comparative sections of this region across rats, Tupaia, and macaques.</p>
<p><strong>Homology between Infralimbic Cortex and vmPFC</strong></p>
<p>Whereas the rodent homologue to the dlPFC of primates remains controversial, there appears to be a general consensus that ventral parts of the mPFC of rodents are anatomically and functionally equivalent to the agranular ventral medial PFC (vmPFC) of primates. More specifically, area IL of rodents appears anatomically homologous to area 25 (A25) of primates.</p>
<p>For instance, the IL of rodents and A25 of primates serve well-recognized roles in autonomic, visceral, and affective functions. IL has been described as a visceromotor cortex. The projections of IL reflect its involvement in visceral/affective functions. Specifically, Vertes (2004) examined IL projections in rats and showed that IL distributes to several sites of the forebrain and brainstem linked to autonomic and affective behavior. These included orbitofrontal cortices, the shell of the nucleus accumbens (sACB), lateral septum, bed nucleus of the stria terminalis (BST), medial and lateral preoptic nuclei, central nucleus of the amygdala, lateral and posterior nuclei of the hypothalamus, and the periaqueductal gray, parabrachial nucleus, and solitary nucleus of the brainstem. Each of these structures has been shown to modulate autonomic/visceral activity, and thus emotional behavior; importantly, as a group, these nuclei receive input almost exclusively from IL and little from PL.</p>
<p>Although fewer reports have examined vmPFC (or A25) projections in primates, A25 projections in the monkey appear to directly parallel those of IL in rodents. Specifically, an early report by Chiba et al. (2001) compared the efferent projections of A25 (IL) and A32 (PL) in the Japanese monkey and showed that the output of A25, like that of IL in rodents, strongly targeted sites involved in autonomic/visceral control, primarily including the sACB, the preoptic area, BST, central nucleus of the amygdala (CeM), and the periaqueductal gray and parabrachial nucleus of the brainstem. They thus concluded that their findings “support the hypothesis that IL is a major cortical autonomic motor area.” Several subsequent examinations of A25 projections in monkeys have similarly demonstrated that A25 prominently distributes to several “visceral-related” subcortical structures of the basal forebrain, amygdala, hypothalamus and brainstem (Barbas et al. 2003; Ghashghaei et al. 2007; Heilbronner et al. 2016; Joyce and Barbas 2018; Rios-Florez et al. 2021; Roberts et al. 2007). Major targets included the ACB, BST, central nucleus of the amygdala, posterior and lateral nuclei of the hypothalamus, periaqueductal gray and parabrachial nucleus.</p>
<p>Barbas et al. (2003) described projections from the mPFC in primates, including A25, to discrete nuclei of the amygdala and hypothalamus that directly distribute to (autonomic) brainstem and spinal cord nuclei which innervate peripheral autonomic sites. This system of connections linked mPFC/A25 with autonomic effector sites in the modulation of visceral functions and emotional behavior. However, in subsequent studies Barbas and colleagues have suggested that the connections of the posterior OFC with the intercalated cell masses of the amygdala more closely resemble those of rodent IL than of primate A25 (Zikopoulos et al. 2017).</p>
<p>In contrast, Heilbronner et al. (2016) compared the projections to the striatum from A25 in macaques and IL in rats. Specifically, they first identified a region of the sACB (termed the “striatal emotion processing network” or EPN) that is conserved across these species. The EPN is a convergence zone of projections from the amygdala and hippocampus to the sACB. Importantly, they showed that both IL and A25 distributed heavily to the striatal EPN, whereas other prefrontal cortical areas (of both species) projected at best weakly to the EPN. They concluded that “consistent with prior literature, the infralimbic cortex and area 25 are likely homologous” (Heilbronner et al. 2016:509). Future studies should perform whole-brain connectivity fingerprints across species for a more comprehensive comparison. However, it should be noted that even if rodent IL and primate A25 show overall similar connectivity patterns, the evolutionary expansion of the PFC may endow primate A25 with unique interregional connectivity patterns and divergent functions.</p>
<p>Recently, Roberts and colleagues (Alexander et al. 2023) comprehensively reviewed the structural and functional properties of the vmPFC across species (rat, monkey, human) and cited evidence showing (a) that the IL of rats and A25 of primates show some functional homology/analogy in the regulation of behavior in the reward domain but not in the punishment domain. Specifically, they showed that A25 overactivation in marmosets blunted Pavlovian approach and motivated responding, comparable to that reported following similar manipulations in rodents. In marked contrast, the same manipulation heightened behavioral and cardiovascular responsivity to both proximal and distal threat, opposite to that reported in rodent IL. This suggests that IL and A25 may act similarly within reward networks but that their roles may have diverged within threat networks, illustrating the complexity of cross-species functional comparisons. Roberts and colleagues also showed (b) that IL/A25 and PL/A32 predominantly serve distinct and separable functions, with A25 mainly involved in cardiovascular and affective functions and A32 in cognitive functions. A cytoarchitectonically informed meta-analysis of functional imaging studies in humans provides further evidence for this functional segregation of A25 and A32 (Palomero-Gallagher et al. 2015). For instance, with respect to differences between A25 and A32, Wallis et al. (2017) demonstrated that inactivation of A25 produced pronounced cardiovascular changes, whereas inactivation of A32 had no cardiovascular effects, and further that A25 and A32 mediated opposite effects on a Pavlovian fear conditioning and extinction paradigm: A25 inactivation decreased fear-elicited behavioral responses, promoting extinction, whereas A32 inactivation enhanced these responses, thereby suppressing extinction.</p>
<p>Lastly, Diehl and Redish (2023) have performed comprehensive recordings across the rat mPFC in the context of a foraging task termed “restaurant row.” This task combines multiple cognitive elements including associative learning, working memory, switching, and value-based judgments. Although they found that all prefrontal areas encode the various relevant task variables, there was clear specialization, with IL encoding more value-related cognitive variables than executive or sensorimotor ones. This is consistent with an earlier report, in which Hardung et al. (2017) examined the neural substrates for response inhibition across areas of the rodent frontal cortex using both optogenetic inactivation and electrophysiological recordings. Strikingly, inactivation of PL and IL had opposite effects on behavior: PL inactivation increased, and IL inactivation decreased, premature responses. Electrophysiological recordings were also consistent with opposing roles for these two subregions, again in line with the idea that PL shares functional homology with the primate lateral stream whereas IL is medial (and evaluative).</p>
<p><strong>Conclusions</strong></p>
<p>Building on the two-stream notion of human (or generally primate) PFC, the collective evidence reviewed in this chapter argues for homology between these streams and the two major divisions of rodent mPFC: PL and IL. The argument implicitly makes a prediction about how the rostrodorsal PL may have disconnected from IL over the course of evolution and subsequently been pushed laterally to form what is currently recognized as the dlPFC of primates. The fact that T. belangeri MF is granular is consistent with this idea. Overall, we hope this synthesis will stimulate further discussion and motivate the design of new experiments to test this hypothesis directly.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
