<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Working memory &#8211; Michael Halassa | Science</title>
	<atom:link href="https://michaelhalassa.net/working-memory/feed/" rel="self" type="application/rss+xml" />
	<link>https://michaelhalassa.net</link>
	<description>Writing on working memory, neuroscience, and cognition by Michael Halassa</description>
	<lastBuildDate>Mon, 01 Sep 2025 12:08:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://michaelhalassa.net/wp-content/uploads/michaelhalassa-net/2024/07/cropped-Michael-Halassa-Logo-32x32.jpg</url>
	<title>Working memory &#8211; Michael Halassa | Science</title>
	<link>https://michaelhalassa.net</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Time is Memory</title>
		<link>https://michaelhalassa.net/time-is-memory/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Mon, 01 Sep 2025 12:08:25 +0000</pubDate>
				<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Working memory]]></category>
		<category><![CDATA[Cognitive Research]]></category>
		<category><![CDATA[Cognitive Science]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Temporal Memory]]></category>
		<category><![CDATA[Time]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=789</guid>

					<description><![CDATA[Michael Halassa discusses how the brain may construct the sense of time from memory and why temporal distortions happen in experience]]></description>
										<content:encoded><![CDATA[<p>Over the past year, I’ve found a new favorite running trail. It winds through woods, follows riverbanks, and slips through an old industrial complex. The scenery shifts constantly, broken into short, distinct segments.</p>
<p>I was surprised to discover that the run takes about an hour, almost exactly the same as my old trail from the year before. The distances are nearly identical too, which makes sense given that my pace hasn’t changed. And yet, the new trail <em>feels</em> much longer. How come?</p>
<p>The old route was simpler. It had three long, straight stretches where I could see the end from the beginning. Easy to remember, easy to chunk. The new one is nothing like that: shorter segments, sharper turns, and ever-changing backdrops. Every few minutes you’re in a completely new setting, never quite sure what’s around the bend.</p>
<p>That difference got me thinking about how we perceive time. We’ve all had those strange distortions: a memory from years ago that feels recent, or something from last week that feels impossibly distant. Time in the brain is slippery.</p>
<p>So how do we actually track it? Is there an internal clock ticking away? Probably not: decades of searching haven’t turned one up. A more likely explanation is that time is tied to how memories are organized and indexed. Let’s dig into what we actually know.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!EcUq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1456w" type="image/webp" sizes="100vw" /><img fetchpriority="high" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!EcUq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 
1272w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f 3907 4569 a2f0" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2934757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 1"></picture>
</div>
</figure>
</div>
<h2 class="header-anchor-post"><strong>How Memory Creates Time</strong></h2>
<p>The first clue comes from studying what happens when we remember. In a clever set of experiments, Olivier Jeunehomme and Arnaud D’Argembeau asked people to wear small automatic cameras while walking around a university campus. The cameras snapped photos every few seconds, creating an objective record of the experience. Later, participants were asked to verbally recall their walks while being audio-recorded.</p>
<p>The campus walks lasted around 40 minutes, but when participants replayed them aloud in memory, the descriptions only took about 5 minutes on average. That is roughly an eightfold compression of time.</p>
<p>The compression, however, was uneven. The researchers compared the recall transcripts to the time-stamped camera sequences and divided the narratives into what they called “experience units.” These were discrete remembered moments, such as buying a coffee, turning into a courtyard, or chatting with a classmate. Each unit was mapped back to the original footage so they could calculate how much real-world time it spanned.</p>
<p>The pattern was striking. Short, bounded activities with a clear goal, like making a purchase or opening a door, tended to be preserved in relatively high detail, replayed at about four to five times compression. In contrast, transitional stretches of locomotion, like walking from one building to the next, were compressed far more, sometimes by a factor of twenty or more. Long, uneventful stretches collapsed into a single unit, while activity-rich episodes retained much finer granularity.</p>
<p>These experience units appear to be the basic building blocks of episodic memory. The density of such units determines how long an episode feels in retrospect. More units per minute of clock time make for a richer memory trace and an expanded sense of duration. Fewer units create a thinner trace and a contracted sense of time.</p>
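The arithmetic behind this density account is simple enough to sketch. The following is a toy illustration only; the unit labels, durations, and compression factors below are invented, not data from the study:

```python
# Hypothetical illustration of temporal compression in recall.
# Each "experience unit" spans some real-world time (seconds) and is
# replayed at some compression factor (real time / recall time).
# All numbers are invented for illustration, not study data.

experience_units = [
    # (label, real_seconds, compression_factor)
    ("buying a coffee",           120,  4),   # goal-directed: low compression
    ("walking to next building",  300, 20),   # transitional: high compression
    ("chatting with a classmate", 180,  5),
]

real_total = sum(sec for _, sec, _ in experience_units)
recall_total = sum(sec / comp for _, sec, comp in experience_units)

print(f"real duration:   {real_total} s")
print(f"recall duration: {recall_total:.0f} s")
print(f"overall compression: {real_total / recall_total:.1f}x")
```

Note how the overall compression is dominated by the transitional stretch: a single low-detail unit can swallow most of the clock time, which is exactly why boundary-poor episodes collapse in memory.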
<p>Follow-up studies have highlighted the special role of event boundaries. Jeunehomme and D’Argembeau found that moments marking a change in context, such as entering a building, turning a corner, or meeting a person, were about five times more likely to be recalled than stretches in between. Boundaries act like bookmarks, segmenting the stream of experience and anchoring the flow of time in memory. These anchors not only determine what is remembered, but also shape how long the remembered experience feels.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!cw5r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!cw5r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9 ec2f 4660 b5d9" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3375647,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 2"></picture>
</div>
</figure>
</div>
<h2 class="header-anchor-post"><strong>The Paradox of Event Boundaries</strong></h2>
<p>Experience units and event boundaries create a fundamental paradox in how we perceive time. Bangert and colleagues (2019, 2020) ran a series of experiments in which participants watched short films of everyday activities while making timing judgments. The films were paused at different points, and participants were asked to estimate whether a brief interval, usually around five seconds, had just passed. The twist was that sometimes the interval contained an event boundary, such as finishing washing dishes and beginning to dry them, and sometimes it did not. Intervals that contained a boundary were consistently judged as shorter than otherwise identical spans without one.</p>
<p>The mechanism behind this compression may become clearer when considering what&#8217;s happening in working memory. Swallow and colleagues (2009) tracked this directly by having participants watch movie clips while objects appeared on screen: a knife during sandwich-making, a towel during dishwashing. Five seconds later, the movie would pause for a recognition test. Objects present at event boundaries were recognized significantly better than those at non-boundaries. But this enhancement came with a cost: memory for objects from just before a boundary dropped dramatically. The boundary created a barrier, making it harder to retrieve information from the previous event even though it had occurred mere seconds earlier.</p>
<p>Event Segmentation Theory, developed by Jeffrey Zacks and colleagues in 2007, provides the framework. According to their theory, event boundaries are when the brain discards its current &#8220;event model&#8221; from working memory and uploads a new one. This updating process requires attention, which leaves fewer resources available for keeping track of time. As Bangert and colleagues (2020) demonstrated using dual-task paradigms, devoting attention to updating perceptual and conceptual features of the activity left fewer attentional resources for accumulating temporal information. It&#8217;s like trying to count seconds while also solving a puzzle &#8211; each boundary forces you to solve a new puzzle, and your counting falters.</p>
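This attentional account can be sketched as a toy model (my own simplification for illustration, not the authors' implementation): treat subjective duration as a pacemaker whose ticks only accumulate in proportion to the attention left over after event-model updating. The update cost and span parameters below are invented:

```python
# Toy attentional-gate timer: subjective duration accumulates only in
# proportion to attention left over after event-model updating.
# Parameters (update_cost, update_span) are invented for illustration.

def subjective_duration(seconds, boundaries, update_cost=0.6, update_span=2):
    """Accumulate subjective time over `seconds` one-second steps.

    `boundaries` lists the times (s) at which an event boundary occurs;
    for `update_span` seconds after each one, a fraction `update_cost`
    of attention is diverted to updating the event model.
    """
    total = 0.0
    for t in range(seconds):
        updating = any(b <= t < b + update_span for b in boundaries)
        attention_to_time = 1.0 - (update_cost if updating else 0.0)
        total += attention_to_time
    return total

smooth = subjective_duration(60, boundaries=[])
eventful = subjective_duration(60, boundaries=[10, 25, 40])
print(smooth, eventful)  # the boundary-rich minute accumulates fewer "ticks"
```

The same 60 seconds of clock time yields a smaller accumulated total when boundaries keep interrupting the timer, matching the finding that boundary-containing intervals are judged shorter in the moment.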
<p>The paradox is that the very same boundaries that compress time during experience expand it in memory. They serve as landmarks that structure recall, making events feel more spacious in retrospect. This dual effect helps explain a familiar puzzle: why the drive home from a new place usually feels longer than the drive there. On the outbound trip, the brain is constantly updating its models: pass the gas station (boundary), turn at the intersection (boundary), merge onto the highway (boundary). Each update reduces attention for tracking duration, so the drive feels shorter while you are in it. Yet those boundaries also create anchors that expand the memory of the trip. On the return drive the route is familiar, there are fewer surprises, and the brain needs fewer updates. With less attention diverted, duration is tracked more faithfully, so the drive feels longer in the moment but compresses more in memory.</p>
<p>Bangert and colleagues (2019) also tested temporal proximity, asking participants to judge how far apart two moments in the film felt. Boundaries made items seem further apart in time, even when the objective duration was identical. In this sense, boundaries insert psychological distance between moments. They stretch the remembered timeline even while compressing the lived experience of duration.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!REqt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!REqt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!REqt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!REqt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824 837f 43ed ad2a" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3109330,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 3"></picture>
</div>
</figure>
</div>
<h2 class="header-anchor-post"><strong>The Implications</strong></h2>
<p>This framework explains a wide range of everyday paradoxes. Vacations, filled with novelty, fly by while they happen but expand richly in memory. Daily routines, stripped of boundaries, drag while we live them but collapse into nothing when recalled. Clewett and Davachi (2017) argued that the ebb and flow of experience itself determines the temporal structure of memory. Lositsky and colleagues (2016) showed that the greater the number and diversity of boundaries, the more time expands in recall.</p>
<p>It explains my running puzzle. My old trail was made up of long, predictable stretches, so it generated relatively few event boundaries. My new trail, by contrast, forced segmentation at every turn: woods to riverbank, riverbank to industrial ruins, sharp corner, sudden hill, unexpected vista. Each transition became a boundary, a new chunk in memory. The clock says both trails take about an hour, but memory disagrees. The old one collapses into a few coarse segments, while the new one expands into a much longer-feeling journey.</p>
<p>The principle is simple: if you want something to feel substantial in memory, add boundaries. Change contexts, vary activities, create moments that require updates. If you want time to flow by quickly, keep it continuous and predictable.</p>
<p>But the implications go deeper than personal experience design. This mechanism may explain why time seems to accelerate as we age. Childhood is packed with firsts, each creating boundaries: first day of school, first sleepover, first kiss. Adult life, especially in stable careers and relationships, can become a series of similar days bleeding into each other. The years feel shorter not because our metabolism changes or because of some cosmic injustice, but because we&#8217;re creating fewer distinct memory segments.</p>
<p>The brain doesn&#8217;t keep time like a clock. It builds time from its internal dynamics. The elasticity of time isn&#8217;t an illusion; it&#8217;s how the mind constructs a temporal dimension from the boundaries of experience.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset"><picture><source srcset="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490 94cf 4fd0 9d0f" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3086532,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 4"></picture>
</div>
</figure>
</div>
<div>
<hr />
</div>
<p><em>If you enjoyed this piece, let me know. I’d love to hear how you’ve experienced time stretching or compressing in your own life. I’ll also be following up with another post that digs into the neural substrates of time perception, exploring how brain circuits generate these distortions.</em></p>
<p><em>If you’d like to read that when it comes out, consider subscribing or sharing this piece with someone who might find it interesting.</em></p>
<div>
<hr />
</div>
<h2 class="header-anchor-post"><strong>Bibliography</strong></h2>
<p>Bangert, A. S., Kurby, C. A., Hughes, A. S., &amp; Carrasco, O. (2019). Crossing event boundaries changes prospective perceptions of temporal length and proximity. <em>Attention, Perception, &amp; Psychophysics</em>, 81(8), 2459-2472.</p>
<p>Block, R. A., &amp; Zakay, D. (1997). Prospective and retrospective duration judgments: A meta-analytic review. <em>Psychonomic Bulletin &amp; Review</em>, 4(2), 184-197.</p>
<p>Clewett, D., &amp; Davachi, L. (2017). The ebb and flow of experience determines the temporal structure of memory. <em>Current Opinion in Behavioral Sciences</em>, 17, 186-193.</p>
<p>Jeunehomme, O., &amp; D&#8217;Argembeau, A. (2020). Event segmentation and the temporal compression of experience in episodic memory. <em>Psychological Research</em>, 84(2), 481-490.</p>
<p>Lositsky, O., Chen, J., Toker, D., Honey, C. J., Shvartsman, M., Poppenk, J. L., &#8230; &amp; Norman, K. A. (2016). Neural pattern change during encoding of a narrative predicts retrospective duration estimates. <em>eLife</em>, 5, e16070.</p>
<p>Swallow, K. M., Zacks, J. M., &amp; Abrams, R. A. (2009). Event boundaries in perception affect memory encoding and updating. <em>Journal of Experimental Psychology: General</em>, 138(2), 236-257.</p>
<p>Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., &amp; Reynolds, J. R. (2007). Event perception: A mind-brain perspective. <em>Psychological Bulletin</em>, 133(2), 273-293.</p>
<p>Zacks, J. M., Kurby, C. A., Eisenberg, M. L., &amp; Haroutunian, N. (2011). Prediction error associated with the perceptual segmentation of naturalistic events. <em>Journal of Cognitive Neuroscience</em>, 23(12), 4057-4066.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Brain&#8217;s &#8220;What If&#8221; Engine: Why Counterfactuals Are Key to Human Intelligence</title>
		<link>https://michaelhalassa.net/counterfactuals-human-intelligence/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Sun, 03 Aug 2025 23:04:06 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Prefrontal cortex]]></category>
		<category><![CDATA[Working memory]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<category><![CDATA[research paper]]></category>
		<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=785</guid>

					<description><![CDATA[Michael Halassa discusses recent work on counterfactual reasoning and its contribution to human cognition]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve always been fascinated by the kinds of thoughts we <em>don&#8217;t</em> act on. In psychiatry, they shape regret, resilience, and rumination. In neuroscience, they reveal a deep truth about how the brain handles uncertainty. Every morning when I&#8217;m running late, I catch myself thinking: &#8220;If only I&#8217;d left five minutes earlier.&#8221; It&#8217;s a fleeting thought, but it represents one of the most computationally sophisticated processes our brains perform: imagining alternative realities that never happened.</p>
<p>Every day, your brain performs millions of &#8220;what if&#8221; calculations without you even noticing. What if I had taken the other route to work? What if I hadn&#8217;t said that in the meeting? What if the ball bounces differently than expected? This capacity for <strong>counterfactual reasoning</strong>, imagining alternative realities that never actually occurred, represents one of the most sophisticated computational achievements of biological intelligence.</p>
<p>A groundbreaking new study published in <em>Nature Human Behaviour</em> by Ramadan, Tang, Watters, and Jazayeri has shed new light on why humans rely on these mentally expensive &#8220;what if&#8221; simulations, revealing computational constraints that force our brains into remarkably clever problem-solving strategies. Their findings illuminate human cognition and change how we understand intelligence itself.</p>
<h2>The Computational Mystery: Why Do We Think in &#8220;What Ifs&#8221;?</h2>
<p>From a purely computational standpoint, counterfactual reasoning seems inefficient. When facing complex decisions, optimal algorithms should simply compute the joint probability of all possible outcomes and pick the best option. So why do humans constantly engage in the seemingly wasteful exercise of imagining alternatives?</p>
<p>The answer, as Ramadan and colleagues discovered, lies in the fundamental constraints that shape how our brains process information. Using an ingenious H-maze task where participants had to track an invisible ball through branching pathways, they uncovered three critical computational bottlenecks that force human cognition into hierarchical and counterfactual strategies:</p>
<p><strong>1. Parallel Processing Bottleneck</strong>: Our brains cannot track all possible trajectories simultaneously. We must break complex problems into sequential, hierarchical steps.</p>
<p><strong>2. Counterfactual Processing Noise</strong>: When we engage in &#8220;what if&#8221; thinking, our working memory introduces noise that degrades the fidelity of these mental simulations.</p>
<p><strong>3. Rational Resource Allocation</strong>: Humans adaptively adjust their reliance on counterfactuals based on how much these mental simulations cost them.</p>
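<p>The interplay of these three bottlenecks can be made concrete with a toy simulation. The sketch below is purely illustrative (not the authors' code, and all names and parameters are hypothetical): an agent tracks a ball through branching choices one at a time, keeps a noise-corrupted counterfactual belief about the unchosen branch, and only revises its estimate when the counterfactual clearly justifies the cost.</p>

```python
import random

# Illustrative sketch of the three bottlenecks (hypothetical, not the
# paper's code): an agent tracking a ball through branching pathways.

def noisy(value, sigma):
    """Working-memory noise degrades stored beliefs (bottleneck 2)."""
    return value + random.gauss(0.0, sigma)

def hierarchical_track(evidence, cf_noise=0.3, revision_cost=0.5):
    """Process branch decisions one at a time (bottleneck 1) while
    keeping a noisy counterfactual belief about the unchosen branch."""
    beliefs = []
    for left_ev, right_ev in evidence:  # sequential, not parallel
        if left_ev >= right_ev:
            factual, counterfactual = left_ev, noisy(right_ev, cf_noise)
        else:
            factual, counterfactual = right_ev, noisy(left_ev, cf_noise)
        # Bottleneck 3: only revise when the (noisy) counterfactual
        # clearly beats the factual belief, i.e. revision has a cost.
        if counterfactual > factual + revision_cost:
            factual = counterfactual
        beliefs.append(factual)
    return beliefs
```

<p>Because the counterfactual channel is noisy and revision is costly, the agent mostly sticks with its factual estimate, which is exactly the kind of "good enough" hierarchical strategy the study describes.</p>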
<h2>A Clever Use of Recurrent Neural Networks to Model the Human Mind</h2>
<p>The research reveals profound insights about intelligence itself. When Ramadan et al. created artificial neural networks and subjected them to the same computational constraints humans face, something remarkable happened: only the networks constrained by all three bottlenecks reproduced human-like behavior.</p>
<p>This finding demonstrates the power of using recurrent neural networks to model human cognition. By constraining artificial networks with the same limitations that shape human thinking, Ramadan et al. created systems that behave remarkably like people. The key insight is that RNNs can capture mental processes like hierarchical and counterfactual reasoning when they face the same computational bottlenecks humans do.</p>
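<p>To make the constrained-RNN idea tangible, here is a minimal sketch (again hypothetical, not the published model): a vanilla recurrent network that processes inputs strictly one step at a time and whose hidden state is corrupted by noise, mimicking the sequential and working-memory bottlenecks described above. All sizes and parameters are illustrative.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

class ConstrainedRNN:
    """A vanilla RNN with an injected working-memory bottleneck.
    Weight scales and the noise level are illustrative choices."""
    def __init__(self, n_in, n_hid, noise=0.1):
        self.Wx = rng.standard_normal((n_hid, n_in)) * 0.1
        self.Wh = rng.standard_normal((n_hid, n_hid)) * 0.1
        self.noise = noise  # counterfactual working-memory noise

    def run(self, inputs):
        h = np.zeros(self.Wh.shape[0])
        for x in inputs:  # one input per step: sequential processing
            h = np.tanh(self.Wx @ x + self.Wh @ h)
            # Noise corrupts the maintained state at every step.
            h = h + self.noise * rng.standard_normal(h.shape)
        return h
```

<p>The point of such a model is not performance but correspondence: when trained under these limitations, a network's behavioral errors can be compared against human ones.</p>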
<h3>Neural Architecture of Counterfactual Reasoning</h3>
<p>The neural implementation of counterfactual reasoning tells a more complex story beyond frontal control. Van Hoeck and colleagues&#8217; landmark fMRI study revealed that counterfactual thinking engages a distributed network that hijacks the brain&#8217;s episodic memory system.</p>
<p>When participants imagined &#8220;upward counterfactuals&#8221; (better outcomes for negative past events), their brains activated the same core memory network used for remembering the past and imagining the future: hippocampus, posterior cingulate, inferior parietal lobule, lateral temporal cortices, and medial prefrontal cortex.</p>
<p>What makes counterfactual reasoning computationally expensive becomes clear in this neural architecture. Counterfactual thinking recruited these memory regions more extensively than episodic past or future thinking, and additionally engaged bilateral inferior parietal lobe and posterior medial frontal cortex.</p>
<p>The extra brain activity reflects just how demanding this kind of mental juggling really is: counterfactual reasoning requires simultaneously maintaining factual and counterfactual representations while actively inhibiting the dominant factual reality.</p>
<p>The brain has evolved specialized circuitry for tracking &#8220;what might have been.&#8221; Boorman and colleagues discovered that lateral frontopolar cortex, dorsomedial frontal cortex, and posteromedial cortex form a dedicated network for encoding counterfactual choice values: tracking not just what happened, but whether alternative options might be worth choosing in the future.</p>
<p>This network operates in parallel to the ventromedial prefrontal system that tracks the value of chosen options, suggesting that the brain maintains separate computational channels for factual and counterfactual value processing.</p>
<p>Perhaps most remarkably, recent work has shown that counterfactual information fundamentally transforms how the brain codes value itself. When counterfactual outcomes are available, medial prefrontal and cingulate cortex shift from absolute to relative value coding.</p>
<p>Think of it this way: losing $10 feels terrible if you could have won $50, but feels great if you could have lost $100. The same neural outcome is processed as positive in a loss context (absence of punishment) but negative in a gain context (absence of reward).</p>
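<p>The arithmetic of relative value coding is simple enough to write down directly. This toy function (the name is mine, not from any of the cited papers) references an outcome to its counterfactual alternative rather than to zero:</p>

```python
# Relative value coding, in one line: the subjective value of an
# outcome is computed against the counterfactual alternative.

def relative_value(outcome, counterfactual):
    return outcome - counterfactual
```

<p>Losing $10 when you could have won $50 yields a relative value of &#8211;60, while the same $10 loss when you could have lost $100 yields +90: identical outcomes, opposite signs, just as the medial prefrontal and cingulate findings suggest.</p>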
<p>This neural flexibility mirrors the adaptive computational strategies revealed in behavioral studies: the brain dynamically reconfigures its representational schemes based on available information and processing constraints.</p>
<p>These findings illuminate why counterfactual reasoning is both computationally expensive and evolutionarily preserved. The enhanced neural demands reflect genuine computational costs: maintaining multiple alternative representations, binding novel scenario elements, and managing conflict between factual and counterfactual worlds. Yet this system enables the kind of flexible, context-sensitive reasoning that allows humans to learn from paths not taken and adapt behavior based on imagined alternatives.</p>
<h2>The Bounded Rationality Renaissance</h2>
<p>These discoveries are part of a broader renaissance in understanding <strong>bounded rationality</strong>, the idea that intelligent behavior emerges not from perfect optimization, but from smart adaptations to computational limitations.</p>
<p>Herbert Simon&#8217;s revolutionary concept of bounded rationality challenged the assumptions of perfect rationality in classical economic theory, proposing instead that individuals &#8220;satisfice&#8221; (seeking good enough solutions rather than optimal ones) due to limitations in computation, time, information, and cognitive resources.</p>
<p>Simon&#8217;s work recognized that &#8220;perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them.&#8221; This insight has profound implications for both understanding human cognition and designing artificial intelligence systems.</p>
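<p>Simon's contrast between optimizing and satisficing can be sketched in a few lines. This is an illustrative toy, not a model from any cited work: the optimizer must evaluate every option, while the satisficer stops at the first option that clears an aspiration level, trading optimality for bounded computation.</p>

```python
# Toy contrast between optimizing and satisficing (illustrative only).

def optimize(options, evaluate):
    """Exhaustive: cost is one evaluation per option, always."""
    return max(options, key=evaluate)

def satisfice(options, evaluate, aspiration):
    """Bounded: stop at the first 'good enough' option."""
    for option in options:
        if evaluate(option) >= aspiration:
            return option
    return options[-1]  # fall back if nothing clears the bar
```

<p>When options are numerous and evaluation is expensive, the satisficer's early stopping is precisely the kind of smart adaptation to limited resources that bounded rationality describes.</p>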
<h3>The Bigger Picture</h3>
<p>The Ramadan study reveals something profound: the cognitive strategies we think of as distinct (hierarchical reasoning, counterfactual thinking, simple optimization) actually lie along a continuum. Human intelligence dynamically shifts between these approaches based on available mental resources and task demands.</p>
<p>This has implications beyond neuroscience. If counterfactual reasoning emerges from computational constraints rather than being hardwired, it suggests these &#8220;what if&#8221; processes might be fundamental to any sufficiently complex intelligence, biological or artificial.</p>
<h2>Clinical Frontiers: When Counterfactuals Break Down</h2>
<p>From a clinical perspective, this research offers new windows into psychiatric and neurological conditions. Counterfactual reasoning depends on integrative networks for affective processing, mental simulation, and cognitive control, systems that are systematically altered in psychiatric illness and neurological disease.</p>
<p>Consider a patient with OCD who gets trapped in endless loops of &#8220;what if I didn&#8217;t check the door?&#8221; or someone with depression whose counterfactual thinking spirals into &#8220;if only I were different, everything would be better.&#8221; Understanding the computational basis of these patterns could lead to more targeted therapeutic approaches.</p>
<p>Patients with schizophrenia show specific deficits in counterfactual reasoning when complex non-factual elements are needed to understand social environments. By mapping how these computational processes break down, we&#8217;re gaining new tools for both diagnosis and treatment.</p>
<h2>The Bottom Line: Constraints as Features</h2>
<p>The story of counterfactual reasoning is a story about the power of constraints. What initially appears to be a computational limitation (our inability to process all information in parallel) turns out to be the very foundation of human cognitive flexibility.</p>
<p>The human brain&#8217;s &#8220;what if&#8221; engine represents an elegant solution that emerges from the interplay between computational constraints and adaptive intelligence. As we stand on the brink of artificial general intelligence, perhaps the secret lies not in building systems that can process everything at once, but in building systems that can gracefully adapt to the fundamental constraints that shape all intelligence.</p>
<p>The future of AI may not lie in eliminating human limitations, but in understanding why those limitations exist and what remarkable capabilities they make possible.</p>
<hr />
<p><em>This convergence of neuroscience, cognitive science, and AI represents a fundamental shift in how we understand intelligence. Rather than seeing computational constraints as problems to solve, we&#8217;re beginning to recognize them as the very features that make flexible, adaptive intelligence possible. The brain&#8217;s &#8220;what if&#8221; engine may be a blueprint for the next generation of truly intelligent machines.</em></p>
<p>The next time you wonder what might have been, remember: that question may be the very core of what makes you human.</p>
<hr />
<h2>Bibliography</h2>
<p>Boorman, E. D., Behrens, T. E., &amp; Rushworth, M. F. (2011). Counterfactual choice and learning in a neural network centered on human lateral frontopolar cortex. <em>PLoS Biology</em>, 9(6), e1001093.</p>
<p>Pischedda, D., Palminteri, S., &amp; Coricelli, G. (2020). The effect of counterfactual information on outcome value coding in medial prefrontal and cingulate cortex: From an absolute to a relative neural code. <em>Journal of Neuroscience</em>, 40(16), 3268-3277.</p>
<p>Ramadan, M., Tang, C., Watters, N., &amp; Jazayeri, M. (2025). Computational basis of hierarchical and counterfactual information processing. <em>Nature Human Behaviour</em>. doi:10.1038/s41562-025-02232-3.</p>
<p>Simon, H. A. (1955). A behavioral model of rational choice. <em>Quarterly Journal of Economics</em>, 69(1), 99-118.</p>
<p>Van Hoeck, N., Ma, N., Ampe, L., Baetens, K., Vandekerckhove, M., &amp; Van Overwalle, F. (2013). Counterfactual thinking: An fMRI study on changing the past for a better future. <em>Social Cognitive and Affective Neuroscience</em>, 8(5), 556-564.</p>
<p>Van Hoeck, N., Watson, P. D., &amp; Barbey, A. K. (2015). Cognitive neuroscience of human counterfactual reasoning. <em>Frontiers in Human Neuroscience</em>, 9, 420.</p>
<p>Zador, A., Escola, S., Richards, B., et al. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. <em>Nature Communications</em>, 14, 1597.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>PAPER ALERT: A New Model for Mediodorsal-Prefrontal Interactions</title>
		<link>https://michaelhalassa.net/paper-alert-a-new-model-for-mediodorsal-prefrontal-interactions/</link>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Sun, 13 Apr 2025 16:23:54 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Mediodorsal thalamus]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Prefrontal cortex]]></category>
		<category><![CDATA[Schizophrenia research]]></category>
		<category><![CDATA[Thalamocortical circuits]]></category>
		<category><![CDATA[Working memory]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=749</guid>

					<description><![CDATA[A new computational model published in Nature Communications reveals how the mediodorsal thalamus gates prefrontal cortex signals. Validated with Halassa Lab data, this advances schizophrenia and cognitive flexibility research.]]></description>
										<content:encoded><![CDATA[<p style="font-weight: 400">In our ongoing quest to understand how the brain enables flexible cognition, the mediodorsal (MD) thalamus and its dialogue with the prefrontal cortex (PFC) have emerged as central players. Following a series of modeling papers from our lab—including Wei-Long Zheng’s recent <em>Nature Communications</em> work on thalamocortical inference—we now have another exciting advance to share. A new study led by <strong>Sage Chen’s lab at NYU</strong> and published in <em>Nature Communications</em> proposes a <strong>computational model of MD-PFC interactions</strong>, offering fresh insights into how these circuits support adaptive decision-making.</p>
<p style="font-weight: 400">This collaborative work is driven by a burning question: <em>Why is the brain wired this way?</em> Why does the thalamus, nestled deep in the forebrain and reciprocally connected to cortex, play such a critical role in cognition? Our empirical work over the past decade has dissected thalamocortical dynamics in behaving animals, and our computational work, including critical collaborations, has helped us formalize these findings into testable frameworks. Sage’s new paper is a natural extension of this synergy, and with empirical support from our lab (spearheaded by <strong>postdoc Arghya Mukherjee</strong>), it opens new doors for exploration.</p>
<h2 style="font-weight: 400"><strong>Key Advances in the New Model</strong></h2>
<ol style="font-weight: 400">
<li><strong>The MD Thalamus as a Dynamic Router</strong><br />
The study presents the MD thalamus not just as a passive relay, but as an <strong>active switchboard</strong> that flexibly gates information to the PFC based on task demands. This aligns with our lab’s empirical observations that thalamic neurons selectively amplify sensory inputs or internal signals depending on behavioral context.</li>
<li><strong>Task-Dependent Cortical Prioritization</strong><br />
The model captures how the MD thalamus <strong>biases PFC representations</strong>—for example, emphasizing sensory cues during perceptual decisions versus memory traces during recall. This mirrors findings from our 2018 (<em>Rikhye, Gilra &amp; Halassa</em>) and 2022 (<em>Hummos et al.</em>) models, where thalamic input helped partition PFC activity to avoid interference across tasks.</li>
<li><strong>Bridging Theory and Experiment</strong><br />
Crucially, the model’s predictions were tested with <em>in vivo</em> data from our lab, reinforcing its biological plausibility. This back-and-forth between modeling and physiology is a hallmark of our approach, exemplified in Wei-Long Zheng’s 2024 study, where a thalamocortical RNN outperformed standard models in rapid inference tasks.</li>
</ol>
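<p>The &#8220;dynamic router&#8221; idea in point 1 can be sketched in a few lines. This is a hypothetical illustration, not the published model: a thalamic gate vector multiplicatively selects which input channels reach PFC depending on task context, so the same cortical circuit prioritizes sensory cues during perception and memory traces during recall.</p>

```python
import numpy as np

# Hypothetical sketch of MD thalamus as a multiplicative gate on
# PFC inputs; channel layout and contexts are illustrative.

def md_gate(context, n_channels):
    """Pass sensory channels in a 'perceive' context and
    memory channels in a 'recall' context."""
    gate = np.zeros(n_channels)
    half = n_channels // 2
    if context == "perceive":
        gate[:half] = 1.0  # sensory channels
    else:
        gate[half:] = 1.0  # memory channels
    return gate

def pfc_input(sensory, memory, context):
    x = np.concatenate([sensory, memory])
    return md_gate(context, x.size) * x  # multiplicative thalamic gating
```

<p>Multiplicative gating of this kind is one simple way a thalamic signal could partition cortical activity across tasks without rewiring the cortex itself.</p>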
<h2 style="font-weight: 400"><strong>How This Drives Our Empirical Work Forward</strong></h2>
<ol style="font-weight: 400">
<li><strong>New Experiments to Test Gating Mechanisms</strong><br />
The model proposes specific thalamocortical connectivity rules for information routing. We’re now designing experiments to probe these mechanisms using <strong>optogenetics, electrophysiology, and imaging</strong>—asking how MD neurons dynamically recruit PFC microcircuits during task switching.</li>
<li><strong>Linking to Schizophrenia-Relevant Dysfunction</strong><br />
Disrupted thalamocortical gating is implicated in schizophrenia. By refining Sage’s model with disease-relevant perturbations (e.g., thalamic silencing), we aim to pinpoint how maladaptive routing contributes to cognitive inflexibility.</li>
<li><strong>The Next Generation of NeuroAI Models</strong><br />
Just as Wei-Long’s hybrid RNN incorporated biological constraints (e.g., thalamic reticular inhibition), future iterations of Sage’s model could integrate our latest empirical data—creating a virtuous cycle between theory and experiment.</li>
</ol>
<h2 style="font-weight: 400"><strong>The Bigger Picture: A Decade of Thalamocortical Modeling</strong></h2>
<p style="font-weight: 400">This paper is the latest in a line of collaborative efforts to formalize MD-PFC interactions:</p>
<ul style="font-weight: 400">
<li><strong>Rikhye, Gilra &amp; Halassa (2018)</strong>: Showed thalamus mitigates &#8220;catastrophic forgetting&#8221; in PFC.</li>
<li><strong>Hummos et al. (2022)</strong>: Derived a cortico-thalamic learning rule that compresses task context.</li>
<li><strong>Zheng et al. (2024)</strong>: Demonstrated thalamus enables rapid inference and multi-task performance.</li>
<li><strong>Zhang, X. et al. (2025)</strong>: Extends this to hierarchical reasoning and handling multiple forms of uncertainty.</li>
</ul>
<p style="font-weight: 400">Together, these studies underscore the thalamus’s role as a <strong>locus of cognitive flexibility</strong>—a theme Sage’s work now extends with elegant mechanistic detail.</p>
<p style="font-weight: 400"><strong>Looking Ahead</strong></p>
<p style="font-weight: 400">As NeuroAI gains momentum (evidenced by the 2024 Physics Nobel for foundational neural network work), our lab remains committed to <strong>grounding computational advances in biological reality</strong>. Sage’s model not only validates our empirical findings but also charts a course for future work—one where theory and experiment co-evolve to unravel the thalamus’s secrets.</p>
<p style="font-weight: 400">For those who missed it, revisit our blog on Wei-Long Zheng’s paper here, and stay tuned as we put these models to the test!</p>
<p style="font-weight: 400">Reference:</p>
<p style="font-weight: 400">Zhang, X., Mukherjee, A., Halassa, M. M., &amp; Chen, Z. S. (2025). Mediodorsal thalamus regulates task uncertainty to enable cognitive flexibility. <em>Nature Communications</em>, 16(1), 2640. <a href="https://doi.org/10.1038/s41467-025-58011-1" target="_blank" rel="noopener">https://doi.org/10.1038/s41467-025-58011-1</a></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
