<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Michael Halassa | Science</title>
	<atom:link href="https://michaelhalassa.net/feed/" rel="self" type="application/rss+xml" />
	<link>https://michaelhalassa.net</link>
	<description>Just another Darin Hardy Sites site</description>
	<lastBuildDate>Tue, 12 May 2026 00:18:30 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://michaelhalassa.net/wp-content/uploads/michaelhalassa-net/2024/07/cropped-Michael-Halassa-Logo-32x32.jpg</url>
	<title>Michael Halassa | Science</title>
	<link>https://michaelhalassa.net</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Quiet War Between Abstraction and Detail</title>
		<link>https://michaelhalassa.net/the-quiet-war-between-abstraction-and-detail/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 22:31:52 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Cognitive Processing]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[brain]]></category>
		<category><![CDATA[Cognitive Science]]></category>
		<category><![CDATA[neurons]]></category>
		<category><![CDATA[RNN]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=798</guid>

					<description><![CDATA[https://michaelhalassa.substack.com/p/the-quiet-war-between-abstraction Every time you walk into a new restaurant, your brain solves an invisible problem: Which parts of this experience are specific to this place (the menu, the layout, the staff) and which parts are general rules you can transfer (how to order, where to sit, when to pay)? Extract too much shared structure and you’ll confidently [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="main" class="main typography use-theme-bg">
<div>
<div class="single-post-container" role="main" aria-label="Post">
<div class="container">
<div class="single-post">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<article class="typography newsletter-post post">
<div>
<div class="available-content">
<div class="body markup" dir="auto">
<p>Every time you walk into a new restaurant, your brain solves an invisible problem: Which parts of this experience are specific to <em>this place</em> (the menu, the layout, the staff) and which parts are general rules you can transfer (how to order, where to sit, when to pay)? Extract too much shared structure and you’ll confidently walk to the wrong counter. Protect too many specific details and you’ll fumble through every new restaurant like it’s your first.</p>
<p>This balance, between learning patterns that transfer and preserving details that matter, appears fundamental to how we navigate a world where some things repeat (traffic patterns, social norms, the physics of thrown objects) and some things are unique (this particular intersection floods, this friend needs space when upset, this specific mushroom will kill you).</p>
<p>The capacity for structural learning, extracting regularities that apply across situations, may be one of the defining features of human cognition. When a child learns that adding “ed” creates past tense, they’ve discovered a rule that applies to thousands of verbs they’ve never encountered. When you recognize that a new coworker uses the same conflict-avoidant communication style as your sibling, you’ve extracted a pattern that predicts future interactions. The alternative, learning every situation as a unique instance requiring its own solution, would be computationally intractable.</p>
<p>But structural learning creates a fundamental tension. The same cognitive machinery that lets you rapidly transfer knowledge to new situations may overwrite what you learned before. Learn French after Spanish and you might start mixing verb conjugations. Update your mental model of how your boss communicates and you might misremember what they actually said last month. The brain must somehow balance the benefits of generalization against the risk of interference.</p>
<h2 class="header-anchor-post">A Task That Makes Strategies Visible</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§a-task-that-makes-strategies-visible" class="pencraft pc-reset header-anchor offset-top"></div>
</div>
</div>
<p>A new paper in <a href="https://www.nature.com/articles/s41562-025-02318-y" rel="noopener" target="_blank">Nature Human Behaviour</a> by Eleanor Holton, Chris Summerfield, and colleagues has developed an elegant way to observe these competing strategies in action. The design is deceptively simple: participants learned to map fictional plants to locations on a circular planet, with separate locations for summer and winter. The key structural feature is that each set of plants obeyed a consistent angular rule: winter locations were always a fixed offset from summer locations (say, 120 degrees clockwise).</p>
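<p>To make the structure concrete, here is a toy sketch of the mapping in Python. The names, offset, and seed are illustrative choices of mine, not the study's actual stimuli or parameters:</p>

```python
import random

def make_task(n_plants=6, offset_deg=120, seed=0):
    """Toy version of the task: each plant gets a random summer location
    on the circle; its winter location is the summer location rotated by
    one fixed angular offset shared by every plant in the set."""
    rng = random.Random(seed)
    task = {}
    for plant in range(n_plants):
        summer = rng.randrange(360)
        task[f"plant_{plant}"] = {
            "summer": summer,
            "winter": (summer + offset_deg) % 360,
        }
    return task

task_a = make_task(offset_deg=120)
# Every plant's locations differ, but all obey the same 120-degree rule:
assert all((p["winter"] - p["summer"]) % 360 == 120 for p in task_a.values())
```

<p>Each plant's summer location is arbitrary and must be memorized; the summer-to-winter relationship is the shared structure a learner can extract and transfer.</p>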
<div class="captioned-image-container">
<figure>
<img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!CR4L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F18f7458d-736c-4181-895f-27288dc60711_764x380.png" alt="The Quiet War Between Abstraction and Detail 8" width="764" height="380" />
</figure>
</div>
<p>Participants first learned one set of six plants (Task A) over ten repetitions.</p>
<div class="captioned-image-container">
<figure>
<img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!EscE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cf15183-5eee-48d0-9bf5-2f72fe67394a_409x402.png" alt="The Quiet War Between Abstraction and Detail 9" width="409" height="402" />
</figure>
</div>
<p>Then they encountered six entirely new plants (Task B) with their own summer-winter rule, either identical to Task A, shifted by 30 degrees, or rotated 180 degrees.</p>
<div class="captioned-image-container">
<figure>
<img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!namp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06ba50c7-4a02-4a22-8404-8c1b94299982_577x384.png" alt="The Quiet War Between Abstraction and Detail 10" width="577" height="384" />
</figure>
</div>
<p>After learning Task B, they returned to the original Task A plants, but this time received feedback only for summer locations. This allowed the team to observe what rule participants spontaneously applied when retested.</p>
<p>The experimental design cleverly separates two phenomena that usually travel together. Transfer refers to how much knowledge of rule A accelerates learning of rule B. If you immediately apply the 120-degree rule to new plants, your initial performance on Task B should be good (when the rule is similar) or systematically biased (when the rule differs). Interference refers to whether learning rule B corrupts your memory of rule A. When retested on the original plants, do you still apply rule A correctly, or do you now mistakenly use rule B?</p>
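<p>One way to see how these two quantities could be read out from behavior, in my own illustrative formalization in terms of angular errors (not the paper's exact analysis code):</p>

```python
def angular_error(response_deg, target_deg):
    """Smallest signed angular difference in degrees, in (-180, 180]."""
    d = (response_deg - target_deg) % 360
    return d - 360 if d > 180 else d

def rule_bias(responses, targets_old_rule, targets_new_rule):
    """Compare mean absolute error of responses against the predictions
    of two candidate rules. Negative values mean responses sit closer to
    the old rule; positive values mean they sit closer to the new one."""
    n = len(responses)
    err_old = sum(abs(angular_error(r, t))
                  for r, t in zip(responses, targets_old_rule)) / n
    err_new = sum(abs(angular_error(r, t))
                  for r, t in zip(responses, targets_new_rule)) / n
    return err_old - err_new

# Early Task B responses that simply reapply rule A score negative
# (transfer); at the Task A retest, the same statistic with the roles
# swapped indexes interference from the newly learned rule B.
assert rule_bias([100, 200], [100, 200], [130, 230]) == -30
```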
<div class="captioned-image-container">
<figure>
<img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!9pIa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11374251-f4b9-4a70-a840-7098e5ab1024_1101x339.png" alt="The Quiet War Between Abstraction and Detail 11" width="1101" height="339" />
</figure>
</div>
<p>In most learning paradigms, these processes are confounded. Here, the structure makes different algorithmic strategies visible through their distinct signatures across transfer and interference conditions.</p>
<h2 class="header-anchor-post">Heterogeneity Hidden Beneath Averaged Performance</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§heterogeneity-hidden-beneath-averaged-performance" class="pencraft pc-reset header-anchor offset-top"></div>
</div>
</div>
<p>There are many interesting things about this paper (and some that should be taken with a grain of salt, which I will point out in due course), but one big takeaway is how clearly this work reveals individual differences in strategy choice.</p>
<p>If you average across all participants (as many analyses routinely do), behavior would almost certainly appear relatively uniform: people learn both tasks to high accuracy, and mean performance in Task B looks similar across individuals.</p>
<div class="captioned-image-container">
<figure>
<img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!8NBv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe08ee8d4-1977-45c3-aafb-6d477b96eb02_567x715.png" alt="The Quiet War Between Abstraction and Detail 12" width="567" height="715" />
</figure>
</div>
<p>But beneath this averaged regularity lies what appears to be substantial heterogeneity in <em>how</em> people solve the task. This becomes most visible when the rules are similar enough to create genuine ambiguity (the 30-degree shift condition, though patterns may exist in the far condition too). Participants in the near condition seem to split into two distinct behavioral profiles:</p>
<div class="captioned-image-container">
<figure>
<img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!iYtD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ef51e39-8763-4767-b8de-7857616a2353_609x749.png" alt="The Quiet War Between Abstraction and Detail 13" width="609" height="749" />
</figure>
</div>
<p>Some participants (called “lumpers”) showed high transfer but high interference. They immediately benefited from rule A when learning rule B, suggesting they were applying the previous rule to new stimuli. But when retested on task A, they were more likely to apply rule B, apparently overwriting their memory of the original rule.</p>
<p>Other participants (“splitters”) showed the opposite pattern: low transfer but low interference. They treated task B as effectively novel, gaining little immediate advantage from prior learning, and made many more mistakes upon transitioning (higher switching cost). But they maintained rule A better at retest, showing no contamination from the recently learned rule B.</p>
<p>This dissociation extended across multiple behavioral measures beyond the primary transfer and interference metrics. Lumpers appeared to generalize the rule better to held-out stimuli they’d never received feedback on, applying the angular relationship to infer correct responses. But they performed worse on memory for unique stimulus features, specifically the summer locations that had to be memorized rather than inferred from a rule. Splitters showed what looks like the mirror pattern: better memory for unique features, poorer rule-based generalization.</p>
<p>The groups even differed on explicit temporal memory tested at the end of the experiment. Splitters were better at reporting which half of the study they’d first encountered each stimulus in, as if they maintained more distinct representations of the two task contexts.</p>
<p>These differences appear to reflect distinct computational strategies rather than differences in overall ability. Splitters actually outperformed lumpers on some measures (the unique feature memory). Both groups achieved high final accuracy. These look like fundamentally different approaches to solving the same problem, each with complementary strengths and opposite vulnerability profiles.</p>
<p>An important unresolved question is whether these represent stable individual traits or context-dependent strategies. Does someone who lumps in this task lump everywhere? Or do people flexibly shift between strategies based on task structure, recent experience, or environmental statistics? The study design can’t answer this, and it’s a genuinely significant open issue for understanding what these behavioral patterns mean.</p>
<h2 class="header-anchor-post">Formalizing the Trade-off Through Neural Networks</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§formalizing-the-trade-off-through-neural-networks" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>One of the interesting aspects of this work (and <a href="https://www.psy.ox.ac.uk/people/christopher-summerfield" rel="noopener" target="_blank">Summerfield’s work</a> more generally) is the use of AI architectures to gain insight into the human mind (and sometimes the brain too).</p>
<p>In this study, the authors used a surprisingly simple neural architecture, a two-layer feedforward network, to draw inferences about individual differences in cognitive strategy. By manipulating the scale of the initial weights, they could guide networks toward “rich” (small initial weights, leading to structured, low-dimensional representations) or “lazy” (large initial weights, leading to high-dimensional, task-agnostic representations) learning regimes.</p>
<p>Networks in the rich regime captured what looks like the lumper behavioral profile: high transfer to Task B, high interference when retested on Task A, strong rule generalization to held-out stimuli, poor unique feature memory, and collapsed representations of the two tasks in neural space (measured via principal angles between task subspaces).</p>
<p>Networks in the lazy regime mirrored what appears to be the splitter pattern on every measure: low transfer, low interference, poor generalization, better unique feature memory, and maintenance of orthogonal representations for the two tasks.</p>
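<p>The “principal angles” measure is worth unpacking, since it is how collapsed versus orthogonal task representations are quantified. A minimal sketch of the standard computation (my own toy data, not the study’s analysis code): orthonormalize a basis for each task’s activity subspace, then read the angles off the singular values of the product of the two bases.</p>

```python
# Principal angles between two subspaces (toy illustration, invented data):
# QR-orthonormalize each basis, then the singular values of Qa^T Qb are the
# cosines of the principal angles. Angles near 0 = collapsed/shared
# representations; angles near 90 = orthogonal/separated representations.
import numpy as np

def principal_angles(A, B):
    """A, B: (units x dims) matrices whose columns span each task's subspace."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s, -1.0, 1.0)))

rng = np.random.default_rng(1)
shared = rng.standard_normal((50, 3))          # 3-dim subspace of 50 "units"
# Nearly identical subspaces -> small angles ("lumper-like" geometry)
collapsed = principal_angles(shared, shared + 0.01 * rng.standard_normal((50, 3)))
# Independent random subspaces -> large angles ("splitter-like" geometry)
orthogonal = principal_angles(rng.standard_normal((50, 3)),
                              rng.standard_normal((50, 3)))
print("near-identical subspaces (deg):", np.round(collapsed, 1))
print("random subspaces (deg):", np.round(orthogonal, 1))
```

<p>Applied to hidden-layer activity, this gives a single geometric readout of whether a network (or, in principle, a neural population) keeps its two tasks in one shared subspace or in separate ones.</p>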
<p>This computational modeling makes the underlying trade-off explicit. Low-dimensional structured representations might enable efficient generalization by extracting shared rules, but precisely because representations are shared across tasks, new learning could overwrite old knowledge. High-dimensional representations might maintain separability between tasks, protecting against interference, but at the potential cost of failing to extract transferable structure.</p>
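<p>To make the rich/lazy manipulation concrete, here is a minimal sketch of the general idea (my own toy construction, not the authors’ code; the regression task, layer sizes, and learning rate are all invented for illustration). Only the initial weight scale differs between the two runs:</p>

```python
# Toy sketch: a two-layer network pushed toward "rich" (small init) or
# "lazy" (large init) learning by changing only the initial weight scale,
# then probed for interference on task A after training on a related task B.
import numpy as np

rng = np.random.default_rng(0)

def train(X, Y, W1, W2, lr=0.05, epochs=2000):
    """Full-batch gradient descent on mean squared error."""
    for _ in range(epochs):
        H = np.tanh(X @ W1)                  # hidden layer
        P = H @ W2                           # linear readout
        dP = (P - Y) / len(X)                # gradient of 0.5 * MSE w.r.t. P
        W2 -= lr * H.T @ dP
        W1 -= lr * X.T @ (dP @ W2.T * (1 - H ** 2))
    return W1, W2

def interference(init_scale):
    X = rng.standard_normal((200, 8))
    M_A = rng.standard_normal((8, 2))              # task A's input-output rule
    M_B = M_A + 0.3 * rng.standard_normal((8, 2))  # a "near" variant: task B
    W1 = init_scale * rng.standard_normal((8, 32)) / np.sqrt(8)
    W2 = init_scale * rng.standard_normal((32, 2)) / np.sqrt(32)
    W1, W2 = train(X, X @ M_A, W1, W2)             # learn task A
    err_before = np.mean((np.tanh(X @ W1) @ W2 - X @ M_A) ** 2)
    W1, W2 = train(X, X @ M_B, W1, W2)             # then learn task B
    err_after = np.mean((np.tanh(X @ W1) @ W2 - X @ M_A) ** 2)
    return err_after - err_before                  # rise in task-A error

rich_interf = interference(init_scale=0.1)         # "rich" regime
lazy_interf = interference(init_scale=3.0)         # "lazy" regime
print(f"rich-regime interference on task A: {rich_interf:.4f}")
print(f"lazy-regime interference on task A: {lazy_interf:.4f}")
```

<p>The point is not these particular numbers but the manipulation itself: identical architecture, data, and training procedure, with the initial weight scale alone nudging the network toward shared versus separated solutions.</p>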
<div class="captioned-image-container">
<figure>
<div class="image2-inset can-restack">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!syrf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 424w, https://substackcdn.com/image/fetch/$s_!syrf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 848w, https://substackcdn.com/image/fetch/$s_!syrf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 1272w, https://substackcdn.com/image/fetch/$s_!syrf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!syrf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!syrf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 424w, https://substackcdn.com/image/fetch/$s_!syrf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 848w, https://substackcdn.com/image/fetch/$s_!syrf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 1272w, 
https://substackcdn.com/image/fetch/$s_!syrf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846 c90e 4efa 8866" width="1290" height="739" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:739,&quot;width&quot;:1290,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:292792,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/178980901?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faec94846-c90e-4efa-8866-6ee6903159e5_1290x739.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="The Quiet War Between Abstraction and Detail 14"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<p>The value of this modeling approach is that it formalizes what could otherwise remain a vague intuition about competing strategies. The networks make testable predictions: if someone shows high interference, they should also show better generalization within tasks. If they maintain orthogonal task representations, they should show poor transfer but preserved memory.</p>
<p>But we need to be careful about how far we extend the analogy. Having interacted with many humans, I can confirm that none is a two-layer neural network trained via gradient descent on a single task. How people implement rapid learning likely involves multiple interacting brain systems: prefrontal cortex implementing gating policies that determine when to update versus maintain representations, hippocampus providing rapid contextualization that could separate task episodes, and thalamocortical circuits routing information through different processing channels based on task demands. Our behavior, as individuals, probably emerges from how these systems interact and are weighted together, something like a mixture of experts in which different computational solutions contribute to the final output. The rich and lazy networks are also unlikely to describe individual neural systems wholesale; instead, they demonstrate a computational principle that illustrates the trade-off at work (although who knows, maybe each of our brains has a mixture of these principles in play).</p>
<p>The neural network analysis also reveals something interesting about task similarity and strategy visibility. In the Same condition (where both tasks use identical rules), the distinction between lumpers and splitters may not matter much. Everyone can apply the same rule to new stimuli without cost. In the Far condition (180-degree shift), the rules are different enough that most people might naturally treat them as separate tasks, though there may still be some heterogeneity that’s less pronounced. It’s in the Near condition (30-degree shift) where the ambiguity forces different strategies into stark relief, creating the bimodal distribution.</p>
<p>This suggests that algorithmic heterogeneity in how people approach learning might be widespread but often invisible, only becoming apparent when task structure creates the right conditions to pull different strategies apart.</p>
<h2 class="header-anchor-post">Environmental Contingency and the Absence of a Single Optimal Strategy</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§environmental-contingency-and-the-absence-of-a-single-optimal-strategy" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>The environmental context matters critically for which strategy succeeds. Lumping might dominate when the world has stable structure worth extracting. If rules genuinely repeat across contexts, you learn faster by recognizing and applying patterns. The cost of occasional interference could be outweighed by the acceleration in learning new tasks that share structure with old ones.</p>
<p>Splitting might win when rules change frequently or when maintaining distinct memories is crucial. If what you learned before is often misleading rather than helpful, protecting each memory from contamination becomes more valuable than speed of transfer. The cost of slow learning could be justified by the accuracy of what you retain.</p>
<p>There may be no single “correct” strategy. The optimal approach likely depends on the statistics of the environment you’re navigating. A world with high regularity and stable rules rewards generalization. A world with frequent rule changes or high context-specificity rewards separation and protection of distinct memories. There is also the possibility that luck plays a role; what you just encountered may predispose you to lump or split depending on how successful you just were.</p>
<p>This raises an interesting possibility about cognitive diversity. Rather than representing noise around some ideal cognitive architecture, the coexistence of lumpers and splitters might reflect adaptation to environmental variability. A population containing both strategies might outperform a homogeneous population, with pattern-extractors thriving in stable domains and detail-preservers succeeding in volatile ones. Different algorithmic profiles suited to different ecological niches.</p>
<p>This interpretation is speculative, but it suggests we might want to think about individual differences in learning strategies as positions on a trade-off curve that evolution or development has explored.</p>
<h2 class="header-anchor-post">Relevance for Understanding Disorders</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§relevance-for-understanding-disorders" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>This level of behavioral dissection seems important for understanding psychiatric heterogeneity. If we only measured final task accuracy, lumpers and splitters would be indistinguishable. Standard neuropsychological testing focused on whether performance is intact or impaired would miss this entirely. It’s only by examining the <em>pattern</em> of performance across multiple measures (transfer, interference, generalization, unique feature memory, temporal context) that the different strategies become visible.</p>
<p>This has potentially important implications for how we study psychiatric conditions. Consider what we know about cognitive function in schizophrenia. Working memory deficits are consistently documented. Counterfactual reasoning appears impaired. Context processing shows abnormalities. But we typically describe these as simple deficits, performance falling below some normative threshold.</p>
<p>What if some of this heterogeneity reflects people navigating trade-offs differently, perhaps forced toward one extreme by underlying capacity constraints? Someone with severe working memory limitations might be pushed toward splitting strategies, unable to maintain the flexible representations needed for successful generalization. Alternatively, they might be pushed toward excessive lumping, overgeneralizing because they can’t maintain distinct context representations. Or the computational machinery for balancing these strategies might be disrupted in ways that don’t map onto the healthy spectrum at all.</p>
<p>Without tasks that can behaviorally dissect these possibilities, separating transfer from interference, generalization from discrimination, rule application from memory for specifics, we can’t distinguish these accounts. We end up with general statements about “cognitive deficits” when we should be asking about specific algorithmic profiles and how they interact with task demands.</p>
<p>The implications here concern the level of analysis we need: detailed statistics of behavior, comparison to normative computational models when available, and careful dissection of performance patterns across conditions that pull different strategies apart.</p>
<p>And we should remember the limitations of the analogy: people aren’t one big neural network. Multiple brain systems (prefrontal gating, hippocampal context coding, thalamic routing) likely contribute to how we handle sequential learning. These computational principles might illuminate trade-offs and formalize what behavioral patterns we should look for, but the implementation almost certainly involves coordination across systems rather than a single learning mechanism. The behavioral signature we observe is likely the weighted output of this complex architecture.</p>
<h2 class="header-anchor-post">Emphasis on Methodological Insight</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§emphasis-on-methodological-insight" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>The methodological contribution here may be as important as the empirical findings. Averaged behavior can look normal, even optimal, while masking fundamentally different computational strategies operating beneath the surface. Standard cognitive assessments can show intact performance while missing the algorithmic heterogeneity that might matter for understanding vulnerability, predicting treatment response, or matching individuals to environments.</p>
<p>The sequential learning paradigm with separate measures of transfer and interference provides one example of how to pull these strategies apart. The neural network modeling provides a formal framework for understanding what different patterns of performance might mean. The combination suggests a path forward for computational psychiatry that goes beyond asking whether performance is impaired.</p>
<p>The next generation of this work might ask: Which specific algorithmic trade-offs are being navigated differently? What behavioral signatures reveal underlying strategy? How do these computational profiles interact with environmental demands? When does being a lumper become maladaptive, and when does being a splitter limit learning?</p>
<p>The demonstration that careful behavioral dissection can reveal hidden heterogeneity in how people learn suggests we might be missing similar structure in other domains by averaging too quickly and testing too coarsely.</p>
<h2 class="header-anchor-post">Synthesis: What We Learn About Learning</h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§synthesis-what-we-learn-about-learning" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>The capacity to abstract while preserving details represents a fundamental computational challenge for any learning system, biological or artificial. The tension appears intrinsic to intelligence itself.</p>
<p>What optimal looks like depends entirely on environmental structure. Stable worlds with repeating patterns reward those who extract and apply rules quickly. Volatile worlds where yesterday’s pattern misleads today reward those who maintain distinct memories and avoid overgeneralization. The problem itself changes based on context, making any single solution inadequate.</p>
<p>This has profound implications for how we think about cognitive diversity. Individual differences in learning strategies might represent different positions on a fundamental trade-off curve, possibly shaped by recent experience, current capacity constraints, or the statistics of environments people have navigated. Cognitive styles might reflect computational strategies. Apparent deficits might be extreme positions on trade-offs that have no objectively correct answer.</p>
<p>The biological implementation is almost certainly a weighted mixture of multiple brain systems coordinating to produce the behavior we observe. Disruption might affect these systems differently, push trade-offs toward extremes, or create computational patterns that don’t exist in healthy populations at all.</p>
<p>The brain’s solution to the quiet war between abstraction and detail appears to be “it depends on the brain and it depends on the world.” Understanding both dependencies seems necessary for making sense of how learning works, why it sometimes fails, and what interventions might actually help.</p>
<p>&nbsp;</p>
<div>
<hr />
</div>
<p><em>Work like this only matters if it reaches people wondering about these questions. If you found value here, consider sharing it with your community. Subscribe to michaelhalassa.substack.com if you want to see where this type of thinking and analysis goes.</em></p>
</div>
</div>
</div>
</article>
</div>
</div>
</div>
</div>
</div>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Currency of the Mind</title>
		<link>https://michaelhalassa.net/the-currency-of-the-mind/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 22:22:43 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Cognitive Processing]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[cingulate cortex]]></category>
		<category><![CDATA[cognitive flexibility]]></category>
		<category><![CDATA[cognitive processing]]></category>
		<category><![CDATA[orbitofrontal cortex]]></category>
		<category><![CDATA[value computation]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=794</guid>

					<description><![CDATA[How the brain constructs the values that guide everyday decisions reveals one of neuroscience’s most fascinating puzzles. Think about it: your brain adds and subtracts quantities that share no common unit! It can add morning light through kitchen windows to forty minutes in traffic, subtract image and status from a car’s reliability and comfort. These [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="main" class="main typography use-theme-bg">
<div class="single-post-container" role="main" aria-label="Post">
<div class="container">
<div class="single-post">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<article class="typography newsletter-post post">
<div class="post-header" role="region" aria-label="Post header">
<h3 class="subtitle subtitle-HEEcLo" dir="auto"></h3>
</div>
<div>
<div class="available-content">
<div class="body markup" dir="auto">
<p>How the brain constructs the values that guide everyday decisions reveals one of neuroscience’s most fascinating puzzles. Think about it: your brain adds and subtracts quantities that share no common unit! It can add morning light through kitchen windows to forty minutes in traffic, subtract image and status from a car’s reliability and comfort. These things exist in completely different dimensions (light, time, dollars, social signals), yet somehow the brain constantly weighs them against one another when making decisions.</p>
<p>Let’s take buying a home as a concrete example. On paper it looks like a financial transaction, but in practice it’s a clash of incomparable currencies. Square footage gets weighed against school districts, the energy of a neighborhood against the stability of an investment. Walk through one house and you can already imagine your life there, until you realize it means your partner endures an extra hour of daily commute. Somewhere in this mix of clear measurements and ones that are hard to describe, the brain assembles a decision.</p>
<p>To make matters more complicated, think about how volatile our internal value estimates can be. During COVID, when daily commutes vanished, the value of space ballooned, potentially trumping distance; the same house that once felt impractical now seemed like a refuge. A new context can make ostensibly identical attributes exhibit radically different valuations.</p>
<p>The types of contextual changes that shift valuation are themselves diverse. A genuine Monet may sell for $70 million. A forgery, indistinguishable to the eye and identical by any measure short of an art authenticator’s equipment, might fetch $5,000. Same paint, same canvas, same aesthetic experience. Yet the (inferred) backstory behind it transforms its value by four orders of magnitude.</p>
<p>This is the computational puzzle at the heart of value-based decision-making: how does the brain make incomparable things comparable? What neural mechanism allows attributes measured in completely different dimensions (light, time, dollars, authenticity) to compete on the same playing field? And how does this mechanism remain stable enough to produce coherent choices yet flexible enough to radically reweight those same attributes when context shifts?</p>
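<p>One common way to formalize this puzzle, if only schematically, is a context-weighted sum: each attribute is first normalized into a unit-free score, and context supplies the weights that convert those scores into a single currency. A toy sketch (every number, range, and weight here is invented for illustration):</p>

```python
# Toy common-currency model (purely schematic, all values invented):
# normalize each attribute to a unit-free 0-1 score, then let context
# set the weights that map those scores onto one comparable scale.
attributes = {"space_sqft": 1800, "commute_min": 55, "light_hours": 4}

# Assumed plausible ranges, used only to strip the units.
ranges = {"space_sqft": (500, 3000), "commute_min": (0, 90), "light_hours": (0, 8)}

def normalized(name):
    lo, hi = ranges[name]
    return (attributes[name] - lo) / (hi - lo)

# Context determines the weights; commute time is a cost, so it gets a
# negative weight. "Office era" vs. "remote era" reweights the same house.
weights_office = {"space_sqft": 0.3, "commute_min": -0.6, "light_hours": 0.1}
weights_remote = {"space_sqft": 0.7, "commute_min": -0.1, "light_hours": 0.2}

def utility(weights):
    return sum(weights[k] * normalized(k) for k in attributes)

u_office = utility(weights_office)
u_remote = utility(weights_remote)
print(f"office-era utility: {u_office:+.3f}")
print(f"remote-era utility: {u_remote:+.3f}")
```

<p>Swapping the weight vector is the entire “context shift” in this sketch: the same house, with the same attributes, flips in value when commuting stops mattering. The hard scientific questions are where the normalization and the weights come from in neural tissue, which is exactly what a model this simple leaves open.</p>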
<div class="captioned-image-container">
<figure>
<div class="image2-inset can-restack">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!nLGE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 424w, https://substackcdn.com/image/fetch/$s_!nLGE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 848w, https://substackcdn.com/image/fetch/$s_!nLGE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 1272w, https://substackcdn.com/image/fetch/$s_!nLGE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!nLGE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!nLGE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 424w, https://substackcdn.com/image/fetch/$s_!nLGE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 848w, https://substackcdn.com/image/fetch/$s_!nLGE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 1272w, 
https://substackcdn.com/image/fetch/$s_!nLGE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128-418a-467d-9468-66dca72958eb_1021x871.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfb27128 418a 467d 9468" width="1021" height="871" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bfb27128-418a-467d-9468-66dca72958eb_1021x871.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:871,&quot;width&quot;:1021,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="The Currency of the Mind 18"></picture>
</div>
</figure>
</div>
<p><em>From the Wall Street Journal. <a href="https://www.wsj.com/arts-culture/fine-art/a-claude-monet-water-lilies-scene-sold-for-65-5-million-6af15ce4" rel="noopener" target="_blank">Read the story here</a></em></p>
<h2 class="header-anchor-post"><strong>The Construction of Value</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-construction-of-value" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>Here’s the truth: we don’t really understand how this problem is solved. The question of how brains integrate diverse attributes into unified decisions remains one of the deep mysteries in neuroscience. Ultimately, the answer will likely come in cognitive and neural forms; two faces of the same computational coin. These perspectives will jointly explain the magic of turning morning light into a quantity that can be compared with commute time.</p>
<p>Despite the lack of a satisfying single narrative, we do know some fascinating pieces of the puzzle. Let’s start with what psychology has revealed about how value gets constructed.</p>
<p>One of the most striking findings is that arbitrary starting points can anchor our entire valuation system. Dan Ariely and colleagues demonstrated this by asking students to write down the last two digits of their Social Security number before bidding on items like wine, chocolate, and computer accessories. Students with Social Security numbers ending in 80-99 bid nearly three times more than those with numbers ending in 00-19. For a cordless keyboard, high-number students offered $56 while low-number students offered just $16. The same pattern held across all items. The initial number, though completely unrelated to the products’ worth, set an implicit scale that influenced all subsequent valuations. Once the brain latches onto a reference point, even a meaningless one, it builds an internally consistent preference structure around it.</p>
<p>Beyond arbitrary anchors, our sense of ownership profoundly alters how we value objects. In experiments by Kahneman, Knetsch, and Thaler, students were randomly given coffee mugs and then asked to name their selling price. These new “owners” demanded about $7 to part with their mugs, while students without mugs were only willing to pay about $3 to acquire one. The mug itself hadn’t changed. What changed was the relationship: once something becomes “mine,” its value doubles in my eyes. Norton, Mochon, and Ariely extended this finding by having people assemble IKEA furniture or fold origami cranes. Participants valued their own creations at nearly the same price as expert-made versions, even when their handiwork was visibly inferior. The act of creation adds a new attribute to the value calculation: the effort invested becomes part of what we’re evaluating, not just the object itself.</p>
<p>Even memory rewrites value. Daniel Kahneman and Donald Redelmeier studied patients undergoing colonoscopies. Some patients had longer procedures that ended less painfully, others had shorter ones that ended abruptly at peak pain. Counterintuitively, patients preferred the longer procedures. Their memories followed the “peak-end rule”: they judged the whole experience not by its average pain but by its worst moment and how it ended (Redelmeier &amp; Kahneman, 1996). How we remember an experience, not the experience itself, determines how we’ll value similar choices in the future.</p>
<p>And value is deeply social. In a massive online experiment, Matthew Salganik and colleagues (2006) created artificial “music markets.” When download counts were hidden, songs rose or fell on their own. But when popularity information was visible, some songs snowballed into “hits” while others languished, even though the songs were the same across markets. Popularity itself became an attribute folded into value, warping what people genuinely preferred.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset can-restack">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!8UCK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!8UCK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!8UCK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!8UCK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!8UCK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!8UCK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!8UCK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!8UCK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8UCK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b-4058-48d6-8ed5-50242686455c_1600x900.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc359e18b 4058 48d6 8ed5" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c359e18b-4058-48d6-8ed5-50242686455c_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="The Currency of the Mind 19"></picture>
</div>
</figure>
</div>
<p><em><a href="https://en.wikipedia.org/wiki/Daniel_Kahneman" rel="noopener" target="_blank">Daniel Kahneman</a>: Nobel laureate and one of the most influential thinkers in the science of decision making</em></p>
<h2 class="header-anchor-post"><strong>The Computational Challenge</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-computational-challenge" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>If value is constructed rather than retrieved from a fixed look-up table, what exactly is the brain computing? Consider what components must somehow become commensurable:</p>
<p><strong>Sensory attributes</strong>: The warmth of sunlight, the bitterness of coffee, the smoothness of silk. These arrive in different neural codes from different sensory systems, yet must be integrated into unified preferences.</p>
<p><strong>Abstract properties</strong>: Distance (20 minutes), quantity (800 square feet), probability (70% chance). The brain lacks sensory receptors for these dimensions, yet they powerfully shape value.</p>
<p><strong>Social signals</strong>: Status, belonging, reputation. That Monet carries social meaning that a forgery doesn’t. A Harvard degree signals something another school may not. These intangible attributes somehow get converted into the same currency as tangible ones.</p>
<p><strong>Temporal projections</strong>: Future pleasure, anticipated regret, imagined satisfaction. The brain must evaluate things that haven’t happened yet, experiences it can only simulate.</p>
<p><strong>Effort and ownership</strong>: The IKEA table you assembled, the garden you planted, the thesis you wrote. Investment of effort literally changes the computed value, as if the brain adds your labor to the object’s attributes.</p>
<p><strong>Comparison context</strong>: The same option valued differently depending on what else is available. That $2,500 apartment seems expensive or cheap depending entirely on the alternatives, even irrelevant ones.</p>
<p>The remarkable thing is that the brain somehow integrates these components despite their fundamental incomparability. One prominent theory suggests the brain converts everything into a “common currency,” perhaps the firing rates (or patterns) of neurons in valuation regions. But how does social status get converted into the same neural code as commute time? What algorithm transforms the warmth of sunlight into the same units as financial security? Even if there is a common currency at the point of decision, the translation process remains mysterious. When you’re choosing between jobs or homes or life partners, all these incomparable attributes must somehow become comparable. One option just feels better.</p>
<p>How does the brain perform this translation and integration? That’s what the neural machinery must somehow accomplish.</p>
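<p>One way to make the integration problem concrete is a toy sketch in Python. Everything here is invented for illustration (the apartment names, attribute scores, and weights come from no study or dataset): value is modeled as a weighted sum that collapses incomparable attributes into one scalar, and a softmax turns the resulting scalars into choice probabilities.</p>

```python
import math

# Hypothetical attribute scores for two apartments, each on its own scale.
options = {
    "riverside": {"sunlight": 0.8, "commute_min": 45, "rent_usd": 2500, "space_sqft": 800},
    "downtown":  {"sunlight": 0.3, "commute_min": 20, "rent_usd": 2900, "space_sqft": 650},
}

# Hand-picked weights play the role of the mysterious "translation":
# positive for desirable attributes, negative for costs.
weights = {"sunlight": 2.0, "commute_min": -0.04, "rent_usd": -0.001, "space_sqft": 0.002}

def scalar_value(attrs):
    """Collapse incomparable attributes into a single 'common currency' number."""
    return sum(weights[k] * v for k, v in attrs.items())

values = {name: scalar_value(attrs) for name, attrs in options.items()}

# Softmax turns scalar values into choice probabilities, so near-ties stay
# probabilistic: one option just feels better, most of the time.
exp_v = {name: math.exp(v) for name, v in values.items()}
total = sum(exp_v.values())
choice_probs = {name: e / total for name, e in exp_v.items()}
```

<p>The sketch also shows where the mystery lives: all the hard work is hidden in the hand-picked weights, which is exactly the translation step the brain must somehow learn.</p>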
<h2 class="header-anchor-post"><strong>The Neural Implementation</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-neural-implementation" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>If value is constructed, where in the brain does this happen, and in what format? Researchers often divide into two camps. One view is that the brain collapses everything into a single “common currency,” a scalar signal that can be compared across apples, Monets, and commutes. The other is that value is represented in a richer, multidimensional code, more like a map of attributes than a single number, with scalar readouts emerging only when a choice is required. The best available evidence points to the orbitofrontal cortex (OFC) and the adjacent ventromedial prefrontal cortex (vmPFC) as being central to these computations.</p>
<h3 class="header-anchor-post"><strong>OFC and vmPFC: a value map with scalar readouts</strong></h3>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§ofc-and-vmpfc-a-value-map-with-scalar-readouts" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>Work in macaques by Camillo Padoa-Schioppa and colleagues showed that OFC neurons encode subjective value in a way that is “menu invariant,” meaning the value signal for one option stays stable regardless of what it is paired against (Padoa-Schioppa &amp; Assad, 2006). This stability supports transitive choice: if juice A is valued more than B, and B more than C, then A will be valued more than C. Human neuroimaging extends this by showing vmPFC activity tracks subjective value across many domains, including money, food, and social approval (Chib et al., 2009; Bartra et al., 2013).</p>
<p>However, newer analyses suggest the OFC does not simply produce one number. Instead, its population activity preserves multiple dimensions of value, such as taste versus health or probability versus magnitude (Schuck et al., 2016; Hunt &amp; Hayden, 2017). In this view, the OFC is a “map-maker,” maintaining a structured representation of options that can be flexibly reweighted depending on context. Scalar value signals may still emerge, but only as a projection of this richer map.</p>
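<p>The map-versus-scalar distinction can itself be sketched in a few lines of Python. This is an illustration of the idea, not a model fit to any data: the "map" is just a vector of attribute scores per option, and a scalar value emerges only as a projection of that vector under context-dependent weights.</p>

```python
# Attribute axes of a hypothetical map: [taste, health]
options = {
    "cake":  [0.9, 0.2],
    "salad": [0.3, 0.8],
}

# Context reweights which axes of the map matter, in the spirit of the
# instructed-attention manipulation in Hare et al. (2009).
contexts = {
    "focus_on_taste":  [1.0, 0.2],
    "focus_on_health": [0.2, 1.0],
}

def readout(attrs, w):
    """Scalar value as one projection (dot product) of the attribute map."""
    return sum(a * wi for a, wi in zip(attrs, w))

winners = {
    ctx: max(options, key=lambda name: readout(options[name], w))
    for ctx, w in contexts.items()
}
# Same underlying map, different winners under different weightings.
```

<p>The point of the sketch is that nothing about the options changes between contexts; only the projection does, which is what "scalar signals as a readout of a richer map" means in practice.</p>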
<div class="captioned-image-container">
<figure>
<div class="image2-inset can-restack">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!vDQN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 424w, https://substackcdn.com/image/fetch/$s_!vDQN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 848w, https://substackcdn.com/image/fetch/$s_!vDQN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 1272w, https://substackcdn.com/image/fetch/$s_!vDQN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!vDQN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!vDQN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 424w, https://substackcdn.com/image/fetch/$s_!vDQN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 848w, https://substackcdn.com/image/fetch/$s_!vDQN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 1272w, 
https://substackcdn.com/image/fetch/$s_!vDQN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F267467db 953a 494c 82bc" width="954" height="821" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/267467db-953a-494c-82bc-32a7ce1f9c34_954x821.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:821,&quot;width&quot;:954,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="The Currency of the Mind 20"></picture>
</div>
</figure>
</div>
<p><em>Figure 1 of <a href="https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(24)00202-9#f0005" rel="noopener" target="_blank">Moneta et al., 2024 Trends in Neuroscience</a></em>.</p>
<h3 class="header-anchor-post"><strong>The role of dorsal prefrontal regions: shaping and acting on value</strong></h3>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-role-of-dorsal-prefrontal-regions-shaping-and-acting-on-value" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>Other prefrontal areas contribute in complementary ways. The dorsolateral prefrontal cortex (dlPFC), which is heavily involved in executive control, appears to adjust the weights assigned to different attributes. When people are instructed to prioritize health over taste, dlPFC activity reflects health information more strongly, and when told to focus on taste it reflects taste (Hare et al., 2009). Under certain conditions, the dlPFC may determine which dimensions of the map matter in a given context.</p>
<p>The dorsal anterior cingulate cortex (dACC), which monitors conflict and effort, often signals the difficulty or cost of a decision. Because it connects closely to premotor regions, it is well placed to bind abstract values to concrete actions. In challenging or effortful choices, dACC appears to integrate both the value of options and the anticipated cost of exerting control (Shenhav et al., 2013, <em>Neuron</em>).</p>
<h3 class="header-anchor-post"><strong>A distributed system</strong></h3>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§a-distributed-system" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>The emerging picture is that value does not reside in a single place or a single format. The OFC and vmPFC maintain a flexible, map-like code of options. dlPFC helps determine which axes of that map to emphasize. dACC translates the chosen value into action, especially when the choice is close or costly. Striatal circuits and dopamine signals supply the learning machinery that updates the map when outcomes deviate from expectations.</p>
<p>Understanding this distributed system may ultimately reconcile the debate between scalar and map-like coding. The brain can preserve a rich geometry of attributes while also collapsing them into a scalar readout when a choice demands it. That dual capacity may be the key to how incomparable things become comparable.</p>
<h2 class="header-anchor-post"><strong>Broader Implications</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§broader-implications" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>Understanding value as construction rather than retrieval has implications beyond individual choice. It might explain why the same economic conditions feel catastrophic or manageable depending on narrative framing. Why social media can shift entire populations’ valuation of political candidates through selective attribute highlighting. Why depression involves not just sadness but a fundamental inability to construct positive value from available attributes. Why cultural differences in what matters, individual achievement versus group harmony, lead to genuinely different experiences of the same situations.</p>
<p>The framework suggests that many societal conflicts aren’t really about different goals but about different attribute weightings. The same policy gets valued completely differently depending on whether you weigh “personal freedom” or “collective safety” more heavily. The same scientific finding gets valued differently depending on whether you weigh “economic growth” or “environmental protection.” Instead of viewing these disagreements as failures of rationality, we can see an opposing view as simply a different construction, built from different weightings.</p>
<h2 class="header-anchor-post"><strong>The Mystery of Value</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-mystery-of-value" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>We started with a puzzle: how does your brain weigh the comfort of a home’s interior against commute distance, nearby amenities against your spouse’s preferences? How do incomparable attributes become comparable values?</p>
<p>The evidence points toward construction rather than retrieval from a look-up table. Ariely’s anchoring studies show that random numbers shape our valuations. The endowment effect reveals that ownership doubles perceived worth. The peak-end rule demonstrates that how an experience’s peak and ending are encoded in memory determines its future value. Eye-tracking shows that attention creates preference. The Monet example shows that authentication can change value by four orders of magnitude.</p>
<p>These phenomena make sense if the brain builds value from available attributes, weights them according to context and goals, and compares through some form of competition. The neural data provide pieces of the answer: OFC/vmPFC may house a flexible map-like code that can be collapsed into a common-currency scalar for comparison. dlPFC circuitry may shape which axes of that map matter, and dACC circuitry may read out the winning option in preparation for action.</p>
<p>But the core mystery remains. How does the brain actually perform the integration? What computation transforms sunshine through kitchen windows into a quantity that can be weighed against minutes of commute? How do narrative attributes like “painted by Monet himself” get converted into the same currency as visual beauty or investment potential?</p>
<p>The $70 million Monet shows how backstory can outweigh brushstrokes. What we don’t yet know is how the brain pulls off the trick of weighing sunlight against commute time, or authenticity against aesthetics. That algorithm remains one of neuroscience’s deepest mysteries.</p>
<div>
<hr />
</div>
<p><em>If you enjoyed this piece, please consider subscribing to michaelhalassa.substack.com to follow along as I write about the brain, computation, and psychiatry. Some posts dive into the neuroscience of a particular mental phenomenon (like this one), while others deal with more clinically relevant issues.</em></p>
<p><em>You can also share this post with a friend or colleague who might be curious about how our brains turn sunlight, stories, and symbols into value.</em></p>
<div>
<hr />
</div>
<p>Bibliography:</p>
<p>Ariely, D., Loewenstein, G., &amp; Prelec, D. (2003). <em>“Coherent arbitrariness”: Stable demand curves without stable preferences.</em> Quarterly Journal of Economics, 118(1), 73–106. https://doi.org/10.1162/00335530360535153</p>
<p>Bartra, O., McGuire, J. T., &amp; Kable, J. W. (2013). <em>The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value.</em> NeuroImage, 76, 412–427. https://doi.org/10.1016/j.neuroimage.2013.02.063</p>
<p>Chib, V. S., Rangel, A., Shimojo, S., &amp; O’Doherty, J. P. (2009). <em>Evidence for a common representation of decision values for dissimilar goods in human ventromedial prefrontal cortex.</em> Journal of Neuroscience, 29(39), 12315–12320. https://doi.org/10.1523/JNEUROSCI.2575-09.2009</p>
<p>Hare, T. A., Camerer, C. F., &amp; Rangel, A. (2009). <em>Self-control in decision-making involves modulation of the vmPFC valuation system.</em> Science, 324(5927), 646–648. https://doi.org/10.1126/science.1168450</p>
<p>Hunt, L. T., &amp; Hayden, B. Y. (2017). <em>A distributed, hierarchical and recurrent framework for reward-based choice.</em> Neuron, 96(2), 355–362. https://doi.org/10.1016/j.neuron.2017.09.031</p>
<p>Kahneman, D., Knetsch, J. L., &amp; Thaler, R. H. (1991). <em>Anomalies: The endowment effect, loss aversion, and status quo bias.</em> Journal of Economic Perspectives, 5(1), 193–206. https://doi.org/10.1257/jep.5.1.193</p>
<p>Kaplan, R., &amp; Friston, K. J. (2018). <em>Planning and navigation as active inference.</em> Biological Cybernetics, 112(4), 323–343. https://doi.org/10.1007/s00422-018-0753-2</p>
<p>Norton, M. I., Mochon, D., &amp; Ariely, D. (2012). <em>The IKEA effect: When labor leads to love.</em> Journal of Consumer Psychology, 22(3), 453–460. https://doi.org/10.1016/j.jcps.2011.08.002</p>
<p>Padoa-Schioppa, C., &amp; Assad, J. A. (2006). <em>Neurons in the orbitofrontal cortex encode economic value.</em> Nature Neuroscience, 9(3), 367–373. https://doi.org/10.1038/nn1726</p>
<p>Redelmeier, D. A., &amp; Kahneman, D. (1996). <em>Patients’ memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures.</em> Pain, 66(1), 3–8. https://doi.org/10.1016/0304-3959(96)02994-6</p>
<p>Salganik, M. J., Dodds, P. S., &amp; Watts, D. J. (2006). <em>Experimental study of inequality and unpredictability in an artificial cultural market.</em> Science, 311(5762), 854–856. https://doi.org/10.1126/science.1121066</p>
<p>Schuck, N. W., Cai, M. B., Wilson, R. C., &amp; Niv, Y. (2016). <em>Human orbitofrontal cortex represents a cognitive map of state space.</em> Neuron, 91(6), 1402–1412. https://doi.org/10.1016/j.neuron.2016.08.019</p>
<p>Shenhav, A., Botvinick, M. M., &amp; Cohen, J. D. (2013). <em>The expected value of control: An integrative theory of anterior cingulate cortex function.</em> Neuron, 79(2), 217–240. https://doi.org/10.1016/j.neuron.2013.07.007</p>
</div>
</div>
</div>
</article>
</div>
</div>
</div>
</div>
</div>
<div class="pencraft pc-display-contents pc-reset pubAccentTheme-rgl9Hv"></div>
<div tabindex="-1" role="region" aria-label="Notifications (F8)"></div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Time is Memory</title>
		<link>https://michaelhalassa.net/time-is-memory/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Mon, 01 Sep 2025 12:08:25 +0000</pubDate>
				<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Working memory]]></category>
		<category><![CDATA[Cognitive Research]]></category>
		<category><![CDATA[Cognitive Science]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Memory]]></category>
		<category><![CDATA[Temporal Memory]]></category>
		<category><![CDATA[Time]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=789</guid>

<description><![CDATA[Michael Halassa discusses how the brain may create the sense of time from memory and why time distortions happen in experience]]></description>
										<content:encoded><![CDATA[<p>Over the past year, I’ve found a new favorite running trail. It winds through woods, follows riverbanks, and slips through an old industrial complex. The scenery shifts constantly, broken into short, distinct segments.</p>
<p>I was surprised to discover that the run takes about an hour, almost exactly the same as my old trail from the year before. The distances are nearly identical too, which makes sense given that my pace hasn’t changed. And yet, the new trail <em>feels</em> much longer. How come?</p>
<p>The old route was simpler. It had three long, straight stretches where I could see the end from the beginning. Easy to remember, easy to chunk. The new one is nothing like that: shorter segments, sharper turns, and ever-changing backdrops. Every few minutes you’re in a completely new setting, never quite sure what’s around the bend.</p>
<p>That difference got me thinking about how we perceive time. We’ve all had those strange distortions: a memory from years ago that feels recent, or something from last week that feels impossibly distant. Time in the brain is slippery.</p>
<p>So how do we actually track it? Is there an internal clock ticking away? Probably not: decades of searching haven’t turned one up. A more likely explanation is that time is tied to how memories are organized and indexed. Let’s dig into what we actually know.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!EcUq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!EcUq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EcUq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!EcUq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f 3907 4569 a2f0" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2934757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F534aee2f-3907-4569-a2f0-aa97474351a8_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 21"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<h2 class="header-anchor-post"><strong>How Memory Creates Time</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§how-memory-creates-time" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>The first clue comes from studying what happens when we remember. In a clever set of experiments, Olivier Jeunehomme and Arnaud D’Argembeau asked people to wear small automatic cameras while walking around a university campus. The cameras snapped photos every few seconds, creating an objective record of the experience. Later, participants were asked to verbally recall their walks while being audio-recorded.</p>
<p>The campus walks lasted around 40 minutes, but when participants replayed them aloud in memory, the descriptions only took about 5 minutes on average. That is roughly an eightfold compression of time.</p>
<p>The compression, however, was uneven. The researchers compared the recall transcripts to the time-stamped camera sequences and divided the narratives into what they called “experience units.” These were discrete remembered moments, such as buying a coffee, turning into a courtyard, or chatting with a classmate. Each unit was mapped back to the original footage so they could calculate how much real-world time it spanned.</p>
<p>The pattern was striking. Short, bounded activities with a clear goal, like making a purchase or opening a door, tended to be preserved in relatively high detail, replayed at about four to five times compression. In contrast, transitional stretches of locomotion, like walking from one building to the next, were compressed far more, sometimes by a factor of twenty or more. Long, uneventful stretches collapsed into a single unit, while activity-rich episodes retained much finer granularity.</p>
<p>These experience units appear to be the basic building blocks of episodic memory. The density of such units determines how long an episode feels in retrospect. More units per minute of clock time make for a richer memory trace and an expanded sense of duration. Fewer units create a thinner trace and a contracted sense of time.</p>
<p>Follow-up studies have highlighted the special role of event boundaries. Jeunehomme and D’Argembeau found that moments marking a change in context, such as entering a building, turning a corner, or meeting a person, were about five times more likely to be recalled than stretches in between. Boundaries act like bookmarks, segmenting the stream of experience and anchoring the flow of time in memory. These anchors not only determine what is remembered, but also shape how long the remembered experience feels.</p>
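<p>The arithmetic behind these compression results can be sketched in Python. The segments and factors below are invented, chosen only to echo the reported ranges (roughly 4&#8211;5x for goal-directed units, 20x or more for transitional locomotion); the point is that remembered duration tracks experience-unit density, not clock time.</p>

```python
# Each segment of a hypothetical campus walk:
# (label, clock minutes, compression factor in recall)
segments = [
    ("buy a coffee",        2, 4),    # bounded, goal-directed: low compression
    ("walk to courtyard",  10, 20),   # transitional locomotion: heavy compression
    ("chat with a friend",  3, 5),
    ("walk to the library", 15, 20),
]

clock_time = sum(minutes for _, minutes, _ in segments)
recall_time = sum(minutes / factor for _, minutes, factor in segments)
overall_compression = clock_time / recall_time

# A walk dominated by transitions compresses far more overall than its
# activity-rich moments do individually, thinning the memory trace.
```

<p>With these made-up numbers, a half-hour walk replays in under three minutes, an overall compression above tenfold, even though the goal-directed moments individually compress only four- or five-fold.</p>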
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!cw5r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!cw5r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cw5r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!cw5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9 ec2f 4660 b5d9" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3375647,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F307e1df9-ec2f-4660-b5d9-62ec14042e13_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 22"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<h2 class="header-anchor-post"><strong>The Paradox of Event Boundaries</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-paradox-of-event-boundaries" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>Experience units and event boundaries create a fundamental paradox in how we perceive time. Bangert and colleagues (2019, 2020) ran a series of experiments in which participants watched short films of everyday activities while making timing judgments. The films were paused at different points, and participants were asked to estimate whether a brief interval, usually around five seconds, had just passed. The twist was that sometimes the interval contained an event boundary, such as finishing washing dishes and beginning to dry them, and sometimes it did not. Intervals that contained a boundary were consistently judged as shorter than otherwise identical spans without one.</p>
<p>The mechanism behind this compression may become clearer when considering what&#8217;s happening in working memory. Swallow and colleagues (2009) tracked this directly by having participants watch movie clips while objects appeared on screen: a knife during sandwich-making, a towel during dishwashing. Five seconds later, the movie would pause for a recognition test. Objects present at event boundaries were recognized significantly better than those at non-boundaries. But this enhancement came with a cost: memory for objects from just before a boundary dropped dramatically. The boundary created a barrier, making it harder to retrieve information from the previous event even though it had occurred mere seconds earlier.</p>
<p>Event Segmentation Theory, developed by Jeffrey Zacks and colleagues in 2007, provides the framework. According to their theory, event boundaries are the moments when the brain discards its current &#8220;event model&#8221; from working memory and uploads a new one. This updating process requires attention, which leaves fewer resources available for keeping track of time. As Bangert and colleagues (2020) demonstrated using dual-task paradigms, devoting attention to updating perceptual and conceptual features of the activity left fewer attentional resources for accumulating temporal information. It&#8217;s like trying to count seconds while also solving a puzzle: each boundary forces you to solve a new puzzle, and your counting falters.</p>
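<p>To make the attentional-gate idea concrete, here is a minimal simulation, my own sketch rather than code from any of the studies above: a pacemaker emits ticks, an accumulator counts them only while attention is on timing, and each event boundary diverts attention for a moment while the event model updates. The function name, tick rate, and update cost are all illustrative.</p>

```python
def judged_duration(seconds, boundary_times, tick_hz=10,
                    update_cost=1.0, attention=0.9):
    """Toy pacemaker-accumulator. Each tick contributes to the
    accumulator in proportion to the attention paid to timing;
    an event boundary diverts attention for `update_cost` seconds
    while the event model is rebuilt."""
    busy_until = -1.0
    pulses = 0.0
    for i in range(int(seconds * tick_hz)):
        t = i / tick_hz
        if any(abs(t - b) < 1e-9 for b in boundary_times):
            busy_until = t + update_cost  # model update begins
        pulses += 0.0 if t < busy_until else attention
    return pulses / (attention * tick_hz)  # convert pulses back to seconds

no_boundary = judged_duration(5.0, [])        # about 5.0 s
with_boundary = judged_duration(5.0, [2.0])   # about 4.0 s
```

<p>In this toy model, the five-second interval containing a boundary is judged at roughly four seconds, mirroring the compression Bangert and colleagues observed.</p>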
<p>The paradox is that the very same boundaries that compress time during experience expand it in memory. They serve as landmarks that structure recall, making events feel more spacious in retrospect. This dual effect helps explain a familiar puzzle: why the drive home from a new place usually feels longer than the drive there. On the outbound trip, the brain is constantly updating its models: pass the gas station (boundary), turn at the intersection (boundary), merge onto the highway (boundary). Each update reduces attention for tracking duration, so the drive feels shorter while you are in it. Yet those boundaries also create anchors that expand the memory of the trip. On the return drive the route is familiar, there are fewer surprises, and the brain needs fewer updates. With less attention diverted, duration is tracked more faithfully, so the drive feels longer in the moment but compresses more in memory.</p>
<p>Bangert and colleagues (2019) also tested temporal proximity, asking participants to judge how far apart two moments in the film felt. Boundaries made items seem further apart in time, even when the objective duration was identical. In this sense, boundaries insert psychological distance between moments. They stretch the remembered timeline even while compressing the lived experience of duration.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!REqt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!REqt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!REqt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!REqt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!REqt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!REqt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824 837f 43ed ad2a" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3109330,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcfe74824-837f-43ed-ad2a-21275dffbd6c_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 23"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<h2 class="header-anchor-post"><strong>The Implications</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§the-implications" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>This framework explains a wide range of everyday paradoxes. Vacations, filled with novelty, fly by while they happen but expand richly in memory. Daily routines, stripped of boundaries, drag while we live them but collapse into nothing when recalled. Clewett and Davachi (2017) argued that the ebb and flow of experience itself determines the temporal structure of memory. Lositsky and colleagues (2016) showed that the greater the number and diversity of boundaries, the more time expands in recall.</p>
<p>It explains my running puzzle. My old trail was made up of long, predictable stretches, so it generated relatively few event boundaries. My new trail, by contrast, forced segmentation at every turn: woods to riverbank, riverbank to industrial ruins, sharp corner, sudden hill, unexpected vista. Each transition became a boundary, a new chunk in memory. The clock says both trails take about an hour, but memory disagrees. The old one collapses into a few coarse segments, while the new one expands into a much longer-feeling journey.</p>
<p>The principle is simple: if you want something to feel substantial in memory, add boundaries. Change contexts, vary activities, create moments that require updates. If you want time to flow by quickly, keep it continuous and predictable.</p>
<p>But the implications go deeper than personal experience design. This mechanism may explain why time seems to accelerate as we age. Childhood is packed with firsts, each creating boundaries: first day of school, first sleepover, first kiss. Adult life, especially in stable careers and relationships, can become a series of similar days bleeding into each other. The years feel shorter not because our metabolism changes or because of some cosmic injustice, but because we&#8217;re creating fewer distinct memory segments.</p>
<p>The brain doesn&#8217;t keep time like a clock. It builds time from its internal dynamics. The elasticity of time isn&#8217;t an illusion; it&#8217;s how the mind constructs a temporal dimension from the boundaries of experience.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6ETZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6ETZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490 94cf 4fd0 9d0f" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3086532,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/171598378?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4730490-94cf-4fd0-9d0f-ee96c50fafdd_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Time is Memory 24"></picture>
</div>
</figure>
</div>
<p>&nbsp;</p>
<div>
<hr />
</div>
<p><em>If you enjoyed this piece, let me know. I’d love to hear how you’ve experienced time stretching or compressing in your own life. I’ll also be following up with another post that digs into the neural substrates of time perception, exploring how brain circuits generate these distortions.</em></p>
<p><em>If you’d like to read that when it comes out, consider subscribing or sharing this piece with someone who might find it interesting.</em></p>
<div>
<hr />
</div>
<h2 class="header-anchor-post"><strong>Bibliography</strong></h2>
<div class="pencraft pc-display-flex pc-alignItems-center pc-position-absolute pc-reset header-anchor-parent">
<div class="pencraft pc-display-contents pc-reset pubTheme-yiXxQA">
<div id="§bibliography" class="pencraft pc-reset header-anchor offset-top"></div>
<p>&nbsp;</p>
</div>
</div>
<p>Bangert, A. S., Kurby, C. A., Hughes, A. S., &amp; Carrasco, O. (2019). Crossing event boundaries changes prospective perceptions of temporal length and proximity. <em>Attention, Perception, &amp; Psychophysics</em>, 81(8), 2459-2472.</p>
<p>Block, R. A., &amp; Zakay, D. (1997). Prospective and retrospective duration judgments: A meta-analytic review. <em>Psychonomic Bulletin &amp; Review</em>, 4(2), 184-197.</p>
<p>Clewett, D., &amp; Davachi, L. (2017). The ebb and flow of experience determines the temporal structure of memory. <em>Current Opinion in Behavioral Sciences</em>, 17, 186-193.</p>
<p>Jeunehomme, O., &amp; D&#8217;Argembeau, A. (2020). Event segmentation and the temporal compression of experience in episodic memory. <em>Psychological Research</em>, 84(2), 481-490.</p>
<p>Lositsky, O., Chen, J., Toker, D., Honey, C. J., Shvartsman, M., Poppenk, J. L., &#8230; &amp; Norman, K. A. (2016). Neural pattern change during encoding of a narrative predicts retrospective duration estimates. <em>eLife</em>, 5, e16070.</p>
<p>Swallow, K. M., Zacks, J. M., &amp; Abrams, R. A. (2009). Event boundaries in perception affect memory encoding and updating. <em>Journal of Experimental Psychology: General</em>, 138(2), 236-257.</p>
<p>Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., &amp; Reynolds, J. R. (2007). Event perception: A mind-brain perspective. <em>Psychological Bulletin</em>, 133(2), 273-293.</p>
<p>Zacks, J. M., Kurby, C. A., Eisenberg, M. L., &amp; Haroutunian, N. (2011). Prediction error associated with the perceptual segmentation of naturalistic events. <em>Journal of Cognitive Neuroscience</em>, 23(12), 4057-4066.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Brain&#8217;s &#8220;What If&#8221; Engine: Why Counterfactuals Are Key to Human Intelligence</title>
		<link>https://michaelhalassa.net/counterfactuals-human-intelligence/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Sun, 03 Aug 2025 23:04:06 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Prefrontal cortex]]></category>
		<category><![CDATA[Working memory]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<category><![CDATA[research paper]]></category>
		<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=785</guid>

					<description><![CDATA[Michael Halassa discusses recent work on counterfactual reasoning and its contribution to human cognition]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve always been fascinated by the kinds of thoughts we <em>don&#8217;t</em> act on. In psychiatry, they shape regret, resilience, and rumination. In neuroscience, they reveal a deep truth about how the brain handles uncertainty. Every morning when I&#8217;m running late, I catch myself thinking: &#8220;If only I&#8217;d left five minutes earlier.&#8221; It&#8217;s a fleeting thought, but it represents one of the most computationally sophisticated processes our brains perform: imagining alternative realities that never happened.</p>
<p>Every day, your brain performs millions of &#8220;what if&#8221; calculations without you even noticing. What if I had taken the other route to work? What if I hadn&#8217;t said that in the meeting? What if the ball bounces differently than expected? This capacity for <strong>counterfactual reasoning</strong>, imagining alternative realities that never actually occurred, represents one of the most sophisticated computational achievements of biological intelligence.</p>
<p>A groundbreaking new study published in <em>Nature Human Behaviour</em> by Ramadan, Tang, Watters, and Jazayeri has shed new light on why humans rely on these mentally expensive &#8220;what if&#8221; simulations, revealing computational constraints that force our brains into remarkably clever problem-solving strategies. Their findings illuminate human cognition and change how we understand intelligence itself.</p>
<h2>The Computational Mystery: Why Do We Think in &#8220;What Ifs&#8221;?</h2>
<p>From a purely computational standpoint, counterfactual reasoning seems inefficient. When facing complex decisions, optimal algorithms should simply compute the joint probability of all possible outcomes and pick the best option. So why do humans constantly engage in the seemingly wasteful exercise of imagining alternatives?</p>
<p>The answer, as Ramadan and colleagues discovered, lies in the fundamental constraints that shape how our brains process information. Using an ingenious H-maze task where participants had to track an invisible ball through branching pathways, they uncovered three critical computational bottlenecks that force human cognition into hierarchical and counterfactual strategies:</p>
<p><strong>1. Parallel Processing Bottleneck</strong>: Our brains cannot track all possible trajectories simultaneously. We must break complex problems into sequential, hierarchical steps.</p>
<p><strong>2. Counterfactual Processing Noise</strong>: When we engage in &#8220;what if&#8221; thinking, our working memory introduces noise that degrades the fidelity of these mental simulations.</p>
<p><strong>3. Rational Resource Allocation</strong>: Humans adaptively adjust their reliance on counterfactuals based on how much these mental simulations cost them.</p>
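<p>The second bottleneck can be illustrated with a toy sketch (my own, not the authors&#8217; model): the attended branch of the maze is read out with modest sensory noise, while the unattended branch must be simulated in working memory, which adds noise of its own. The function name and noise values are arbitrary.</p>

```python
import random

def track_ball(true_pos, sensory_noise=0.1, counterfactual_noise=0.5):
    """Two processing channels for the same hidden ball position:
    the factual (attended) branch gets a low-noise readout, while the
    counterfactual branch is simulated in working memory with extra noise."""
    factual = true_pos + random.gauss(0, sensory_noise)
    counterfactual = true_pos + random.gauss(0, sensory_noise + counterfactual_noise)
    return factual, counterfactual

random.seed(1)
factual_err = counterfactual_err = 0.0
for _ in range(2000):
    f, c = track_ball(0.0)
    factual_err += abs(f)
    counterfactual_err += abs(c)
# Averaged over trials, counterfactual estimates are far less precise.
```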
<h2>A Clever Use of Recurrent Neural Networks to Model Features of the Human Mind</h2>
<p>The research reveals profound insights about intelligence itself. When Ramadan et al. created artificial neural networks and subjected them to the same computational constraints humans face, something remarkable happened: only the networks constrained by all three bottlenecks reproduced human-like behavior.</p>
<p>This finding demonstrates the power of using recurrent neural networks to model human cognition. By constraining artificial networks with the same limitations that shape human thinking, Ramadan et al. created systems that behave remarkably like people. The key insight is that RNNs can capture mental processes like hierarchical and counterfactual reasoning when they face the same computational bottlenecks humans do.</p>
<h3>Neural Architecture of Counterfactual Reasoning</h3>
<p>The neural implementation of counterfactual reasoning tells a more complex story beyond frontal control. Van Hoeck and colleagues&#8217; landmark fMRI study revealed that counterfactual thinking engages a distributed network that hijacks the brain&#8217;s episodic memory system.</p>
<p>When participants imagined &#8220;upward counterfactuals&#8221; (better outcomes for negative past events), their brains activated the same core memory network used for remembering the past and imagining the future: hippocampus, posterior cingulate, inferior parietal lobule, lateral temporal cortices, and medial prefrontal cortex.</p>
<p>What makes counterfactual reasoning computationally expensive becomes clear in this neural architecture. Counterfactual thinking recruited these memory regions more extensively than episodic past or future thinking, and additionally engaged bilateral inferior parietal lobe and posterior medial frontal cortex.</p>
<p>The extra brain activity reflects just how demanding this kind of mental juggling really is: counterfactual reasoning requires simultaneously maintaining factual and counterfactual representations while actively inhibiting the dominant factual reality.</p>
<p>The brain has evolved specialized circuitry for tracking &#8220;what might have been.&#8221; Boorman and colleagues discovered that lateral frontopolar cortex, dorsomedial frontal cortex, and posteromedial cortex form a dedicated network for encoding counterfactual choice values: tracking not just what happened, but whether alternative options might be worth choosing in the future.</p>
<p>This network operates in parallel to the ventromedial prefrontal system that tracks the value of chosen options, suggesting that the brain maintains separate computational channels for factual and counterfactual value processing.</p>
<p>Perhaps most remarkably, recent work has shown that counterfactual information fundamentally transforms how the brain codes value itself. When counterfactual outcomes are available, medial prefrontal and cingulate cortex shift from absolute to relative value coding.</p>
<p>Think of it this way: losing $10 feels terrible if you could have won $50, but feels great if you could have lost $100. The same neural outcome is processed as positive in a loss context (absence of punishment) but negative in a gain context (absence of reward).</p>
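<p>That context dependence amounts to a one-line change in the value code. A minimal sketch (the function name is mine, not from the paper): code the outcome absolutely when no alternative is known, and relative to the forgone outcome when one is.</p>

```python
def coded_value(outcome, counterfactual=None):
    """Absolute value code without an alternative; relative code
    (obtained minus forgone) when a counterfactual outcome is known."""
    if counterfactual is None:
        return outcome               # absolute code
    return outcome - counterfactual  # relative code

relief = coded_value(-10, -100)  # losing $10 vs. losing $100: coded positive
regret = coded_value(-10, 50)    # losing $10 vs. winning $50: coded negative
```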
<p>This neural flexibility mirrors the adaptive computational strategies revealed in behavioral studies: the brain dynamically reconfigures its representational schemes based on available information and processing constraints.</p>
<p>These findings illuminate why counterfactual reasoning is both computationally expensive and evolutionarily preserved. The enhanced neural demands reflect genuine computational costs: maintaining multiple alternative representations, binding novel scenario elements, and managing conflict between factual and counterfactual worlds. Yet this system enables the kind of flexible, context-sensitive reasoning that allows humans to learn from paths not taken and adapt behavior based on imagined alternatives.</p>
<h2>The Bounded Rationality Renaissance</h2>
<p>These discoveries are part of a broader renaissance in understanding <strong>bounded rationality</strong>, the idea that intelligent behavior emerges not from perfect optimization, but from smart adaptations to computational limitations.</p>
<p>Herbert Simon&#8217;s revolutionary concept of bounded rationality challenged the assumptions of perfect rationality in classical economic theory, proposing instead that individuals &#8220;satisfice&#8221; (seeking good enough solutions rather than optimal ones) due to limitations in computation, time, information, and cognitive resources.</p>
<p>Simon&#8217;s work recognized that &#8220;perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them.&#8221; This insight has profound implications for both understanding human cognition and designing artificial intelligence systems.</p>
<h3>The Bigger Picture</h3>
<p>The Ramadan study reveals something profound: the cognitive strategies we think of as distinct (hierarchical reasoning, counterfactual thinking, simple optimization) actually lie along a continuum. Human intelligence dynamically shifts between these approaches based on available mental resources and task demands.</p>
<p>This has implications beyond neuroscience. If counterfactual reasoning emerges from computational constraints rather than being hardwired, it suggests these &#8220;what if&#8221; processes might be fundamental to any sufficiently complex intelligence, biological or artificial.</p>
<h2>Clinical Frontiers: When Counterfactuals Break Down</h2>
<p>From a clinical perspective, this research offers new windows into psychiatric and neurological conditions. Counterfactual reasoning depends on integrative networks for affective processing, mental simulation, and cognitive control. These are systems that are systematically altered in psychiatric illness and neurological disease.</p>
<p>Consider a patient with OCD who gets trapped in endless loops of &#8220;what if I didn&#8217;t check the door?&#8221; or someone with depression whose counterfactual thinking spirals into &#8220;if only I were different, everything would be better.&#8221; Understanding the computational basis of these patterns could lead to more targeted therapeutic approaches.</p>
<p>Patients with schizophrenia show specific deficits in counterfactual reasoning when complex non-factual elements are needed to understand social environments. By mapping how these computational processes break down, we&#8217;re gaining new tools for both diagnosis and treatment.</p>
<h2>The Bottom Line: Constraints as Features</h2>
<p>The story of counterfactual reasoning is a story about the power of constraints. What initially appears to be a computational limitation (our inability to process all information in parallel) turns out to be the very foundation of human cognitive flexibility.</p>
<p>The human brain&#8217;s &#8220;what if&#8221; engine represents an elegant solution that emerges from the interplay between computational constraints and adaptive intelligence. As we stand on the brink of artificial general intelligence, perhaps the secret lies not in building systems that can process everything at once, but systems that can gracefully adapt to the fundamental constraints that shape all intelligence.</p>
<p>The future of AI may not lie in eliminating human limitations, but in understanding why those limitations exist and what remarkable capabilities they make possible.</p>
<hr />
<p><em>This convergence of neuroscience, cognitive science, and AI represents a fundamental shift in how we understand intelligence. Rather than seeing computational constraints as problems to solve, we&#8217;re beginning to recognize them as the very features that make flexible, adaptive intelligence possible. The brain&#8217;s &#8220;what if&#8221; engine may be a blueprint for the next generation of truly intelligent machines.</em></p>
<p>The next time you wonder what might have been, remember: that question may be the very core of what makes you human.</p>
<hr />
<h2>Bibliography</h2>
<p>Boorman, E. D., Behrens, T. E., &amp; Rushworth, M. F. (2011). Counterfactual choice and learning in a neural network centered on human lateral frontopolar cortex. <em>PLoS Biology</em>, 9(6), e1001093.</p>
<p>Pischedda, D., Palminteri, S., &amp; Coricelli, G. (2020). The effect of counterfactual information on outcome value coding in medial prefrontal and cingulate cortex: From an absolute to a relative neural code. <em>Journal of Neuroscience</em>, 40(16), 3268-3277.</p>
<p>Ramadan, M., Tang, C., Watters, N., &amp; Jazayeri, M. (2025). Computational basis of hierarchical and counterfactual information processing. <em>Nature Human Behaviour</em>. doi:10.1038/s41562-025-02232-3.</p>
<p>Simon, H. A. (1955). A behavioral model of rational choice. <em>Quarterly Journal of Economics</em>, 69(1), 99-118.</p>
<p>Van Hoeck, N., Ma, N., Ampe, L., Baetens, K., Vandekerckhove, M., &amp; Van Overwalle, F. (2013). Counterfactual thinking: An fMRI study on changing the past for a better future. <em>Social Cognitive and Affective Neuroscience</em>, 8(5), 556-564.</p>
<p>Van Hoeck, N., Watson, P. D., &amp; Barbey, A. K. (2015). Cognitive neuroscience of human counterfactual reasoning. <em>Frontiers in Human Neuroscience</em>, 9, 420.</p>
<p>Zador, A., Escola, S., Richards, B., et al. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. <em>Nature Communications</em>, 14, 1597.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Next Chapter of AI: Leveraging the Evolutionary Principles Powering Human Intelligence</title>
		<link>https://michaelhalassa.net/neuroai2025/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Thu, 17 Jul 2025 09:04:13 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Cognitive Processing]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Brain scientist]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=772</guid>

					<description><![CDATA[Michael Halassa explores the intersection between Neuroscience and AI (NeuroAI) highlighting research on flexible cognition.]]></description>
										<content:encoded><![CDATA[<p>A mouse can explore a new environment, find food and adapt when the rules change, all using less energy than a lightbulb. Meanwhile, our most powerful computers can solve chess and master protein folding, but still can’t walk across a messy room without crashing into a chair.</p>
<p>This contrast reveals something profound about intelligence itself and where we need to go next. As we celebrate Geoffrey Hinton and John Hopfield&#8217;s recent Nobel Prize in Physics for their foundational work on neural networks, it&#8217;s the perfect time to ask: what&#8217;s the next chapter in understanding intelligence?</p>
<p><strong>The Great Intelligence Paradox</strong></p>
<p>We&#8217;re living through what some call the &#8220;Great Intelligence Paradox.&#8221; Our most advanced computational systems can master protein folding and beat world champions at Go, tasks that require incredible sophistication. But they&#8217;re surprisingly brittle when faced with the kind of flexible, real-world intelligence that any animal takes for granted.</p>
<p>Consider this: no machine can build a nest, forage for berries, or care for young. Today&#8217;s computational systems cannot compete with the sensorimotor capabilities of a four-year-old child or even simple animals. The reason isn&#8217;t that we lack computational power. It&#8217;s that we&#8217;ve been approaching intelligence from a different angle.</p>
<p>As researcher Hans Moravec put it, abstract thought &#8220;is a new trick, perhaps less than 100 thousand years old, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge.&#8221; In other words, when trying to capture natural intelligence, we&#8217;ve been focusing on the penthouse without first understanding the foundation.</p>
<p><strong>The Deep History of NeuroAI: A 70-Year Symbiosis</strong></p>
<p>This realization has sparked the emergence of NeuroAI, a field that recognizes something remarkable: evolution has already solved many of the problems we&#8217;re struggling with in artificial intelligence. But the connection between neuroscience and computing isn&#8217;t new. It can be traced to the very foundations of modern computer science itself.</p>
<p>John von Neumann&#8217;s seminal 1945 report outlining the first computer architecture (EDVAC) dedicated an entire chapter to discussing whether the proposed system was sufficiently brain-like. Remarkably, the only citation in this foundational document was to Warren McCulloch and Walter Pitts&#8217; 1943 paper, widely considered the first work on neural networks. This early cross-pollination between neuroscience and computer science set the stage for decades of mutual inspiration.</p>
<p>The relationship deepened with Frank Rosenblatt&#8217;s introduction of the perceptron in 1958. The revolutionary idea here wasn&#8217;t just that machines could learn, but that they should learn from data rather than being explicitly programmed. Rosenblatt established synaptic connections as the primary locus of learning in artificial neural networks, a concept heavily influenced by Donald Hebb&#8217;s 1949 work highlighting the importance of the synapse as the physical basis of learning and memory.</p>
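<p>Rosenblatt&#8217;s rule can be written in a few lines: the weights, the artificial &#8220;synapses,&#8221; are the only quantities that change during learning. A minimal sketch, with a toy OR task and learning rate chosen for illustration rather than taken from Rosenblatt&#8217;s original report:</p>

```python
import numpy as np

# Sketch of Rosenblatt's perceptron rule: the synaptic weights are the
# only quantities that change during learning, nudged whenever the
# prediction is wrong.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred          # zero when correct: no update
            w += lr * error * xi           # synaptic update
            b += lr * error
    return w, b

# Linearly separable toy example: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

<p>Because the data are linearly separable, the rule converges and classifies all four inputs correctly, learning entirely from examples rather than explicit programming.</p>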
<p>This neuroscience-inspired principle that synapses are the plastic elements of neural networks has remained absolutely central to modern computation. Even when Marvin Minsky and Seymour Papert&#8217;s 1969 critique of perceptrons triggered the first &#8220;neural network winter,&#8221; the core insight persisted.</p>
<p>The symbiosis between artificial and biological neural network research has produced numerous breakthrough success stories. Perhaps the most celebrated is the convolutional neural network (CNN), which powers many of today&#8217;s most successful artificial vision systems. CNNs were directly inspired by David Hubel and Torsten Wiesel&#8217;s model of the visual cortex, work that earned them a Nobel Prize more than four decades ago.</p>
<p>Another home run is reinforcement learning, which has driven groundbreaking achievements including Google DeepMind&#8217;s AlphaZero and AlphaGo. The computational principles underlying these systems mirror the dopamine-mediated learning circuits in biological brains. When a monkey reaches for a reward and receives more than expected, dopamine neurons fire in patterns that precisely match the temporal difference learning algorithms used in these game-playing systems.</p>
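<p>The match between dopamine and temporal-difference (TD) learning can be made concrete with a minimal sketch. The states, reward magnitude, and learning rate below are illustrative; the key quantity is the prediction error <code>delta</code>, which starts large (the analogue of a dopamine burst to an unexpected reward) and shrinks as the cue&#8217;s value is learned:</p>

```python
# Minimal TD(0) value update, the algorithm whose error signal mirrors
# dopamine responses. States, reward, and learning rate are illustrative.
def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    # Reward prediction error: "more than expected" gives a positive
    # delta, the analogue of a dopamine burst.
    delta = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * delta
    return delta

V = {"cue": 0.0, "reward_state": 0.0}
deltas = []
for _ in range(50):                    # cue reliably precedes reward
    d = td_update(V, "cue", "reward_state", reward=1.0)
    deltas.append(d)
# The prediction error shrinks as the cue's value is learned,
# just as dopamine responses transfer away from fully predicted rewards.
```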
<p>More recently, the concept of &#8220;dropout&#8221; has gained prominence in artificial neural networks. This technique, in which individual neurons are randomly deactivated during training to prevent overfitting, draws inspiration from the brain&#8217;s use of stochastic processes. By mimicking the occasional misfiring of neurons, dropout encourages networks to develop more robust and resilient representations.</p>
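<p>A minimal sketch of the mechanism, using the &#8220;inverted dropout&#8221; convention common in modern frameworks (the drop probability and array sizes below are illustrative):</p>

```python
import numpy as np

# Sketch of dropout during training: each unit is silenced with
# probability p, mimicking stochastic neural failure. Inverted scaling
# keeps the expected activation unchanged.
def dropout(activations, p=0.5, rng=None, training=True):
    if not training:
        return activations             # no dropout at inference time
    rng = rng or np.random.default_rng(0)
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones(10_000)
dropped = dropout(a, p=0.5)
# Roughly half the units are zeroed, yet the mean activation stays ~1,
# so downstream layers see a consistent signal while being forced to
# avoid relying on any single unit.
```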
<p>Critically, this relationship is truly mutualistic, not parasitic. Computational advances have revolutionized neuroscience as much as neuroscience has inspired computation. Artificial neural networks now form the backbone of state-of-the-art models of the visual cortex. The success of these models in solving complex perceptual tasks has generated new hypotheses about how biological brains might perform similar computations.</p>
<p><strong>Why Animals Are the Ultimate Intelligence Teachers</strong></p>
<p>Instead of trying to replicate what makes humans special, we should look at what makes all animals successful. These are the capabilities that have been tested and refined over 500 million years of evolution.</p>
<p>This is where Tony Zador and his colleagues propose the &#8220;embodied Turing test.&#8221; The idea is straightforward but profound: instead of asking whether a machine can fool us in conversation, we should ask whether an artificial beaver can build a dam as skillfully as a real one, or whether an artificial squirrel can navigate through trees with the same agility.</p>
<p>This shift in perspective reveals three crucial capabilities that current computational systems lack:</p>
<p><strong>They Engage Their Environment</strong></p>
<p>The defining feature of animals is their ability to move around and interact with their environment in purposeful ways. It&#8217;s about understanding how actions affect the world and using that understanding to achieve goals.</p>
<p>Consider the computational challenge this represents. When you watch a cat stalking prey, you&#8217;re witnessing real-time integration of visual tracking, motor prediction, uncertainty estimation, and action selection. The cat must predict the prey&#8217;s trajectory, estimate the optimal interception point, account for its own motor delays, and continuously update its strategy as the situation evolves. This requires what computational scientists call forward models, inverse models, and optimal control, all running simultaneously in a brain that weighs 30 grams.</p>
<p>Or take nest building in birds. A Baltimore oriole weaves together hundreds of individual grass fibers, each requiring precise motor control and spatial reasoning. The bird must estimate structural integrity in real-time, adapt to varying material properties, and maintain a global architectural plan while executing thousands of local actions. No current robotic system can approach this level of sensorimotor sophistication.</p>
<p><strong>They Behave Flexibly</strong></p>
<p>Animals are born with most of the skills needed to thrive or can rapidly acquire them from limited experience, thanks to their strong foundation in real-world interaction, courtesy of evolution and development. Unlike computational systems that catastrophically fail when encountering scenarios outside their training data, animals excel at handling novel situations by drawing on their general understanding of how the world works.</p>
<p>This flexibility emerges from what neuroscientists call compositional representation. Rather than memorizing specific stimulus-response patterns, animals build internal models of causal structure that can be recombined in novel ways. A squirrel encountering an unfamiliar tree can still navigate it by applying general principles of branch mechanics, gravity, and momentum.</p>
<p>Recent work by Rajalingham and colleagues has provided a striking demonstration of this principle. They trained monkeys to play &#8220;mental Pong,&#8221; where a ball disappeared behind a barrier and the animal had to predict where it would emerge. Neural recordings from the monkeys&#8217; frontal cortex revealed that the brain was running a mental physics engine, maintaining an internal trajectory that matched physical reality even when the ball was invisible.</p>
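<p>A cartoon of what such an internal model computes: once the ball disappears, its position is carried forward by simulation, here simply constant velocity with wall bounces. The geometry and numbers are illustrative, not from the Rajalingham et al. task:</p>

```python
# Cartoon of a "mental physics engine": while the ball is occluded, its
# position is maintained by an internal simulation (constant velocity
# with bounces off the top and bottom walls). Geometry is illustrative.
def simulate_ball(x, y, vx, vy, steps, y_min=0.0, y_max=1.0):
    for _ in range(steps):
        x += vx
        y += vy
        if y < y_min or y > y_max:     # bounce off a wall
            vy = -vy
            y = max(y_min, min(y, y_max))
        # position is tracked even though the ball is invisible
    return x, y

# Ball enters the occluder at (0.5, 0.125) moving up and to the right;
# predict where it will emerge ten timesteps later.
exit_x, exit_y = simulate_ball(0.5, 0.125, vx=0.0625, vy=0.125, steps=10)
```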
<p>Even more remarkably, when computational systems were trained on the same task but required to infer the ball&#8217;s hidden path, they produced patterns of activity that mirrored the monkey frontal cortex. This suggests that both biological and artificial systems converge on similar computational solutions when solving similar problems, but biological systems achieve this with far greater efficiency and flexibility.</p>
<p><strong>They Compute Efficiently</strong></p>
<p>Here&#8217;s a staggering comparison that reveals the depth of the efficiency gap: training a large language model such as GPT-3 requires over 1000 megawatt-hours, enough electricity to power a small town for a day. The human brain uses about 20 watts, roughly the same as a bright light bulb.</p>
<p>This efficiency gap points to fundamentally different computational principles. Biological circuits operate in a regime where spikes are sparse and energy-efficient, using asynchronous communication protocols that bear little resemblance to the synchronous, dense matrix operations that characterize current computational systems.</p>
<p>The brain achieves this efficiency through several key innovations. First, it uses event-driven computation, where neurons only consume energy when they have something important to communicate. Second, it employs local learning rules that don&#8217;t require global coordination or backpropagation of error signals. Third, it multiplexes different types of information in the same circuits, allowing the same neural hardware to support multiple functions depending on context.</p>
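<p>Event-driven computation is easy to illustrate with a leaky integrate-and-fire neuron: the membrane potential integrates input continuously, but a message (a spike) is emitted only when threshold is crossed, so most timesteps cost nothing in communication. All parameters below are illustrative:</p>

```python
# Sketch of event-driven computation with a leaky integrate-and-fire
# neuron: a spike occurs only when the membrane potential crosses
# threshold, so most timesteps involve no communication at all.
def lif_run(inputs, tau=20.0, threshold=1.0, dt=1.0):
    v = 0.0
    spikes = []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)    # leaky integration of input
        if v >= threshold:
            spikes.append(t)           # the only "message" sent
            v = 0.0                    # reset after the spike
    return spikes

# Weak constant drive: the neuron spikes sparsely despite receiving
# continuous input on every timestep.
spike_times = lif_run([0.08] * 200)
```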
<p>Recent advances in neuromorphic engineering are beginning to capture some of these principles. Intel&#8217;s Loihi chip and IBM&#8217;s TrueNorth processor implement spiking neural networks that dramatically reduce power consumption for certain tasks. But we&#8217;re still far from achieving the full computational elegance of biological systems.</p>
<p><strong>Our Research: Natural Architectures for Cognitive Flexibility</strong></p>
<p>This broader NeuroAI vision connects directly to collaborative research efforts my colleagues and I have been pursuing through the Thalamus Conte Center at Princeton. Working alongside talented investigators, we&#8217;ve been studying how thalamic circuits, particularly the mediodorsal thalamus, regulate uncertainty and cognitive flexibility.</p>
<p>The thalamus has long been thought of as a simple relay station, passively transferring information between brain regions. Our work reveals a far more sophisticated picture: the thalamus acts as a regulator of cortical representations, actively shaping the flow of information based on context, confidence, and computational demands.</p>
<p>Recent findings show that the mediodorsal thalamus exhibits distinct coding properties from prefrontal cortex. While prefrontal areas represent information in high-dimensional, mixed formats that can support many different behaviors, the thalamus compresses this information into lower-dimensional representations focused on key contextual variables like task rules and uncertainty estimates.</p>
<p>This architectural arrangement resembles what computational scientists call &#8220;regularization,&#8221; where a system constrains its processing to focus on the most relevant dimensions of a problem. The thalamus appears to provide this kind of regularization to prefrontal networks, helping them avoid getting lost in irrelevant details while maintaining the flexibility to handle novel situations.</p>
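<p>A cartoon of the compression idea: high-dimensional &#8220;cortical&#8221; activity that actually varies along only a few latent task variables can be summarized by a low-rank projection. This is an illustration of the regularization analogy, not a model of thalamic circuitry; the dimensions and noise level are arbitrary:</p>

```python
import numpy as np

# High-dimensional activity generated from 2 latent task variables
# (think: task rule and uncertainty), observed across 100 "neurons"
# with a little noise. All sizes are illustrative.
rng = np.random.default_rng(1)
latents = rng.standard_normal((500, 2))      # 2 latent task variables
mixing = rng.standard_normal((2, 100))       # how latents drive neurons
cortex = latents @ mixing + 0.05 * rng.standard_normal((500, 100))

# An SVD recovers the dominant low-dimensional structure: almost all of
# the variance lives in the first two components.
_, s, _ = np.linalg.svd(cortex - cortex.mean(0), full_matrices=False)
var_explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

<p>A readout restricted to those few dimensions ignores the remaining, mostly noise-driven directions, which is the sense in which a compressed representation acts like a regularizer.</p>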
<p>This has direct implications for understanding psychiatric disorders. Schizophrenia, for instance, involves difficulties with cognitive flexibility and context processing. Our work suggests that these may reflect specific disruptions in thalamic computation rather than global deficits in learning or reasoning.</p>
<p>Understanding how evolution solved the uncertainty problem in biological brains could be the key to creating computational systems that are truly adaptive and robust in the face of novel situations. Current systems struggle precisely because they lack principled ways to handle uncertainty and adjust their confidence based on context.</p>
<p><strong>The Road Ahead: From Lab to Life</strong></p>
<p>The implications of this NeuroAI approach extend far beyond academic laboratories. The convergence of insights from biological intelligence and computational innovation points toward systems that could:</p>
<p><strong>Adapt like animals</strong>: Robots that learn to navigate new environments with the flexibility of a mouse exploring new territory. Imagine search and rescue robots that can adapt to novel disaster scenarios, or autonomous vehicles that can handle completely unprecedented road conditions by drawing on fundamental principles of navigation and obstacle avoidance rather than memorized patterns.</p>
<p><strong>Learn efficiently</strong>: Systems that acquire new skills from limited examples, like how animals quickly adapt to new food sources or threats. A key insight from biological learning is the importance of strong inductive biases, the built-in assumptions that help guide learning in the right direction. Animals don&#8217;t start from scratch; they leverage millions of years of evolutionary optimization.</p>
<p><strong>Handle uncertainty gracefully</strong>: Systems that know when they don&#8217;t know, actively seeking information to improve their decisions rather than confidently making wrong choices. This requires implementing something like the thalamic uncertainty computation we&#8217;ve been studying, a principled way to calibrate confidence and adjust exploration strategies based on current knowledge state.</p>
<p><strong>Integrate seamlessly</strong>: Computation that works alongside humans as naturally as animals coordinate in flocks or herds. This requires understanding not just individual intelligence but collective intelligence, how multiple agents can share information and coordinate actions without centralized control.</p>
<p>Recent experimental work provides concrete examples of how these principles might be implemented. Researchers at DeepMind have developed systems that can learn to play multiple Atari games using the same general-purpose algorithm, rather than requiring game-specific training. Their success comes from incorporating biological principles like replay (reactivating and reorganizing memories during rest) and curiosity-driven exploration.</p>
<p>Similarly, researchers at OpenAI have shown that large language models can exhibit emergent reasoning capabilities when scaled up, suggesting that some aspects of flexible intelligence might emerge from sufficient computational scale combined with appropriate architectural principles.</p>
<p>But perhaps the most promising developments come from robotics, where researchers are beginning to implement embodied learning principles. Boston Dynamics&#8217; robots can navigate complex terrain and recover from perturbations in ways that would have been impossible just a few years ago. Their success comes from combining traditional control theory with machine learning approaches that can adapt to novel situations.</p>
<p><strong>A New Kind of Intelligence</strong></p>
<p>Building models that can pass the embodied Turing test requires more than tweaking existing algorithms. As Zador and colleagues argue, we need a &#8220;large-scale effort to identify and understand the principles of biological intelligence and abstract those for application in computer and robotic systems.&#8221;</p>
<p>Two key insights emerge from this challenge. First, intelligence isn&#8217;t about building internal representations; it&#8217;s about discovering affordances, the opportunities for action that emerge from the interaction between an agent and its environment. Second, animals don&#8217;t just learn; they develop, with their learning capabilities changing over time. Understanding how biological systems bootstrap from simple reflexes to sophisticated reasoning could transform how we build adaptive computational systems.</p>
<p>The convergence of neuroscience and computation offers concrete opportunities for progress. Animals solve computational problems that current systems struggle with, using principles refined over hundreds of millions of years of evolution. The mouse exploring a maze demonstrates flexible navigation, efficient learning from limited experience, and robust generalization. These capabilities emerge from biological circuits that balance exploration with exploitation, build and update internal maps, and adapt to novel situations.</p>
<p>Progress will require sustained collaboration between neuroscientists, computer scientists, and engineers. The questions are concrete: How do biological systems achieve such efficiency? What computational principles underlie adaptive behavior? How can we implement these in artificial systems?</p>
<p>Want to dive deeper into these ideas? Join us at CNS2025 in Florence, Italy (July 5-9, 2025) for our NeuroAI workshop, where we&#8217;ll explore how the convergence of neuroscience and computation is shaping the future of both fields. More details at cnsorg.org/cns-2025.</p>
<p><strong>References</strong></p>
<p>Zador, A., Escola, S., Richards, B., Ölveczky, B., Bengio, Y., Boahen, K., Botvinick, M., Chklovskii, D., Collins, A., Doya, K., Hassabis, D., Kording, K., Konidaris, G., Marblestone, A., Olshausen, B., Pouget, A., Sejnowski, T., Simoncelli, E., Solla, S., Sussillo, D., Tsao, D., &amp; Tsodyks, M. (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. <em>Nature Communications</em>, 14, 1597. https://doi.org/10.1038/s41467-023-37180-x</p>
<p>Zador, A. (2024). NeuroAI: A field born from the symbiosis between neuroscience and computation. <em>The Transmitter</em>. https://www.thetransmitter.org/neuroai/neuroai-a-field-born-from-the-symbiosis-between-neuroscience-ai/</p>
<p>Rajalingham, R., Sohn, H. &amp; Jazayeri, M. (2025). Dynamic tracking of objects in the macaque dorsomedial frontal cortex. <em>Nature Communications</em>, 16, 346. https://doi.org/10.1038/s41467-024-54688-y</p>
<p>Thalamus Conte Center. (2024). Princeton University. https://conte.thalamus.princeton.edu/</p>
<p>Hubel, D. H., &amp; Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat&#8217;s visual cortex. <em>The Journal of Physiology</em>, 160(1), 106-154.</p>
<p>von Neumann, J. (1945). First Draft of a Report on the EDVAC. University of Pennsylvania.</p>
<p>McCulloch, W. S., &amp; Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. <em>Bulletin of Mathematical Biophysics</em>, 5(4), 115-133.</p>
<p>Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. <em>Psychological Review</em>, 65(6), 386-408.</p>
<p>Hebb, D. O. (1949). <em>The Organization of Behavior: A Neuropsychological Theory</em>. Wiley.</p>
<p>Minsky, M., &amp; Papert, S. (1969). <em>Perceptrons: An Introduction to Computational Geometry</em>. MIT Press.</p>
<p>Moravec, H. (1988). <em>Mind Children: The Future of Robot and Human Intelligence</em>. Harvard University Press.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence</title>
		<link>https://michaelhalassa.net/machines-that-think-like-us/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Wed, 16 Jul 2025 21:07:04 +0000</pubDate>
				<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[Computational Neuroscience]]></category>
		<category><![CDATA[Digital Twins]]></category>
		<category><![CDATA[LLMs]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[Recurrent Neural Networks]]></category>
		<category><![CDATA[Transformers]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=763</guid>

					<description><![CDATA[Michael Halassa discusses recent insights from the NeuroAI workshop at the OCNS meeting in Florence 2025]]></description>
										<content:encoded><![CDATA[<p>On July 9th 2025, Z. Sage Chen (NYU) and I organized &#8220;The BRAIN 2.0 NeuroAI&#8221; workshop at the Organization for Computational Neurosciences (OCNS) annual meeting in Florence. The workshop brought together several scientists working at the intersection between Natural and Artificial Intelligence research. The energy was high and one could feel the enthusiasm in the air: for the first time in human history, we have machines that appear to learn, remember, and make decisions in ways that mirror core aspects of human cognition. This creates an unprecedented opportunity: we can now understand our minds by building artificial systems that think and behave like us.</p>
<p>The workshop conversations were bi-directional. In one direction, people asked: what can the strategies and mechanisms of artificial networks tell us about how we function? In another, we collectively asked: can we leverage what we are constantly learning about neuroscience to build better AI? After all, the energy efficiency and flexibility of animal brains is unmatched by state-of-the-art artificial agents.</p>
<p>This represents a remarkable shift from traditional approaches. Instead of studying brains and machines in isolation, we&#8217;re using them to inform each other. The artificial systems we create serve as hypotheses about how intelligence works, hypotheses we can test, modify, and refine in ways that would be impossible with biological systems alone. Throughout the workshop, a fascinating tension emerged: the most accurate models of neural activity may not be the most interpretable or biologically meaningful ones, a fundamental tradeoff that shapes how we understand minds.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!3V0d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!3V0d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!3V0d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3V0d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!3V0d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919 3696 4ae2 9f60" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3059275,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2555919-3696-4ae2-9f60-9b67c04b785f_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 25"></picture>
<div class="image-link-expand">
<div class="pencraft pc-display-flex pc-gap-8 pc-reset">
<div class="pencraft pc-reset icon-container view-image"></div>
</div>
</div>
</div>
</figure>
</div>
<p>Sage (left) and me (right)</p>
<h2 class="header-anchor-post">The Mystery of How the Brain Learns</h2>
<p>The development and application of backpropagation to deep networks created the watershed moments of the modern AI era. While backpropagation itself was developed earlier, Geoff Hinton&#8217;s 2006 work on deep belief networks and especially the 2012 AlexNet breakthrough that won the ImageNet competition marked the real turning points. The backpropagation algorithm lets neural networks learn by computing how errors in behavioral outputs should change weights throughout the network: error signals propagate backward through the network, telling each connection exactly how to adjust to reduce mistakes, with remarkable learning efficiency.</p>
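<p>A minimal sketch of the algorithm itself: a two-layer network trained with the chain rule, where the error at the output is sent backward to compute every weight&#8217;s update. The toy data, network size, and hyperparameters are illustrative choices:</p>

```python
import numpy as np

# Minimal backpropagation on a two-layer network. The output error is
# propagated backward through the chain rule to assign credit to each
# weight. Toy data and hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy target

W1 = rng.standard_normal((3, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5
lr = 0.5

for _ in range(500):
    # Forward pass
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid output
    # Backward pass: the chain rule carries the error signal backward
    d_out = (out - y) / len(X)              # gradient at the output (cross-entropy + sigmoid)
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1 - h ** 2)     # backprop through tanh
    dW1 = X.T @ d_h
    W1 -= lr * dW1
    W2 -= lr * dW2

accuracy = ((out > 0.5) == (y > 0.5)).mean()
```

<p>Note what the backward pass requires: the transposed weight matrices (<code>W2.T</code>) and a globally coordinated error signal, exactly the ingredients the brain appears to lack.</p>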
<p>Since then, backpropagation has become the backbone of modern deep learning, powering everything from image recognition to language models. But here&#8217;s the puzzle: this remarkable learning efficiency is likely mirrored by the brain, yet we don&#8217;t know what the analogous biological algorithm is. The brain cannot implement standard backpropagation, as it lacks the requisite backward connectivity and global, vectorized error signals that artificial networks rely on.</p>
<p>Traditional Computational Neuroscience has long proposed Hebbian learning (&#8220;cells that fire together, wire together&#8221;) as the brain&#8217;s learning mechanism. While Hebbian learning is biologically plausible and occurs throughout the nervous system, its classical formulations lack the credit assignment specificity of backpropagation. Hebbian learning can strengthen connections between simultaneously active neurons, but it struggles to determine which specific connections are responsible for errors in complex, multilayered networks. This creates a fundamental gap: the brain needs backpropagation-like credit assignment to learn complex behaviors, but it can only implement local plasticity rules.</p>
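<p>The contrast is easy to see in code. The classical Hebbian update uses only locally available quantities, presynaptic and postsynaptic activity; nothing in the rule says which synapse caused a downstream error. The firing statistics below are illustrative:</p>

```python
import numpy as np

# Classic Hebbian rule: dw = lr * pre * post. Connections between
# co-active units strengthen, but the rule is purely local: there is
# no error signal assigning blame to specific synapses.
rng = np.random.default_rng(0)
firing_probs = np.array([0.9, 0.9, 0.1, 0.1, 0.1])   # illustrative
pre = rng.random((1000, 5)) < firing_probs           # presynaptic spikes
post = pre[:, 0]                  # postsynaptic unit co-fires with input 0

w = np.zeros(5)
lr = 0.01
for x, y_post in zip(pre, post):
    w += lr * x * y_post          # local update: pre * post, nothing else

# The correlated inputs (0 and 1) end up with the strongest weights,
# with no notion of which connection "should" have been strengthened.
```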
<p>This gap was a central theme throughout our workshop, with speakers presenting different pieces of what might be a larger puzzle. Rui Ponte Costa&#8217;s work on different learning mechanisms across neural circuits presents a compelling case for partitioning the credit assignment problem across different substrates distributed throughout the brain. For example, the cortex learns via self-supervised learning but can be influenced by fast predictive subcortical machinery to adjust its representations quickly and flexibly. This parallels some of our own work on thalamocortical interactions, including our longstanding collaboration with Sage Chen&#8217;s lab. Gaspard Olivier presented his PhD work with Rafal Bogacz, showcasing the lab&#8217;s work on predictive learning achieving backpropagation-like performance (under certain conditions) using purely local mechanisms. Nao Uchida demonstrated how distributional reinforcement learning (where the brain represents entire probability distributions of future rewards rather than simple averages) could provide another piece of the credit assignment puzzle. The emerging picture suggests that the brain&#8217;s backpropagation parallel is likely a combination of these mechanisms working together, rather than any single biological algorithm.</p>
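<p>The distributional idea can be sketched with expectile-style estimators: a population of value estimates, each weighting positive and negative prediction errors differently, collectively encodes the shape of the reward distribution rather than just its mean. Parameters below are illustrative, loosely in the spirit of the distributional RL work associated with the Uchida lab:</p>

```python
import random

# Distributional value learning sketch: several estimators, each with a
# different asymmetry (tau) between positive and negative prediction
# errors. Optimistic units converge high, pessimistic ones low, so the
# population spans the reward distribution. Parameters are illustrative.
random.seed(0)

taus = [0.1, 0.3, 0.5, 0.7, 0.9]   # asymmetry of each estimator
values = [0.0] * len(taus)
lr = 0.05

for _ in range(5000):
    r = random.choice([0.0, 1.0])   # bimodal reward: 0 or 1, equally likely
    for i, tau in enumerate(taus):
        delta = r - values[i]
        # Expectile-style update: positive and negative errors are
        # scaled differently, producing optimistic/pessimistic codes.
        values[i] += lr * (tau if delta > 0 else (1 - tau)) * delta
```

<p>After learning, the estimates fan out across the range of outcomes instead of collapsing to the 0.5 average, which is how a population of asymmetric learners can represent an entire distribution of future rewards.</p>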
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!1tvn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 424w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 848w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1272w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="figure 1" src="https://substackcdn.com/image/fetch/$s_!1tvn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!1tvn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 424w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 848w, https://substackcdn.com/image/fetch/$s_!1tvn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1272w, 
https://substackcdn.com/image/fetch/$s_!1tvn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png 1456w" alt="figure 1" width="685" height="294" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5bc0a9cc-d5ec-4b89-8f70-9843e3fbce2f_685x294.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:294,&quot;width&quot;:685,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;figure 1&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" /></picture>
<div class="image-link-expand">
<div class="pencraft pc-display-flex pc-gap-8 pc-reset">
<div class="pencraft pc-reset icon-container view-image"></div>
</div>
</div>
</div>
</figure>
</div>
<p>From Lillicrap et al., 2020 Nat Rev Neurosci (Backpropagation and the brain)</p>
<h2 class="header-anchor-post">Rui Costa: Three Pillars of Brain-Like Intelligence</h2>
<p>Rui presented &#8220;three pillars&#8221; of intelligent systems:</p>
<p><strong>1. World Models (Unsupervised Learning)</strong>: The neocortex builds internal models of the world through self-supervised learning. Recent work suggests that local cortical circuits may have evolved specifically to support this kind of learning, with Layer 2/3 neurons predicting future inputs due to processing delays, while Layer 5 neurons integrate predictions from both the thalamus and cortical predictions.</p>
<p><strong>2. Model Fine-tuning (Reinforcement Learning)</strong>: Dopamine adjusts prefrontal cortex activity, thereby fine-tuning the world model that guides learning throughout the brain. This goes beyond classical RL formulations; it&#8217;s a sophisticated meta-learning system.</p>
<p><strong>3. Flexible Behavior</strong>: The cerebellum and hippocampus work together as predictive systems, with the cerebellum providing high-dimensional, fast predictions while the hippocampus offers more compressed, memory-based guidance. Remarkably, this work shows that combining a cerebellum-inspired system with a fixed RNN performs better on zero-shot learning tasks than purely plastic networks. This connects directly to our work: we have built a series of models (Aditya Gilra 2018, Ali Hummos 2022, Wei-Long Zheng 2024) that all rely on a similar mechanism of fixed PFC RNN and a fast subcortical modulator to enable flexibility (and maybe generalization). Importantly, the cerebellum communicates with prefrontal cortex through the mediodorsal thalamus, creating a pathway for rapid, predictive learning that doesn&#8217;t require extensive retraining of cortical circuits.</p>
<p>What makes Costa&#8217;s framework interesting is its grounding in optimization theory. Rather than describing these systems phenomenologically, he&#8217;s showing how they might emerge from the brain&#8217;s need to solve specific computational problems efficiently. His lab has demonstrated that cortical circuits can approximate deep learning algorithms, that the cerebellum enables rapid adaptation through &#8220;feedback decoupling,&#8221; and that cholinergic modulation implements a kind of attention mechanism for learning.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!vVtH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 26" src="https://substackcdn.com/image/fetch/$s_!vVtH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!vVtH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!vVtH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vVtH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F78baa7d1 4fde 4f9b 8ed3" width="1456" height="975" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/78baa7d1-4fde-4f9b-8ed3-4fa6d3edd61c_3509x2349.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:975,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1484751,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec01089e-a0d3-4d3a-84e8-0f323776ca15_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Rui Ponte Costa (Oxford)</p>
<h2 class="header-anchor-post">Tatiana Engel: Digital Twins and Latent Circuit Models</h2>
<p>Tatiana Engel, from Princeton&#8217;s Neuroscience Institute, presented groundbreaking work that spans two critical areas: the challenges of neural &#8220;digital twins&#8221; and her innovative latent circuit model approach.</p>
<p>Her work on &#8220;digital twins&#8221; (RNNs trained to reproduce neural population dynamics) revealed a fundamental limitation in how we think about brain models. When Engel&#8217;s team trained RNNs to match neural activity patterns, they found that these &#8220;twin&#8221; networks could reproduce the data beautifully. But when they tried to use these twins to predict the effects of neural perturbations, the results were not awesome, to put it mildly. Different twin networks that matched the same data equally well made completely different predictions about how the brain would respond to interventions.</p>
<p>This failure isn&#8217;t just a technical problem. It reveals something deep about the nature of biological intelligence. The brain operates in a low-dimensional space of meaningful solutions, while artificial networks explore the full high-dimensional space of possible solutions. Even when they converge on the same behavior, they&#8217;re often implementing completely different computational strategies.</p>
<p>Engel&#8217;s solution is elegant: instead of training twins to match all neural activity, train them to capture the essential low-dimensional structure that actually matters for computation. This &#8220;latent circuit&#8221; approach trades some descriptive accuracy for genuine predictive power. Her latent circuit model is a dimensionality reduction approach in which task variables interact via low-dimensional recurrent connectivity to produce behavioral output. Unlike traditional correlation-based dimensionality reduction methods, the latent circuit model incorporates recurrent interactions among task variables to implement the computations necessary to solve the task.</p>
<p>Crucially, Engel demonstrated that when you constrain RNNs to have fewer neurons, forcing them into lower-dimensional regimes, something remarkable happens: they begin to show more structured, interpretable behavior. However, this improvement in interpretability and biological plausibility comes with a tradeoff—there&#8217;s a reduction in their ability to perfectly match the complex, high-dimensional neural activity patterns. This finding highlights a fundamental tension in computational neuroscience: the most accurate models of neural activity may not be the most interpretable or biologically meaningful ones.</p>
<p>When applied to recurrent neural networks trained on context-dependent decision-making tasks, her latent circuit model revealed a suppression mechanism in which contextual representations inhibit irrelevant sensory responses. Most remarkably, when she applied the same method to prefrontal cortex recordings from monkeys performing the same task, she found similar suppression of irrelevant sensory responses—contrasting sharply with previous analyses using correlation-based methods that had found no such suppression.</p>
<p>The key insight is that dimensionality reduction methods that do not incorporate causal interactions among task variables are biased toward uncovering behaviorally irrelevant representations. Engel&#8217;s work demonstrates that incorporating the recurrent interactions that implement task computations is essential for identifying the neural mechanisms that actually drive behavior.</p>
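<p>In spirit (this is a hand-built toy, not the fitted model from Engel&#8217;s paper; every name and dimension here is hypothetical), the latent circuit idea can be sketched as a small recurrent system over task variables whose activity is embedded into the recorded neural space through an orthonormal map:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 100, 4, 200     # recorded neurons, latent circuit nodes, timesteps

# Latent circuit: a small RNN whose K nodes stand for task variables.
w_rec = rng.normal(0, 0.5 / np.sqrt(K), (K, K))
w_in  = rng.normal(0, 1.0, (K, 2))             # 2 task inputs (e.g. context, stimulus)
Q, _  = np.linalg.qr(rng.normal(size=(N, K)))  # orthonormal embedding into neural space

def simulate(inputs):
    """inputs: (T, 2) task variables; returns latent and 'neural' trajectories."""
    x = np.zeros(K)
    xs = []
    for u in inputs:
        x = np.tanh(w_rec @ x + w_in @ u)  # recurrent interactions among task variables
        xs.append(x)
    X = np.array(xs)      # (T, K) latent circuit activity
    Y = X @ Q.T           # (T, N) high-dimensional activity the circuit explains
    return X, Y

inputs = np.stack([np.ones(T), np.sin(np.linspace(0, 4 * np.pi, T))], axis=1)
X, Y = simulate(inputs)
```

<p>Fitting the real model means estimating w_rec, w_in, and Q from data; the mechanism (here, how task inputs interact through w_rec) is then read off the K-dimensional circuit rather than the N recorded neurons.</p>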
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!Cwf7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 27" src="https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!Cwf7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Cwf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F63d85df3 c570 4825 8263" width="1456" height="931" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63d85df3-c570-4825-8263-39a6885b449a_3778x2415.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:931,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1929120,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd84a89f3-4e86-428c-a6b7-b414bbb88a2c_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Tatiana Engel (Princeton)</p>
<h2 class="header-anchor-post">Miller&#8217;s Hybrid Models: Bridging AI and Classical Cognition</h2>
<p>Kevin Miller from DeepMind&#8217;s Neuroscience Lab presented work on hybrid neural-cognitive models that bridge classical cognitive frameworks with modern machine learning. Miller&#8217;s approach combines the interpretability of traditional cognitive models with the predictive power of neural networks, creating systems that can both explain and predict behavior.</p>
<p>His work addresses a fundamental challenge in computational cognitive science: classical cognitive models are interpretable but often limited in their predictive accuracy, while neural networks can achieve high performance but remain black boxes. Miller&#8217;s hybrid RNNs and disentangled architectures attempt to get the best of both worlds, maintaining the transparency needed for scientific understanding while achieving the performance necessary for practical applications.</p>
<p>The implications extend beyond just better models. As Miller noted, there may be an inherent tension between complexity and interpretability that reflects something fundamental about how we communicate and reason about intelligent systems. This connects to broader questions about whether the most accurate models of cognition are necessarily the ones we can understand and explain to others.</p>
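<p>To make the hybrid idea concrete (a minimal sketch of the general approach, not Miller&#8217;s actual architecture; every name below is invented), one can keep the skeleton of a classical delta-rule learning model but let a tiny network propose the per-trial update:</p>

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical cognitive model: two-armed bandit with a delta-rule value update.
def delta_rule(v, choice, reward, alpha=0.3):
    v = v.copy()
    v[choice] += alpha * (reward - v[choice])
    return v

# "Hybrid" version: the scalar update is produced by a small learnable function
# instead of a fixed equation; an untrained 1-hidden-layer net stands in here.
W1 = rng.normal(0, 1.0, (8, 2))   # inputs: (current value, reward)
w2 = rng.normal(0, 1.0, 8)

def learned_rule(v, choice, reward):
    v = v.copy()
    h = np.tanh(W1 @ np.array([v[choice], reward]))
    v[choice] += 0.1 * float(w2 @ h)   # network-proposed update, small and local
    return v

v = np.zeros(2)
for _ in range(100):
    choice = int(rng.random() < 0.5)
    reward = float(rng.random() < (0.8 if choice == 0 else 0.2))
    v = learned_rule(v, choice, reward)
```

<p>Fitting the small network to behavioral data keeps the model structure interpretable (one value per option, one local update per trial) while letting the data, rather than a fixed equation, determine the update function.</p>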
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!zwJ3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!zwJ3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!zwJ3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1b6aa09 1b92 4b6a 984b" width="3965" height="2510" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c1b6aa09-1b92-4b6a-984b-48384d742555_3965x2510.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2510,&quot;width&quot;:3965,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1977782,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68973274-5d15-4141-803c-549a6d4ebe5c_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 28"></picture>
</div>
</figure>
</div>
<p>Kevin Miller (Google DeepMind)</p>
<h2 class="header-anchor-post">Sen Song: Hierarchical Reasoning Models</h2>
<p>Sen Song presented compelling work on the Sapient project&#8217;s Hierarchical Reasoning Model (HRM), a novel recurrent architecture that challenges conventional approaches to AI reasoning. What makes this work particularly striking is its demonstration that recurrent transformers can achieve sophisticated reasoning capabilities that current large language models struggle with.</p>
<p>The HRM operates through two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations. This architecture is directly inspired by hierarchical processing in the brain, where different cortical areas operate at distinct timescales (slow theta waves for high-level planning and fast gamma waves for detailed processing).</p>
<p>What&#8217;s remarkable is the efficiency: with only about 1000 training examples, the HRM (~27M parameters) outperforms much larger Chain-of-Thought models on challenging benchmarks like the Abstraction and Reasoning Corpus (ARC), Sudoku-Extreme, and complex maze navigation tasks. The model solves these tasks directly from inputs without requiring explicit chain-of-thought supervision.</p>
<p>This work suggests something profound about the future of AI architecture. By introducing recurrence back into transformers, we might finally achieve what&#8217;s been missing in current LLMs: spontaneous activity and genuine thought-like processes. As Song noted, the recurrent dynamics could enable the kind of internal mental simulation that characterizes real reasoning, rather than just sophisticated pattern matching.</p>
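<p>Structurally (a toy sketch of the two-timescale idea only, with made-up sizes and no training loop), the architecture amounts to a fast module updated every step and a slow module updated every K steps, each conditioned on the other:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
D = 32          # state size of each module
K = 8           # low-level steps per high-level update

Wh = rng.normal(0, 1 / np.sqrt(2 * D), (D, 2 * D))  # slow module (sees fast state)
Wl = rng.normal(0, 1 / np.sqrt(3 * D), (D, 3 * D))  # fast module (sees slow state + input)

def hrm(x_seq):
    """Two coupled recurrent modules at different timescales (toy version)."""
    zh = np.zeros(D)    # slow, abstract planning state
    zl = np.zeros(D)    # fast, detailed computation state
    for t, x in enumerate(x_seq):
        zl = np.tanh(Wl @ np.concatenate([zl, zh, x]))   # fast update every step
        if (t + 1) % K == 0:                             # slow update every K steps
            zh = np.tanh(Wh @ np.concatenate([zh, zl]))
    return zh, zl

x_seq = rng.normal(size=(4 * K, D))
zh, zl = hrm(x_seq)
```

<p>In the actual HRM the modules are recurrent transformer blocks, and this slow/fast schedule is what lets a small network iterate toward a solution internally rather than externalizing every intermediate step as chain-of-thought text.</p>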
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!6mf7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6mf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6mf7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6mf7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!6mf7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29 dfaa 47d3 b725" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2461162,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c099b29-dfaa-47d3-b725-73ecbbf4bceb_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 29"></picture>
</div>
</figure>
</div>
<p>Sen Song (Tsinghua University) presenting Sapient</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!JywD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JywD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JywD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JywD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 30" src="https://substackcdn.com/image/fetch/$s_!JywD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!JywD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JywD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!JywD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JywD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9 001a 4530 9061" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3261676,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37acbcb9-001a-4530-9061-46894a6fa444_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Panel Discussion 1 (left to right: Song, Engel, Ponte Costa, Miller)</p>
<h2 class="header-anchor-post">Dan Levenstein: NeuroAI as Theory Development</h2>
<p>Dan Levenstein&#8217;s work may be among the clearest examples of how AI modeling can generate new theories about core brain functions like memory and planning. Dan presented his work with Blake Richards and Adrien Peyrache, which represents concrete progress in understanding how biological neural networks might actually implement sophisticated learning algorithms. Most significant is Dan&#8217;s recent bioRxiv paper with Peyrache and Richards on &#8220;Sequential predictive learning is a unifying theory for hippocampal representation and replay&#8221;. This work addresses one of the most fundamental questions in neuroscience: how does the hippocampus both form cognitive maps and generate the offline &#8220;replay&#8221; sequences that support memory consolidation and planning?</p>
<p>The breakthrough comes from training recurrent neural networks to predict egocentric sensory inputs as an agent moves through simulated environments. Levenstein and colleagues found that spatially tuned cells emerge from all forms of predictive learning, but offline replay only emerges when networks use recurrent connections and head-direction information to predict multi-step observation sequences. This promotes the formation of a continuous attractor that reflects the geometry of the environment (essentially a neural cognitive map).</p>
<p>What&#8217;s remarkable is that these offline trajectories showed wake-like statistics, autonomously replayed recently experienced locations, and could be directed by a virtual head direction signal. Networks trained to make cyclical predictions of future observation sequences were able to rapidly learn cognitive maps and produced sweeping representations of future positions reminiscent of hippocampal theta sweeps.</p>
<p>This work suggests that hippocampal theta sequences reflect a circuit that implements a data-efficient algorithm for sequential predictive learning. The framework provides a unifying theory that connects spatial representation, memory replay, and theta sequences under a single computational principle: the brain&#8217;s drive to predict future sensory experiences.</p>
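<p>The training setup can be caricatured as follows (a deliberately stripped-down sketch: the study trains the full network with backpropagation through time, whereas here only a linear readout learns, and all names and sizes are invented):</p>

```python
import numpy as np

rng = np.random.default_rng(3)
H, O = 64, 16     # hidden units, observation dimension

Wr = rng.normal(0, 1 / np.sqrt(H), (H, H)) * 0.1   # fixed recurrence (toy)
Wi = rng.normal(0, 0.1, (H, O + 2))                # input: observation + head direction
Wo = rng.normal(0, 0.1, (O, H))                    # trainable readout
lr = 1e-2

def train_step(obs_seq, hd_seq):
    """One pass of next-observation prediction with a delta rule on the readout."""
    global Wo
    h = np.zeros(H)
    loss = 0.0
    for t in range(len(obs_seq) - 1):
        u = np.concatenate([obs_seq[t], hd_seq[t]])
        h = np.tanh(Wr @ h + Wi @ u)
        pred = Wo @ h                        # predict the NEXT observation
        err = pred - obs_seq[t + 1]
        loss += float(err @ err)
        Wo -= lr * np.outer(err, h)
    return loss / (len(obs_seq) - 1)

T = 100
obs = rng.normal(size=(T, O))
hd = np.stack([np.cos(np.linspace(0, 2 * np.pi, T)),
               np.sin(np.linspace(0, 2 * np.pi, T))], axis=1)
losses = [train_step(obs, hd) for _ in range(20)]
```

<p>The paper&#8217;s claims concern what emerges when the observations come from movement through a structured environment: spatial tuning from prediction per se, and offline replay only when recurrence and head-direction input support multi-step prediction.</p>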
<p>What makes this work particularly significant is that it doesn&#8217;t require exotic new mechanisms. It leverages well-known properties of dendrites, synapses, and synaptic plasticity that already exist in cortical circuits. The burst-dependent plasticity rule essentially allows the brain to implement a form of top-down credit assignment that rivals artificial backpropagation algorithms, but using purely local, biologically plausible mechanisms.</p>
<p>This represents exactly the kind of theory development that NeuroAI enables: using optimization principles and machine learning insights to understand how evolution might have solved fundamental computational problems in neural circuits.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!VPBj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 31" src="https://substackcdn.com/image/fetch/$s_!VPBj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!VPBj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VPBj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VPBj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5 4207 4563 b6b6" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1756874,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95916ac5-4207-4563-b6b6-39bf982837a5_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}"></picture>
</div>
</figure>
</div>
<p>Couldn’t find a picture of Dan— but this slide easily wins the coolest slide award of the workshop (Dan is starting his lab at Yale in August!)</p>
<h2 class="header-anchor-post">Gaspard Olivier: Predictive Learning Frameworks</h2>
<p>Gaspard Olivier presented work from Rafal Bogacz&#8217;s recent Nature Neuroscience paper &#8220;Inferring neural activity before plasticity as a foundation for learning beyond backpropagation.&#8221; Olivier&#8217;s approach centers on the concept of energy machines (physical mechanical analogies that provide an intuitive understanding of how energy-based networks achieve sophisticated learning).</p>
<p>The key insight from Bogacz&#8217;s work, which Olivier built upon, is the principle of &#8220;prospective configuration.&#8221; Unlike backpropagation, which modifies weights first and then observes the resulting change in neural activity, prospective configuration works in reverse: neural activity changes first to match the desired output, and then synaptic weights are modified to consolidate this prospective activity pattern.</p>
<p>Olivier&#8217;s energy machine framework visualizes this process elegantly. In these mechanical systems, neural activity corresponds to the vertical position of nodes sliding on posts, synaptic connections correspond to rods connecting the nodes, and the energy function corresponds to the elastic potential energy of springs. When the system &#8220;relaxes&#8221; by minimizing energy, it naturally settles into the prospective configuration (the neural activity pattern that the network should produce after learning).</p>
<p>This framework solves a fundamental problem in biological learning: how to implement credit assignment without the precise backward information flow that backpropagation requires. As Olivier demonstrated, the relaxation process in energy-based networks inherently &#8220;foresees&#8221; the effects of potential weight changes and compensates for them dynamically, avoiding the catastrophic interference that plagues backpropagation.</p>
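<p>For readers who want to see the ordering concretely, here is a toy sketch of that two-phase process on a linear two-layer network. Everything below (network sizes, learning rates, the single training pattern, and the quadratic energy) is an illustrative choice of mine, not the formulation from the paper:</p>

```python
import numpy as np

# Minimal "prospective configuration" sketch on a linear two-layer network.
# Sizes, rates, and the toy pattern are illustrative, not the paper's setup.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 4, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

def train_step(x, y_target, relax_steps=50, alpha=0.1, lr=0.05):
    global W1, W2
    h = W1 @ x                         # start at the feedforward prediction
    for _ in range(relax_steps):       # 1) activity changes FIRST (relaxation)
        e1 = h - W1 @ x                # error against the bottom-up prediction
        e2 = y_target - W2 @ h         # error against the clamped target
        h += alpha * (W2.T @ e2 - e1)  # gradient descent on the energy w.r.t. h
    # 2) weights then consolidate the prospective activity, via local updates
    W1 += lr * np.outer(h - W1 @ x, x)
    W2 += lr * np.outer(y_target - W2 @ h, h)

x, y = np.array([1.0, 0.5, -0.3]), np.array([0.2, -0.4])
for _ in range(200):
    train_step(x, y)
print(np.linalg.norm(W2 @ (W1 @ x) - y))  # feedforward output converges to the target
```

<p>The crucial ordering is visible in <code>train_step</code>: the hidden activity relaxes toward the clamped target before any weight changes, and each weight update uses only quantities local to that connection.</p>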
<p>The practical implications are profound. Olivier&#8217;s work shows that energy-based learning can outperform backpropagation in biologically relevant scenarios like online learning, continual learning across multiple tasks, and learning with limited data (precisely the challenges that biological systems face). The energy machine framework provides both the theoretical foundation and the intuitive understanding for why evolution might have favored such learning mechanisms over more direct optimization approaches.</p>
<p>This represents a crucial piece of the credit assignment puzzle, demonstrating how the brain might implement sophisticated learning algorithms through local, energy-based computations that are both biologically plausible and computationally superior to artificial alternatives.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!DBon!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 424w, https://substackcdn.com/image/fetch/$s_!DBon!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 848w, https://substackcdn.com/image/fetch/$s_!DBon!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1272w, https://substackcdn.com/image/fetch/$s_!DBon!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" title="Inferring neural activity before plasticity as a foundation for learning beyond backpropagation | Nature Neuroscience" src="https://substackcdn.com/image/fetch/$s_!DBon!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!DBon!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 424w, https://substackcdn.com/image/fetch/$s_!DBon!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 848w, 
https://substackcdn.com/image/fetch/$s_!DBon!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1272w, https://substackcdn.com/image/fetch/$s_!DBon!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png 1456w" alt="Inferring neural activity before plasticity as a foundation for learning beyond backpropagation | Nature Neuroscience" width="1456" height="631" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5465ec7a-4b14-4f25-9e02-fe273eba180c_2168x939.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:631,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Inferring neural activity before plasticity as a foundation for learning beyond backpropagation | Nature Neuroscience&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" /></picture>
</div>
</figure>
</div>
<p>Couldn’t find a picture of Gaspard— so this is the cool figure from the paper and the framework he discussed</p>
<h2 class="header-anchor-post">Nao Uchida: Distributional Reinforcement Learning</h2>
<p>Nao Uchida&#8217;s presentation began with a fascinating origin story about how DeepMind reached out to him following the groundbreaking success of Deep Q-Networks (DQN). The 2015 Nature paper &#8220;Human-level control through deep reinforcement learning&#8221; had demonstrated that artificial agents could learn to play Atari games directly from pixel inputs, achieving superhuman performance across dozens of games. This breakthrough sparked intense interest in understanding whether similar computational principles might be operating in biological brains.</p>
<p>DeepMind&#8217;s collaboration with Uchida led to the landmark 2020 Nature paper &#8220;A distributional code for value in dopamine-based reinforcement learning&#8221; by Dabney, Kurth-Nelson, Uchida, and colleagues. This work revolutionized our understanding of how the brain represents value and reward. Rather than encoding just the mean expected reward (as traditional reinforcement learning theory suggested), Uchida&#8217;s team discovered that dopamine neurons encode entire probability distributions of future rewards.</p>
<p>The key insight was that different dopamine neurons have different &#8220;expectile codes&#8221;: some neurons are optimistic (responding more to positive prediction errors), others are pessimistic (responding more to negative prediction errors), and still others fall somewhere in between. This diversity in dopamine neuron responses, which had long puzzled neuroscientists, suddenly made computational sense: the brain wasn&#8217;t just learning average rewards, but was representing the full uncertainty and variability of future outcomes.</p>
<p>This distributional approach explains why dopamine neurons show such heterogeneous responses to the same stimuli. Rather than being noise or biological messiness, this diversity serves a crucial computational function. It allows the brain to represent not just &#8220;how much reward am I likely to get?&#8221; but &#8220;what&#8217;s the full range of possible rewards, and how likely is each outcome?&#8221;</p>
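<p>A simple way to see how a population of asymmetric learners tiles a reward distribution is the expectile update rule, in which each simulated &#8220;neuron&#8221; scales positive and negative prediction errors differently. The reward distribution, learning rate, and asymmetry values below are illustrative choices of mine, not fit to any data:</p>

```python
import numpy as np

# Expectile-style distributional update: each simulated "dopamine neuron" has an
# asymmetry tau; optimists (tau > 0.5) amplify positive prediction errors,
# pessimists (tau < 0.5) amplify negative ones. All numbers are illustrative.
rng = np.random.default_rng(1)
taus = np.linspace(0.05, 0.95, 9)
values = np.zeros_like(taus)   # each unit's learned reward expectile
lr = 0.02

for _ in range(20000):
    r = rng.choice([1.0, 9.0])                   # bimodal reward: small or large
    delta = r - values                           # per-unit prediction errors
    scale = np.where(delta > 0, taus, 1 - taus)  # asymmetric error scaling
    values += lr * scale * delta

print(values.round(2))
```

<p>Pessimistic units settle near the low reward, optimistic units near the high one, and the balanced unit near the mean: together the population encodes the shape of the distribution, not just its average.</p>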
<p>Uchida then pivoted to discuss his most recent Nature paper with Alexandre Pouget: &#8220;Multi-timescale reinforcement learning in the brain.&#8221; This work revealed another fundamental aspect of dopamine diversity: different dopamine neurons operate with different discount factors from temporal difference (TD) learning algorithms. Some neurons focus on immediate rewards (low discount factors), while others weigh longer-term consequences (high discount factors).</p>
<p>This discovery provides a neurobiological foundation for why humans and animals can balance immediate gratification with long-term planning. Instead of having a single, universal discount factor, the brain maintains a population of neurons with different temporal horizons, allowing for more flexible and adaptive decision-making.</p>
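<p>The multi-timescale idea is easy to sketch: a population of standard TD(0) learners that share the same experience but differ only in their discount factor. The chain task and parameter values below are my own toy example, not from the paper:</p>

```python
import numpy as np

# A population of TD(0) learners sharing the same experience, each with its own
# discount factor gamma. Toy task: a 5-state chain with reward 1 at the end.
gammas = [0.5, 0.9, 0.99]
n_states, lr = 5, 0.1
V = {g: np.zeros(n_states + 1) for g in gammas}  # index n_states = terminal

for _ in range(2000):                # repeated walks down the chain
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        for g in gammas:
            # Standard TD(0) update; only the discount factor differs per unit
            V[g][s] += lr * (r + g * V[g][s + 1] - V[g][s])

for g in gammas:
    print(g, round(V[g][0], 3))      # value of the start state converges to gamma**4
```

<p>For a reward four steps away, each learner converges to &#947;<sup>4</sup>: the myopic unit (&#947; = 0.5) values the start state at about 0.06, while the far-sighted unit (&#947; = 0.99) values it at about 0.96.</p>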
<p>What makes Uchida&#8217;s work particularly compelling in the NeuroAI context is that it represents systems neuroscience at its most computationally rigorous. Rather than simply describing what neurons do, his approach tests specific algorithmic hypotheses about how neural circuits implement learning and decision-making. This is precisely what systems neuroscience in the AI age should be: using computational theories to generate testable predictions about neural mechanisms, then using the results to refine both our understanding of the brain and our artificial intelligence algorithms.</p>
<p>The broader implications are profound: if the brain implements distributional reinforcement learning with multiple timescales, this suggests that current AI systems (which typically use single discount factors and mean-based value representations) are missing crucial computational advantages that biological systems have evolved. Understanding these biological algorithms could lead to more robust, adaptive, and efficient artificial intelligence systems.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!6x9k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!6x9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!6x9k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6x9k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!6x9k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7 0788 4817 97df" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2720742,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0119bf7-0788-4817-97df-6f3bf8a0ef0a_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 32"></picture>
</div>
</figure>
</div>
<p>Nao Uchida (Harvard)</p>
<h2 class="header-anchor-post">Looking Forward: Intelligence as Optimization</h2>
<p>What made Florence special wasn&#8217;t just the individual insights, but their convergence around a central theme: machines are becoming our best models for understanding minds not because they copy neural architecture, but because they solve the same optimization problems that evolution has been working on for millions of years.</p>
<p>Perhaps the most profound insight was recognizing a fundamental trade-off that shapes both artificial intelligence and human cognition: the tension between predictive power and interpretability. Recent work has shown that tiny recurrent neural networks, sometimes with just 1-4 units, can outperform much larger networks at predicting animal behavior. Yet the models that best predict behavior are often the hardest to understand, while the models we can understand often predict behavior poorly.</p>
<p>This &#8220;interpretability paradox&#8221; might be fundamental to how minds work. When you explain your decision-making process to a friend, you&#8217;re not giving them access to your neural network weights. You&#8217;re constructing a simplified, interpretable model that captures the essential logic while losing the messy details. Evolution may have equipped us with simple, communicable heuristics precisely because they&#8217;re interpretable, even though more complex processes actually drive our behavior.</p>
<p>Whether it&#8217;s Uchida&#8217;s work revealing how dopamine neurons encode probability distributions of rewards, Dan&#8217;s demonstrations that predictive learning can unify spatial representation and replay, or Costa&#8217;s three-pillar architecture showing how world models emerge from optimization principles, the common thread is that intelligence arises from solving computational problems efficiently under biological constraints.</p>
<p>This represents a profound shift in how we think about the relationship between brains and machines. We&#8217;re not trying to build artificial brains. We&#8217;re trying to understand the computational principles that both biological and artificial systems must discover to be intelligent. Instead of asking &#8220;How does the brain work?&#8221; researchers are asking &#8220;What computational problems does intelligence solve, and what are the optimal solutions?&#8221;</p>
<p>The implications extend beyond academic understanding. If biological systems have evolved superior learning algorithms like distributional reinforcement learning, prospective configuration, or hierarchical reasoning models, then incorporating these insights could lead to more sample-efficient, robust, and adaptable artificial intelligence systems.</p>
<p>As the workshop concluded, the participants seemed to recognize they were grappling with fundamental questions about intelligence that will likely shape the next decade of research. The conversation has only just begun, but the direction is becoming clearer: understanding intelligence requires understanding the optimization problems that both biological and artificial systems must solve.</p>
<p>Organizing this workshop with Sage was one of the most intellectually energizing experiences I&#8217;ve had. It reinforced the idea that if we want to understand intelligence—biological or artificial—we need both neurons and networks, both brains and machines.</p>
<div class="captioned-image-container">
<figure>
<div class="image2-inset">
<picture><source srcset="https://substackcdn.com/image/fetch/$s_!P2Zb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1456w" type="image/webp" sizes="100vw" /><img loading="lazy" decoding="async" class="sizing-normal" src="https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg" sizes="100vw" srcset="https://substackcdn.com/image/fetch/$s_!P2Zb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!P2Zb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg 1456w" alt="https%3A%2F%2Fsubstack post media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e fc95 497d 9d00" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3144041,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://michaelhalassa.substack.com/i/168085723?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d5aa56e-fc95-497d-9d00-58ad3a12ad11_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" title="Machines That Think Like Us: Insights from the NeuroAI Workshop in Florence 33"></picture>
</div>
</figure>
</div>
<p>Panel discussion 2. Left to right: honorary panelist Ken Miller (Columbia), Kevin Miller (DeepMind), Gaspard Oliviers (Oxford), Dan Levenstein (Montreal → Yale), Nao Uchida (Harvard), Sen Song (Tsinghua).</p>
<div>
<hr />
</div>
<p><em>The OCNS 2025 NeuroAI workshop took place July 9th, 2025 in Florence, Italy. The insights presented here represent the collective wisdom of dozens of researchers pushing the boundaries of our understanding of intelligence.</em></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>When the Brain&#8217;s Uncertainty Computer Goes Offline: New Human Evidence for Thalamic Regulation of Decision-Making</title>
		<link>https://michaelhalassa.net/thalamus-uncertainty-decisions/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Fri, 11 Jul 2025 04:32:22 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Cognitive Processing]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Michael Halassa]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[Prefrontal cortex]]></category>
		<category><![CDATA[Thalamocortical circuits]]></category>
		<category><![CDATA[Halassa]]></category>
		<category><![CDATA[MD thalamus]]></category>
		<category><![CDATA[Thalamocortical]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=754</guid>

					<description><![CDATA[New research reveals how the mediodorsal thalamus regulates decision-making confidence and exploration behavior in humans. Michael Halassa discusses breakthrough findings from focused ultrasound studies showing thalamic control of belief updating and cognitive flexibility.]]></description>
										<content:encoded><![CDATA[<h2>An Elegant Natural Experiment</h2>
<p>The study by Mackenzie et al. (2025, bioRxiv) represents a particularly clever approach to understanding human thalamic function. Rather than relying on correlational neuroimaging, the researchers capitalized on an unintended consequence of focused ultrasound thalamotomy for essential tremor. When post-surgical vasogenic edema extended beyond the intended motor target into cognitive thalamic regions, it created a rare opportunity to assess the causal contribution of different thalamic nuclei to decision-making behavior.</p>
<p>What makes this approach so powerful is the precision it affords. Patients served as their own controls, tested before and after surgery on a sophisticated decision-making paradigm that probes the exploration-exploitation trade-off under uncertainty.</p>
<h2>Computational Dissection of Behavioral Changes</h2>
<p>Using the restless four-armed bandit task, which requires continuous adaptation to changing reward contingencies, the researchers could probe multiple aspects of decision-making simultaneously. The task&#8217;s Gaussian random walk structure creates ongoing uncertainty, forcing participants to balance between exploiting currently favored options and exploring alternatives that might yield better outcomes.</p>
<p>The key innovation came from their computational modeling approach. Rather than simply observing that patients made more &#8220;stay&#8221; choices post-surgery, the authors fitted multiple reinforcement learning model variants to decompose the underlying decision processes. This revealed that the behavioral shift was best captured by a Bayesian learning model with increased reward sensitivity (β) but an eliminated exploration bonus—suggesting that patients weren&#8217;t simply perseverating, but had fundamentally altered confidence in their value estimates.</p>
<p>Most strikingly, when using their winning model to classify choice types, the researchers found a dramatic reduction in <strong>directed exploration</strong>—the strategic sampling of uncertain options to gain information (Mackenzie et al., 2025). This wasn&#8217;t random exploration or simple indecision, but the specific loss of information-seeking behavior that would normally help resolve uncertainty about option values.</p>
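<p>To make the modeling logic concrete, here is a toy Kalman-filter learner for a restless bandit in which a single parameter controls the uncertainty (directed-exploration) bonus. This is my own minimal sketch, not the authors&#8217; fitted model, and every parameter value is illustrative:</p>

```python
import numpy as np

def run(phi, seed=0):
    """Kalman-filter learner on a restless 4-armed bandit; returns #switches.

    phi weights an uncertainty bonus that drives directed exploration.
    All parameters are illustrative, not fitted values from the paper.
    """
    rng = np.random.default_rng(seed)
    n_arms, n_trials = 4, 300
    drift_sd, obs_sd = 2.0, 4.0
    mu = np.full(n_arms, 50.0)      # posterior means of arm payoffs
    var = np.full(n_arms, 100.0)    # posterior variances
    true = rng.normal(50, 10, n_arms)
    switches, prev = 0, None
    for _ in range(n_trials):
        # Directed exploration: value estimate plus an uncertainty bonus
        choice = int(np.argmax(mu + phi * np.sqrt(var)))
        switches += int(prev is not None and choice != prev)
        prev = choice
        reward = rng.normal(true[choice], obs_sd)
        k = var[choice] / (var[choice] + obs_sd**2)   # Kalman gain
        mu[choice] += k * (reward - mu[choice])
        var[choice] *= 1 - k
        true += rng.normal(0, drift_sd, n_arms)       # Gaussian random walk
        var += drift_sd**2                            # uncertainty grows each trial
    return switches

print(run(phi=1.0), run(phi=0.0))
```

<p>Setting the bonus weight to zero reproduces the qualitative post-lesion pattern: the agent still learns from rewards, but it stops sampling uncertain options and its switching collapses.</p>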
<h2>Anatomical Precision and Circuit Specificity</h2>
<p>The neuroimaging analysis provided crucial anatomical specificity. The degree of behavioral change correlated specifically with edema extension into the <strong>lateral (parvocellular) mediodorsal nucleus</strong>—not other thalamic regions including the intended surgical target (VIM). This specificity is important given the known functional subdivisions within MD:</p>
<ul>
<li><strong>Lateral MD → DLPFC/Frontal Pole</strong>: Dense reciprocal connectivity with areas involved in cognitive control and abstract rule formation</li>
<li><strong>Medial MD → OFC/vmPFC</strong>: Connections with valuation and reward-processing regions</li>
</ul>
<p>The functional connectivity analysis further supported this anatomical specificity. Using normative connectome data, individual patients&#8217; behavioral changes could be predicted from the connectivity profile between their lesioned MD voxels and prefrontal cortex—but only for MD, not other thalamic nuclei.</p>
<h2>Mechanistic Insights: From Confidence Calibration to Circuit Function</h2>
<p>The computational framework reveals something more nuanced than simple &#8220;inflexibility.&#8221; The post-lesion behavioral profile suggests a specific breakdown in <strong>uncertainty representation</strong>—what we might call miscalibrated confidence. When MD-PFC communication is compromised, the system appears to default to high confidence in existing value representations, reducing sensitivity to contradictory information.</p>
<p>This aligns with emerging theoretical frameworks positioning the MD thalamus as a critical node in hierarchical inference, helping to coordinate distributed computations for flexible and efficient learning (Scott et al., 2024). The loss of directed exploration particularly supports this view, as this behavior specifically emerges when agents are uncertain about their value estimates and seek information to reduce that uncertainty.</p>
<h2>Bridging Animal Models and Human Neuroscience</h2>
<p>The convergence with our rodent findings is striking:</p>
<p><strong>Animal Studies (MD inactivation/optogenetics)</strong>:</p>
<ul>
<li>Reduced flexibility in volatile environments</li>
<li>Animals fail to revise beliefs when contingencies change</li>
<li>Inflated certainty in action values</li>
<li>Deficit specific to belief updating, not initial learning</li>
</ul>
<p><strong>Human Study (accidental MD lesions)</strong>:</p>
<ul>
<li>Reduced switching in uncertain environments</li>
<li>Patients fail to explore when exploration would be beneficial</li>
<li>Increased confidence in value estimates</li>
<li>Preserved basic learning ability</li>
</ul>
<p>This cross-species convergence suggests we&#8217;ve identified a fundamental computational principle rather than a species-specific curiosity.</p>
<h2>Therapeutic Implications: Beyond Motor Applications</h2>
<p>The findings suggest intriguing therapeutic possibilities, particularly for disorders characterized by altered belief updating and confidence calibration. The demonstration that MD disruption leads to overconfident exploitation with reduced information-seeking offers a compelling framework for understanding psychiatric conditions where belief revision goes awry.</p>
<p>Consider schizophrenia, where patients often exhibit <strong>pathological certainty</strong> in delusional beliefs despite contradictory evidence. The current findings suggest a potential mechanism: if MD-PFC circuits that normally regulate confidence in beliefs become dysregulated, patients might lose the capacity for adaptive doubt that would otherwise prompt belief revision. The specific loss of directed exploration observed here—the strategic sampling of information to resolve uncertainty—parallels the clinical observation that individuals with psychosis often fail to seek disconfirming evidence for their beliefs.</p>
<p>This connects to broader hypotheses about uncertainty processing in cognitive control. Rather than viewing delusions simply as &#8220;false beliefs,&#8221; they might reflect a fundamental breakdown in the brain&#8217;s ability to appropriately weight confidence in its own predictions. When the system becomes overconfident in existing representations (as seen post-thalamotomy), it loses the motivation to gather information that might challenge those representations—a hallmark of delusional thinking.</p>
<h2>Broader Significance: Rethinking Thalamic Function</h2>
<p>This work contributes to a fundamental reconceptualization of thalamic function &#8211; from simple relay station to active computational processor. The thalamus isn&#8217;t just routing information; it&#8217;s dynamically modulating the confidence and precision of cortical computations based on behavioral context.</p>
<h2>Personal Reflection: When Theory Meets Unexpected Validation</h2>
<p>For our lab, this study is a rare treat: years of circuit-level investigation receiving independent validation from an entirely different methodology. The fact that this confirmation came through a clinical study aimed at treating human suffering makes it even more meaningful.</p>
<p>It&#8217;s the rare convergence where theory and evidence transform each other: the theory gains human causal validation, while the evidence gains mechanistic understanding. Together, they point toward a future where we might not just understand the circuits of adaptive decision-making, but actively repair them when they break.</p>
<p>The patients in this study, seeking relief from debilitating tremor, graciously contributed to our understanding of one of the brain&#8217;s most fundamental computations: how to balance confidence with curiosity. Their experience shows us what happens when certainty becomes a cage &#8211; when we lose the capacity to doubt ourselves when doubt would serve us best.</p>
<p><em>The work discussed builds on extensive research into thalamocortical circuits and decision-making, offering new insights into the neural mechanisms underlying adaptive behavior and potential therapeutic applications for disorders of motivation and cognitive flexibility.</em></p>
<p><strong>References:</strong></p>
<ul>
<li>Mackenzie, G., et al. (2025). Focused ultrasound neuromodulation of mediodorsal thalamus disrupts decision flexibility during reward learning. bioRxiv.</li>
<li>Scott, D.N., Mukherjee, A., Nassar, M.R., &amp; Halassa, M.M. (2024). Thalamocortical architectures for flexible cognition and efficient learning. Trends in Cognitive Sciences, 28(7), 639-652.</li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>PAPER ALERT: A New Model for Mediodorsal-Prefrontal Interactions</title>
		<link>https://michaelhalassa.net/paper-alert-a-new-model-for-mediodorsal-prefrontal-interactions/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Sun, 13 Apr 2025 16:23:54 +0000</pubDate>
				<category><![CDATA[Cognitive flexibility]]></category>
		<category><![CDATA[Computational neuroscience]]></category>
		<category><![CDATA[Halassa Lab]]></category>
		<category><![CDATA[Mediodorsal thalamus]]></category>
		<category><![CDATA[Neural circuits]]></category>
		<category><![CDATA[NeuroAI]]></category>
		<category><![CDATA[Prefrontal cortex]]></category>
		<category><![CDATA[Schizophrenia research]]></category>
		<category><![CDATA[Thalamocortical circuits]]></category>
		<category><![CDATA[Working memory]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=749</guid>

					<description><![CDATA[A new computational model, published in Nature Communications, reveals how the mediodorsal thalamus gates prefrontal cortex signals. Validated by Halassa Lab data, it advances schizophrenia and cognitive flexibility research.]]></description>
										<content:encoded><![CDATA[<p style="font-weight: 400;">In our ongoing quest to understand how the brain enables flexible cognition, the mediodorsal (MD) thalamus and its dialogue with the prefrontal cortex (PFC) have emerged as central players. Following a series of modeling papers from our lab—including Wei-Long Zheng’s recent <em>Nature Communications</em> work on thalamocortical inference—we now have another exciting advance to share. A new study led by <strong>Sage Chen’s lab at NYU</strong> and published in <em>Nature Communications</em> proposes a <strong>computational model of MD-PFC interactions</strong>, offering fresh insights into how these circuits support adaptive decision-making.</p>
<p style="font-weight: 400;">This collaborative work is driven by a burning question: <em>Why is the brain wired this way?</em> Why does the thalamus, nestled deep in the forebrain and reciprocally connected to cortex, play such a critical role in cognition? Our empirical work over the past decade has dissected thalamocortical dynamics in behaving animals, and our computational work, including critical collaborations, has helped us formalize these findings into testable frameworks. Sage’s new paper is a natural extension of this synergy—and with empirical support from our lab (spearheaded by <strong>postdoc Arghya Mukherjee</strong>), it opens new doors for exploration.</p>
<h2 style="font-weight: 400;"><strong>Key Advances in the New Model</strong></h2>
<ol style="font-weight: 400;">
<li><strong>The MD Thalamus as a Dynamic Router</strong><br />
The study presents the MD thalamus not just as a passive relay, but as an <strong>active switchboard</strong> that flexibly gates information to the PFC based on task demands. This aligns with our lab’s empirical observations that thalamic neurons selectively amplify sensory inputs or internal signals depending on behavioral context.</li>
<li><strong>Task-Dependent Cortical Prioritization</strong><br />
The model captures how the MD thalamus <strong>biases PFC representations</strong>—for example, emphasizing sensory cues during perceptual decisions versus memory traces during recall. This mirrors findings from our 2018 (<em>Rikhye, Gilra &amp; Halassa</em>) and 2022 (<em>Hummos et al.</em>) models, where thalamic input helped partition PFC activity to avoid interference across tasks.</li>
<li><strong>Bridging Theory and Experiment</strong><br />
Crucially, the model’s predictions were tested with <em>in vivo</em> data from our lab, reinforcing its biological plausibility. This back-and-forth between modeling and physiology is a hallmark of our approach, exemplified in Wei-Long Zheng’s 2024 study, where a thalamocortical RNN outperformed standard models in rapid inference tasks.</li>
</ol>
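<p style="font-weight: 400;">To make the routing idea concrete, here is a minimal toy sketch of context-dependent multiplicative gating. This is our own illustration, not code from the paper: the gate patterns, weights, and the "perception"/"recall" contexts are all hypothetical, chosen only to show how an MD-like gate can steer the same inputs into different PFC-like representations.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_pfc = 4, 8
W_in = rng.normal(size=(n_pfc, n_inputs))  # fixed input-to-PFC weights

# Hypothetical context-dependent thalamic gates: each task context
# opens a different subset of input channels to PFC.
gates = {
    "perception": np.array([1.0, 1.0, 0.0, 0.0]),  # pass sensory cues
    "recall":     np.array([0.0, 0.0, 1.0, 1.0]),  # pass memory traces
}

def pfc_response(inputs, context):
    """PFC-like activity when an MD-like gate routes inputs by context."""
    gated = gates[context] * inputs   # multiplicative gating of inputs
    return np.tanh(W_in @ gated)      # simple saturating nonlinearity

x = rng.normal(size=n_inputs)
r_percept = pfc_response(x, "perception")
r_recall = pfc_response(x, "recall")
# Same input, different routing: the two PFC patterns differ,
# partitioning activity across contexts rather than mixing them.
print(np.allclose(r_percept, r_recall))
```

<p style="font-weight: 400;">The design point is that the cortical weights never change; only the gate does, which is one way to avoid interference when tasks switch.</p>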
<h2 style="font-weight: 400;"><strong>How This Drives Our Empirical Work Forward</strong></h2>
<ol style="font-weight: 400;">
<li><strong>New Experiments to Test Gating Mechanisms</strong><br />
The model proposes specific thalamocortical connectivity rules for information routing. We’re now designing experiments to probe these mechanisms using <strong>optogenetics, electrophysiology, and imaging</strong>—asking how MD neurons dynamically recruit PFC microcircuits during task switching.</li>
<li><strong>Linking to Schizophrenia-Relevant Dysfunction</strong><br />
Disrupted thalamocortical gating is implicated in schizophrenia. By refining Sage’s model with disease-relevant perturbations (e.g., thalamic silencing), we aim to pinpoint how maladaptive routing contributes to cognitive inflexibility.</li>
<li><strong>The Next Generation of NeuroAI Models</strong><br />
Just as Wei-Long’s hybrid RNN incorporated biological constraints (e.g., thalamic reticular inhibition), future iterations of Sage’s model could integrate our latest empirical data—creating a virtuous cycle between theory and experiment.</li>
</ol>
<h2 style="font-weight: 400;"><strong>The Bigger Picture: A Decade of Thalamocortical Modeling of Cognitive Flexibility</strong></h2>
<p style="font-weight: 400;">This paper is the latest in a line of collaborative efforts to formalize MD-PFC interactions:</p>
<ul style="font-weight: 400;">
<li><strong>Rikhye, Gilra &amp; Halassa (2018)</strong>: Showed thalamus mitigates &#8220;catastrophic forgetting&#8221; in PFC.</li>
<li><strong>Hummos et al. (2022)</strong>: Derived a cortico-thalamic learning rule that compresses task context.</li>
<li><strong>Zheng et al. (2024)</strong>: Demonstrated thalamus enables rapid inference and multi-task performance.</li>
<li><strong>Zhang et al. (2025)</strong>: Extended this framework to hierarchical reasoning and handling multiple forms of uncertainty.</li>
</ul>
<p style="font-weight: 400;">Together, these studies underscore the thalamus’s role as a <strong>locus of cognitive flexibility</strong>—a theme Sage’s work now extends with elegant mechanistic detail.</p>
<p style="font-weight: 400;"><strong>Looking Ahead</strong></p>
<p style="font-weight: 400;">As NeuroAI gains momentum (evidenced by the 2024 Physics Nobel for foundational neural network work), our lab remains committed to <strong>grounding computational advances in biological reality</strong>. Sage’s model not only validates our empirical findings but also charts a course for future work—one where theory and experiment co-evolve to unravel the thalamus’s secrets.</p>
<p style="font-weight: 400;">For those who missed it, revisit our blog on Wei-Long Zheng’s paper here, and stay tuned as we put these models to the test!</p>
<p style="font-weight: 400;">Reference:</p>
<p style="font-weight: 400;">Zhang, X., Mukherjee, A., Halassa, M. M., &amp; Chen, Z. S. (2025). Mediodorsal thalamus regulates task uncertainty to enable cognitive flexibility. Nature communications, 16(1), 2640. <a href="https://doi.org/10.1038/s41467-025-58011-1" target="_blank" rel="noopener">https://doi.org/10.1038/s41467-025-58011-1</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Reflecting on the 13th Annual Tufts Neuroscience Symposium</title>
		<link>https://michaelhalassa.net/reflecting-on-the-13th-annual-tufts-neuroscience-symposium/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Thu, 06 Mar 2025 18:55:20 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=742</guid>

					<description><![CDATA[This past November, I had the privilege of directing the 13th Annual Tufts Neuroscience Symposium—a day filled with inspiring talks, lively discussions, and deep engagement across the neuroscience community. This year’s symposium centered around Systems, Computational, and Cognitive Neuroscience, featuring an exceptional lineup of speakers who brought diverse perspectives to our understanding of brain function. [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>This past November, I had the privilege of directing the 13th Annual Tufts Neuroscience Symposium—a day filled with inspiring talks, lively discussions, and deep engagement across the neuroscience community. This year’s symposium centered around <strong>Systems, Computational, and Cognitive Neuroscience</strong>, featuring an exceptional lineup of speakers who brought diverse perspectives to our understanding of brain function.</p>
<h2>A Day of Insightful Talks</h2>
<p>The symposium kicked off with <strong>Nao Uchida (Harvard)</strong> delivering a thought-provoking keynote on the role of dopamine in reinforcement learning. His talk shed light on <strong>circuit motifs underlying reward prediction errors</strong>, proposing a mechanism involving feedback and sign reversal of ventral striatal input to midbrain dopamine neurons. This framework offers a compelling way to think about how the brain computes reward-related signals.</p>
<p>Following Uchida, <strong>John Murray (Dartmouth)</strong> introduced the concept of <strong>task generalization through neural kernels</strong>—a powerful approach to understanding common frameworks for behavioral and neural generalization across humans and artificial intelligence models. His talk highlighted how computational methods can bridge gaps in our understanding of cognition.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-large wp-image-744" src="https://michaelhalassa.net/wp-content/uploads/michaelhalassa-net/sites/334/2025/03/467404938_1177500870494966_435017884391614n-768x1024.jpg" alt="13th Annual Tufts Neuroscience Symposium - Michael Halassa" width="768" height="1024" title="Reflecting on the 13th Annual Tufts Neuroscience Symposium 35"></p>
<p><strong>Shantanu Jadhav (Brandeis)</strong> then took us on a journey into <strong>hippocampal-prefrontal interactions in spatial learning and generalization</strong>. He presented compelling evidence that <strong>while frontal cortex representations generalize across tasks, the hippocampus maintains environment-specific maps</strong>, offering key insights into memory and decision-making processes.</p>
<p><strong>Anne Collins (UC Berkeley)</strong> provided a thought-provoking counterpoint to standard reinforcement learning models. Her research suggests that <strong>certain human cognitive functions are better explained by a combination of working memory and habitual behaviors</strong> rather than classic reinforcement learning frameworks. This perspective challenges prevailing theories and opens new directions for understanding human learning.</p>
<p><strong>Gina Kuperberg (Tufts)</strong> brought an exciting cognitive neuroscience perspective, exploring <strong>language learning through the lens of modern artificial intelligence</strong>. In an era dominated by large language models, her work examines how human linguistic processing aligns (or diverges) from AI-driven models—a particularly relevant topic in today’s rapidly evolving research landscape.</p>
<p>Closing the symposium, <strong>Sabine Kastner (Princeton)</strong> delivered the <strong>Shukart Lecture</strong>, offering a fascinating retrospective on her career studying <strong>the neural mechanisms of attention</strong>. She emphasized the critical role of the <strong>higher-order thalamus</strong> in attentional control, providing a synthesis of two decades of groundbreaking research.</p>
<h2>More Than Just Talks</h2>
<p>Beyond the scientific discussions, the symposium fostered <strong>community engagement</strong>: students introduced speakers, networking opportunities abounded, and attendees enjoyed valuable interactions with Tufts leadership. These moments underscore the importance of symposia not just as venues for presenting research but also as spaces for fostering collaboration, mentorship, and new ideas.</p>
<h3>Looking Ahead</h3>
<p>The success of this year’s symposium reaffirms the importance of interdisciplinary dialogue in neuroscience. As we push forward in understanding the brain, these events serve as a catalyst for <strong>new questions, collaborations, and discoveries</strong>. I look forward to seeing where these conversations lead and to many more engaging symposia in the future.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Paper Alert! Unlocking the Brain’s Flexibility: How the Thalamus Manages Uncertainty</title>
		<link>https://michaelhalassa.net/paper-alert-unlocking-the-brains-flexibility-how-the-thalamus-manages-uncertainty/</link>
		
		<dc:creator><![CDATA[michaelhalassa]]></dc:creator>
		<pubDate>Thu, 05 Dec 2024 16:50:09 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://michaelhalassa.net/?p=737</guid>

					<description><![CDATA[The brain’s ability to adapt to a constantly changing world is one of its most remarkable features. Cognitive flexibility—the capacity to shift strategies and update decision-making when circumstances change—is essential for navigating everyday life. This is a particularly difficult problem because the world does not come with an operating manual, and many of the signals we [&#8230;]]]></description>
										<content:encoded><![CDATA[<p style="font-weight: 400;">The brain’s ability to adapt to a constantly changing world is one of its most remarkable features. Cognitive flexibility—the capacity to shift strategies and update decision-making when circumstances change—is essential for navigating everyday life. This is a particularly difficult problem because the world does not come with an operating manual, and many of the signals we encounter are ambiguous. Yes, the world is constantly sending us mixed signals, so how do we know when to switch strategies? In our study, published in <em>Nature</em>, we discover neural processes that enable such adaptability and identify a critical role for the thalamus in uncertainty processing.</p>
<p style="font-weight: 400;"><strong>A Window into Uncertainty: The Prefrontal-Thalamic Connection</strong></p>
<p style="font-weight: 400;">Our work focuses on how the <strong>prefrontal cortex</strong> and <strong>thalamus</strong> interact to manage uncertainty and enable flexible behavioral responses. Using tree shrews as a model, we designed a hierarchical rule-switching task to test how these animals adapt their decisions in the face of conflicting or ambiguous cues. This task mirrors real-world decision-making scenarios, such as deciding whether a failed strategy is due to poor execution or a fundamental change in circumstances.</p>
<p style="font-weight: 400;">Tree shrews demonstrated remarkable flexibility in these tasks, which correlated with dynamic activity in the transthalamic circuit. Specifically, the thalamus appears to mediate uncertainty by distinguishing between errors caused by sensory noise and those signaling environmental shifts. This &#8220;uncertainty filter&#8221; ensures that the brain efficiently determines whether to persist with a chosen strategy or adapt to a new one.</p>
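<p style="font-weight: 400;">The logic of this "uncertainty filter" can be sketched with a toy calculation. This is a simplified illustration of the underlying inference problem, not the model used in the paper: given an error, how strongly should the brain believe the rule has switched rather than blame sensory noise? The noise level and switch rate below are arbitrary numbers chosen for the example.</p>

```python
# Toy illustration: after an error, weigh two explanations --
# sensory noise under the current rule vs. a rule switch.
def p_rule_switch(p_error_if_noise, hazard):
    """Posterior probability that an observed error reflects a rule
    switch rather than noise, given a prior switch rate (hazard)."""
    p_switch = hazard            # prior: the rule changed this trial
    p_stay = 1.0 - hazard        # prior: the rule is unchanged
    # A switch makes the old strategy wrong, so an error is certain;
    # under the old rule, errors arise only from noise.
    evidence_switch = p_switch * 1.0
    evidence_stay = p_stay * p_error_if_noise
    return evidence_switch / (evidence_switch + evidence_stay)

# Low noise: an error is strong evidence that the rule changed.
print(round(p_rule_switch(p_error_if_noise=0.05, hazard=0.1), 2))  # 0.69
# High noise: the same error is ambiguous; better to persist.
print(round(p_rule_switch(p_error_if_noise=0.5, hazard=0.1), 2))   # 0.18
```

<p style="font-weight: 400;">The same error thus warrants switching when the world is reliable but persistence when it is noisy, which is exactly the distinction the circuit must compute.</p>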
<h3>The complementarity of prefrontal and thalamic circuitry</h3>
<p style="font-weight: 400;">This role for the thalamus complements that of the prefrontal cortex. Prefrontal neurons exhibit <strong>mixed selectivity</strong>, the ability of single neurons to respond to multiple task-relevant features, allowing the brain to integrate information from diverse sources efficiently. This property is ubiquitous across species and brain regions, supporting tasks from basic sensory discrimination to complex decision-making. By leveraging mixed selectivity, the prefrontal cortex achieves scalable and flexible computations. For example, neurons may simultaneously encode both the degree of conflict in a task and the expected reward, enabling rapid and context-appropriate responses. However, this encoding scheme may come with limitations, both in controllability and in signal propagation. The paper's central finding is that the thalamus may demix cortical signals, thereby isolating different forms of uncertainty while also broadcasting these demixed signals between prefrontal areas. These distinct features of cortical and thalamic circuits are likely related to their architectural attributes—the cortex has internal recurrent excitatory connectivity, while the thalamus does not.</p>
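<p style="font-weight: 400;">A small numerical sketch, entirely our own illustration rather than the paper's analysis, shows why demixing is even possible: when "cortical" units carry nonlinear mixtures of two task variables, a simple linear readout can still recover one variable cleanly from the population. The variable names and random tuning below are hypothetical.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Two task variables per trial: cue conflict level and expected reward.
n_trials, n_units = 200, 50
conflict = rng.uniform(size=n_trials)
reward = rng.uniform(size=n_trials)

# Mixed-selective "cortical" units: each responds to a nonlinear
# blend of both variables, including their interaction.
a, b, c = rng.normal(size=(3, n_units))
cortex = np.tanh(np.outer(conflict, a)
                 + np.outer(reward, b)
                 + np.outer(conflict * reward, c))

# A linear readout (standing in for a demixing stage) pulls the
# conflict variable back out of the mixed population code.
coef, *_ = np.linalg.lstsq(cortex, conflict, rcond=None)
decoded = cortex @ coef
r = np.corrcoef(decoded, conflict)[0, 1]
print(round(r, 2))  # high correlation: the variable was recoverable
```

<p style="font-weight: 400;">The point is not the specific readout but the separation of labor it caricatures: mixing supports flexible cortical computation, while a downstream stage can isolate individual variables for routing.</p>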
<h3>Implications for Mental Health and Beyond</h3>
<p style="font-weight: 400;">Our findings extend beyond basic neuroscience, offering insights into cognitive disorders like <strong>schizophrenia</strong> and <strong>ADHD</strong>, where flexibility often breaks down. For instance, disruptions in transthalamic communication might underlie the rigid or maladaptive decision-making observed in these conditions. Understanding these mechanisms could inspire novel therapeutic interventions aimed at restoring adaptive decision-making in affected individuals.</p>
<p style="font-weight: 400;">In addition, this research highlights the thalamus as a critical node in cognitive networks—a stark contrast to its traditional view as a sensory relay center. By showing how the thalamus supports higher-order cognition, our study emphasizes the need for a paradigm shift in how we think about its role in the brain.</p>
<h3>Broader Implications for Neuroscience</h3>
<p style="font-weight: 400;">This study contributes to a growing recognition of the brain’s <strong>flexible networks</strong>—dynamic collaborations between regions that balance stability and adaptability. These findings align with previous research on thalamic contributions to attention and decision-making, suggesting that the thalamus might act as a “gatekeeper” for cognitive processes.</p>
<p style="font-weight: 400;">Moving forward, our research aims to explore how these circuits are modulated by neuromodulators like dopamine and acetylcholine, which are known to play roles in attention and learning. We also plan to investigate whether similar mechanisms operate in humans using advanced imaging and computational modeling techniques.</p>
<h3>From Laboratory to Life</h3>
<p style="font-weight: 400;">The translational potential of this research is immense. By understanding how the prefrontal-thalamic circuit processes uncertainty, we can design targeted interventions to improve decision-making in psychiatric disorders. These findings also inspire broader applications in artificial intelligence, where mimicking the brain’s adaptability could enhance machine learning algorithms.</p>
<h3>Closing Thoughts</h3>
<p style="font-weight: 400;">Our work provides a glimpse into the neural mechanisms that make cognitive flexibility possible. By showing how the prefrontal cortex and thalamus collaborate to resolve uncertainty, we hope to inspire future research into how these circuits can be harnessed to improve both mental health and technology.</p>
<p style="font-weight: 400;">This paper reflects years of collaboration and exploration, highlighting the power of basic neuroscience to answer profound questions about the human experience.</p>
<p style="font-weight: 400;">References:</p>
<p style="font-weight: 400;">The paper: Lam, N. H., Mukherjee, A., Wimmer, R. D., Nassar, M. R., Chen, Z. S., &amp; Halassa, M. M. (2024). Prefrontal transthalamic uncertainty processing drives flexible switching. <em>Nature</em>, 10.1038/s41586-024-08180-8. Advance online publication. <a href="https://doi.org/10.1038/s41586-024-08180-8" target="_blank" rel="noopener">https://doi.org/10.1038/s41586-024-08180-8</a></p>
<p><span style="font-weight: 400;">Media Coverage: <a href="https://now.tufts.edu/2024/11/13/teaching-ai-rules-brain" target="_blank" rel="noopener">https://now.tufts.edu/2024/11/13/teaching-ai-rules-brain</a></span></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
