Astrocytes: From Metabolism to Cognition
Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, freely existing apart from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.
The SIMple microscope: Development of a fibre-based platform for accessible SIM imaging in unconventional environments
Advancements in imaging speed, depth and resolution have made structured illumination microscopy (SIM) an increasingly powerful optical sectioning (OS) and super-resolution (SR) technique, but these developments remain inaccessible to many life science researchers due to the cost, optical complexity and delicacy of these instruments. We address these limitations by redesigning the optical path using in-line fibre components that are compact, lightweight and easily assembled in a “Plug & Play” modality, without compromising imaging performance. They can be integrated into an existing widefield microscope with a minimum of optical components and alignment, making OS-SIM more accessible to researchers with less optics experience. We also demonstrate a complete SR-SIM imaging system with dimensions 300 mm × 300 mm × 450 mm. We propose to make SIM imaging broadly accessible by exploiting this compact, lightweight and robust design to transport the system to where it is needed, and to image in “unconventional” environments where factors such as temperature and biosafety considerations currently limit imaging experiments.
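The abstract does not detail the OS-SIM reconstruction step, but the classic three-phase square-law demodulation (three widefield frames with the illumination grid shifted in phase) can be sketched as follows; the function name and array shapes are illustrative, not from the talk:

```python
import numpy as np

def os_sim_reconstruct(i1, i2, i3):
    """Optical sectioning from three widefield images taken with the
    illumination grid phase-shifted by 0, 2*pi/3 and 4*pi/3: square-law
    demodulation removes the unmodulated (out-of-focus) light."""
    i1, i2, i3 = (np.asarray(x, dtype=float) for x in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2)
```

A uniform out-of-focus background appears identically in all three frames and cancels exactly, while the in-focus, grid-modulated signal survives.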
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it (for example humans, animals, and plants), can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits within biophysics and therefore within physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Developmental and evolutionary perspectives on thalamic function
Brain organization and function is a complex topic. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus, are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss the developmental and evolutionary (comparative anatomy) data on the thalamus, and how it fits within forebrain networks. I will address questions including, when did the thalamus appear in evolution, how is the thalamus organized across the vertebrate lineage, and how can the change in the organization of forebrain networks affect behavioral repertoires.
Harnessing Big Data in Neuroscience: From Mapping Brain Connectivity to Predicting Traumatic Brain Injury
Neuroscience is experiencing unprecedented growth in dataset size, both within individual brains and across populations. Large-scale, multimodal datasets are transforming our understanding of brain structure and function, creating opportunities to address previously unexplored questions. However, managing this increasing data volume requires new approaches to training and technology. Modern data technologies are reshaping neuroscience by enabling researchers to tackle complex questions within a Ph.D. or postdoctoral timeframe. I will discuss cloud-based platforms such as brainlife.io that provide scalable, reproducible, and accessible computational infrastructure. Modern data technology can democratize neuroscience, accelerate discovery, and foster scientific transparency and collaboration. Concrete examples will illustrate how these technologies can be applied to mapping brain connectivity, studying human learning and development, and developing predictive models for traumatic brain injury (TBI). By integrating cloud computing and scalable data-sharing frameworks, neuroscience can become more impactful, inclusive, and data-driven.
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, underscoring the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change along another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions contain nearly none, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, I will present recent work extending this framework to the neural dynamics underlying the preparation of speech.
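The notion that one dimension of population activity can carry essentially all decodable information while others carry none can be illustrated with a toy simulation; every number, name, and distribution below is hypothetical and not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 400

# A single "coding dimension" carries the choice signal; all other
# dimensions carry only trial-to-trial noise.
coding_dim = rng.normal(size=n_neurons)
coding_dim /= np.linalg.norm(coding_dim)
labels = rng.integers(0, 2, n_trials)                  # 0 = left, 1 = right
activity = ((2 * labels - 1)[:, None] * coding_dim
            + rng.normal(scale=0.5, size=(n_trials, n_neurons)))

def decode_along(dim):
    """Decode choice from the sign of the projection onto one dimension."""
    return ((activity @ dim > 0).astype(int) == labels).mean()

acc_coding = decode_along(coding_dim)        # high accuracy

# A random dimension orthogonal to the coding dimension is uninformative.
rand = rng.normal(size=n_neurons)
rand -= (rand @ coding_dim) * coding_dim
rand /= np.linalg.norm(rand)
acc_random = decode_along(rand)              # near chance (~0.5)
```

The sketch only captures the decoding asymmetry, not the robustness-to-perturbation result, which requires dynamics and optogenetic manipulation.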
Decoding ketamine: Neurobiological mechanisms underlying its rapid antidepressant efficacy
Unlike traditional monoamine-based antidepressants that require weeks to exert effects, ketamine alleviates depression within hours, though its clinical use is limited by side effects. While ketamine was initially thought to work primarily through NMDA receptor (NMDAR) inhibition, our research reveals a more complex mechanism. We demonstrate that NMDAR inhibition alone cannot explain ketamine's sustained antidepressant effects, as other NMDAR antagonists like MK-801 lack similar efficacy. Instead, the (2R,6R)-hydroxynorketamine (HNK) metabolite appears critical, exhibiting antidepressant effects without ketamine's side effects. Paradoxically, our findings suggest an inverted U-shaped dose-response relationship where excessive NMDAR inhibition may actually impede antidepressant efficacy, while some level of NMDAR activation is necessary. The antidepressant actions of ketamine and (2R,6R)-HNK require AMPA receptor activation, leading to synaptic potentiation and upregulation of AMPA receptor subunits GluA1 and GluA2. Furthermore, NMDAR subunit GluN2A appears necessary and possibly sufficient for these effects. This research establishes NMDAR-GluN2A activation as a common downstream effector for rapid-acting antidepressants, regardless of their initial targets, offering promising directions for developing next-generation antidepressants with improved efficacy and reduced side effects.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
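The gating principle described above (store poorly predicted episodes, forget well-predicted ones, while a slower stream integrates an expectation across episodes) can be caricatured in a few lines; the threshold, learning rate, and input statistics below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, threshold, lr = 16, 2.0, 0.1    # invented numbers for illustration

def prediction_error(pattern, prototype):
    """Mismatch between an incoming episode and the generalized expectation."""
    return np.linalg.norm(pattern - prototype)

prototype = np.zeros(dim)            # slowly integrated, map-like expectation
stored = []                          # fast episodic store

for _ in range(100):
    episode = rng.normal(loc=1.0, scale=0.3, size=dim)
    if prediction_error(episode, prototype) > threshold:
        stored.append(episode)                   # poorly predicted: store it
    prototype += lr * (episode - prototype)      # integrate across episodes

n_stored = len(stored)
# Forget stored episodes that the matured expectation now predicts well.
stored = [m for m in stored if prediction_error(m, prototype) > threshold]
```

Early episodes are surprising and get stored; as the prototype converges, typical episodes are well predicted and bypass storage, and earlier stored copies can be pruned.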
PhenoSign - Molecular Dynamic Insights
Do You Know Your Blood Glucose Level? You Probably Should! A single measurement is not enough to truly understand your metabolic health. Blood glucose levels fluctuate dynamically, and meaningful insights require continuous monitoring over time. But glucose is just one example. Many other molecular concentrations in the body are not static. Their variations are influenced by individual physiology and overall health. PhenoSign, a Swiss MedTech startup, is on a mission to become the leader in real-time molecular analysis of complex fluids, supporting clinical decision-making and life sciences applications. By providing real-time, in-situ molecular insights, we aim to advance medicine and transform life sciences research. This talk will provide an overview of PhenoSign’s journey since its inception in 2022—our achievements, challenges, and the strategic roadmap we are executing to shape the future of real-time molecular diagnostics.
Learning Representations of Complex Meaning in the Human Brain
Neural architectures: what are they good for anyway?
The brain has a highly complex structure in terms of cell types and wiring between different regions. What is it for, if anything? I'll start this talk by asking what might an answer to this question even look like given that we can't run an alternative universe where our brains are structured differently. (Preview: we can do this with models!) I'll then talk about some of our work in two areas: (1) does the modular structure of the brain contribute to specialisation of function? (2) how do different cell types and architectures contribute to multimodal sensory processing?
Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive environment with ease and proficiency. This challenging feat is dependent on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out in the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
“Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller”
Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost and limited modularity of commercial Raman systems often constrain exploratory research, hindering broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside an open-source acquisition software, ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
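As an idealized sketch (not a full attractor network, and not code from any of the talks), the one-dimensional ring representation of heading can be illustrated by a bump of activity whose position integrates angular velocity; the neuron count, bump width, and velocity profile are arbitrary choices:

```python
import numpy as np

n = 32                                     # neurons on the ring
prefs = np.linspace(0.0, 2 * np.pi, n, endpoint=False)

def bump(theta, width=0.5):
    """Idealized population activity: a bump centred on the current heading."""
    return np.exp(np.cos(prefs - theta) / width)

def decode(activity):
    """Population-vector readout of the heading encoded on the ring."""
    return np.angle(np.sum(activity * np.exp(1j * prefs))) % (2 * np.pi)

# The bump's position integrates angular velocity over time.
theta, dt, omega = 0.0, 0.01, 1.0          # rad, s, rad/s
for _ in range(500):
    theta = (theta + omega * dt) % (2 * np.pi)

estimate = decode(bump(theta))             # recovers the integrated heading
```

The point of the sketch is the geometry: all states live on a one-dimensional ring manifold parameterized by heading, which is the structure the population-recording analyses validate.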
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
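Policy compression under an information bottleneck has a well-known fixed-point form, pi(a|s) proportional to p(a)·exp(beta·Q(s,a)), which can be iterated Blahut-Arimoto style; the two-state reward table below is a made-up toy, not data from the talks:

```python
import numpy as np

# Hypothetical 2-state, 2-action reward table, for illustration only.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0]])           # Q[s, a]
p_s = np.array([0.5, 0.5])           # state distribution

def compressed_policy(Q, p_s, beta, iters=100):
    """Iterate pi(a|s) ~ p(a) * exp(beta * Q[s, a]) to a fixed point.
    Small beta compresses the policy toward a state-independent "default"
    action distribution; large beta yields a reward-maximizing,
    state-specific policy."""
    p_a = np.full(Q.shape[1], 1.0 / Q.shape[1])     # marginal over actions
    for _ in range(iters):
        logits = np.log(p_a) + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = p_s @ pi                              # update action marginal
    return pi

loose = compressed_policy(Q, p_s, beta=10.0)   # near-deterministic policy
tight = compressed_policy(Q, p_s, beta=0.1)    # near-uniform default policy
```

The low-beta regime reproduces the perseverative, stimulus-independent "default" actions the framework predicts.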
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Metabolic-functional coupling of parvalbumin-positive GABAergic interneurons in the injured and epileptic brain
Parvalbumin-positive GABAergic interneurons (PV-INs) provide inhibitory control of excitatory neuron activity, coordinate circuit function, and regulate behavior and cognition. PV-INs are uniquely susceptible to loss and dysfunction in traumatic brain injury (TBI) and epilepsy but the cause of this susceptibility is unknown. One hypothesis is that PV-INs use specialized metabolic systems to support their high-frequency action potential firing and that metabolic stress disrupts these systems, leading to their dysfunction and loss. Metabolism-based therapies can restore PV-IN function after injury in preclinical TBI models. Based on these findings, we hypothesize that (1) PV-INs are highly metabolically specialized, (2) these specializations are lost after TBI, and (3) restoring PV-IN metabolic specializations can improve PV-IN function as well as TBI-related outcomes. Using novel single-cell approaches, we can now quantify cell-type-specific metabolism in complex tissues to determine whether PV-IN metabolic dysfunction contributes to the pathophysiology of TBI.
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Probing neural population dynamics with recurrent neural networks
Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via 2P Calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep-learning based dynamical systems models.
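LFADS itself is a deep sequential autoencoder and is not reproduced here; as a deliberately crude stand-in, the underlying idea (inferring low-dimensional latent dynamics from population spiking) can be sketched with PCA plus a linear dynamics fit on simulated data, with all numbers hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_neurons = 400, 40

# Ground truth: a 2-D latent rotation drives Poisson spiking.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z = np.zeros((T, 2))
z[0] = [1.0, 0.0]
for t in range(1, T):
    z[t] = A @ z[t - 1]
C = rng.normal(size=(2, n_neurons))            # latent-to-neuron loadings
spikes = rng.poisson(np.exp(0.5 * (z @ C)))

# Recover latents with PCA and fit linear dynamics by least squares.
X = spikes - spikes.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
z_hat = X @ Vt[:2].T                           # 2-D latent estimate
A_hat, *_ = np.linalg.lstsq(z_hat[:-1], z_hat[1:], rcond=None)
```

The fitted dynamics matrix gives one-step predictions of the latent trajectory; the autoencoding approach in the talk replaces this linear pipeline with nonlinear recurrent networks trained end-to-end on spikes.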
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space, underpinning wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, where the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal, abstract representational space of meaning.
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Charting the fetal development of neural complexity
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Mitochondrial diversity in the mouse and human brain
The basis of the mind, of mental states, and complex behaviors is the flow of energy through microscopic and macroscopic brain structures. Energy flow through brain circuits is powered by thousands of mitochondria populating every neuronal, glial, and other nucleated cell across the brain-body unit. This seminar will cover emerging approaches to study the mind-mitochondria connection and present early attempts to map the distribution and diversity of mitochondria across brain tissue. In rodents, I will present convergent multimodal evidence anchored in enzyme activities, gene expression, and animal behavior that distinct behaviorally-relevant mitochondrial phenotypes exist across large-scale mouse brain networks. Extending these findings to the human brain, I will present a developing systematic biochemical and molecular map of mitochondrial variation across cortical and subcortical brain structures, representing a foundation to understand the origin of complex energy patterns that give rise to the human mind.
Impact of personality profiles on emotion regulation efficiency: insights on experience, expressivity and physiological arousal
People are confronted every day with internal or external stimuli that can elicit emotions. In order to avoid negative emotions, or to pursue individual aims, emotions are often regulated. The available emotion regulation strategies have previously been described as efficient or inefficient, but many studies have highlighted that a strategy’s efficiency may be influenced by other factors, such as personality. In this project, the efficiency of several strategies (e.g., reappraisal, suppression, distraction, …) was studied according to personality profiles, using the Big Five personality model and the Maladaptive Personality Trait Model. Moreover, the strategies’ efficiency was tested across the main emotional responses, namely experience, expressivity and physiological arousal. Results mainly highlighted the differential impact of strategies on individuals and a slight impact of personality. An important factor, however, seems to be the emotion parameter under consideration, potentially revealing a complex interplay between strategy, personality, and the considered emotion response. Based on these outcomes, further clinical aspects and recommendations will also be discussed.
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in the brain-heart interaction. Our recent experimental evidence on patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients with an absence of behavioral signs of consciousness. These findings support hypotheses suggesting that visceral activity is involved in the neurobiology of consciousness, and add to the existing evidence in healthy participants in which the neural responses to heartbeats reveal perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These developments in methodologies to analyze brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, uncovering that peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000–15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic cave art of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions ('conversations') that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Towards Human Systems Biology of Sleep/Wake Cycles: Phosphorylation Hypothesis of Sleep
The field of human biology faces three major technological challenges. First, the causation problem: causal interventions are difficult in humans compared with model animals. Second, the complexity problem: a comprehensive cell atlas of the human body is still lacking. Third, the heterogeneity problem: genetic and environmental factors vary substantially among individuals. To tackle these challenges, we have developed innovative approaches. These include 1) mammalian next-generation genetics, such as Triple CRISPR for knockout (KO) mice and ES mice for knock-in (KI) mice, which enable causation studies without traditional breeding; 2) whole-body/brain cell-profiling techniques, such as CUBIC, to unravel the complexity of cellular composition; and 3) accurate and user-friendly technologies for measuring sleep and wake states, exemplified by ACCEL, to facilitate the monitoring of fundamental brain states in real-world settings and thus address heterogeneity in humans.
Gut/Body interactions in health and disease
The adult intestine is a major barrier epithelium and coordinator of multi-organ functions. Stem cells constantly repair the intestinal epithelium by adjusting their proliferation and differentiation to tissue-intrinsic as well as micro- and macro-environmental signals. How these signals integrate to control intestinal and whole-body homeostasis is largely unknown. Addressing this gap in knowledge is central to an improved understanding of intestinal pathophysiology and its systemic consequences. Combining Drosophila and mammalian model systems, my laboratory has discovered fundamental mechanisms driving intestinal regeneration and tumourigenesis and outlined complex inter-organ signaling regulating health and disease. During my talk, I will discuss inter-related areas of research from my lab, including: 1) interactions between the intestine and its microenvironment influencing intestinal regeneration and tumourigenesis; and 2) long-range signals from the intestine impacting the whole body in health and disease.
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
A recurrent network model of planning predicts hippocampal replay and human behavior
When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by -- and in turn adaptively affect -- prefrontal dynamics.
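The rollout idea described in the abstract can be caricatured in a few lines. The sketch below is not the authors' trained meta-RL agent: it is a minimal illustration in which an agent in a hypothetical gridworld samples imagined action sequences from a uniform policy, scores each by the closest it gets to the goal, and executes the first action of the best imagined sequence. All environment sizes, rollout counts and horizons are invented.

```python
import random

# Hypothetical toy gridworld; parameters are illustrative only.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
SIZE, START, GOAL = 5, (0, 0), (4, 4)

def step(pos, a):
    """Apply an action, clamping the position to the grid."""
    return (min(max(pos[0] + a[0], 0), SIZE - 1),
            min(max(pos[1] + a[1], 0), SIZE - 1))

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def rollout(pos, horizon, rng):
    """Imagine a random action sequence and score it by the closest
    distance to the goal reached anywhere along the imagined trajectory."""
    seq, p = [], pos
    best = manhattan(pos, GOAL)
    for _ in range(horizon):
        a = rng.choice(ACTIONS)
        seq.append(a)
        p = step(p, a)
        best = min(best, manhattan(p, GOAL))
        if p == GOAL:
            break
    return seq, best

def plan_and_act(n_rollouts=20, horizon=10, max_steps=400, seed=0):
    """At every step, sample imagined rollouts from the (uniform) policy
    and execute the first action of the best-scoring one."""
    rng = random.Random(seed)
    pos, steps = START, 0
    while pos != GOAL and steps < max_steps:
        candidates = [rollout(pos, horizon, rng) for _ in range(n_rollouts)]
        best_seq, _ = min(candidates, key=lambda c: c[1])
        pos = step(pos, best_seq[0])
        steps += 1
    return pos == GOAL, steps
```

Because rollouts are sampled from the agent's own (here uniform) policy, planning helps only when imagined sequences are informative; in the full model described above, the amount of planning is itself learned against the time cost of thinking.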
From controlled environments to complex realities: Exploring the interplay between perceived minds and attention
In our daily lives, we perceive things as possessing a mind (e.g., people) or lacking one (e.g., shoes). Intriguingly, how much mind we attribute to people can vary, with real people perceived to have more mind than depictions of individuals, such as photographs. Drawing from a range of research methodologies, including naturalistic observation, mobile eye tracking, and surreptitious behavior monitoring, I discuss how various shades of mind influence human attention and behaviour. The findings suggest the novel concept that overt attention (where one looks) in real-life is fundamentally supported by covert attention (attending to someone out of the corner of one's eye).
Diffuse coupling in the brain - A temperature dial for computation
The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
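Two of the communication models surveyed in such work, shortest-path routing and greedy "navigation", can be contrasted on a toy graph. The example below is hypothetical and not drawn from the Review: shortest paths assume global knowledge of the topology, whereas navigation forwards a signal to whichever neighbour is geometrically closest to the target using only local information, so it can never beat the shortest path and may fail outright.

```python
from collections import deque
import math

# Hypothetical toy "connectome": node -> 2D position, plus an edge list.
POS = {"A": (0, 0), "B": (1, 0), "C": (2, 0),
       "D": (0.6, 1), "E": (1.8, 1), "F": (3, 0)}
EDGES = [("A", "B"), ("B", "C"), ("C", "F"),
         ("A", "D"), ("D", "E"), ("E", "F")]

def neighbours(node):
    out = set()
    for u, v in EDGES:
        if u == node: out.add(v)
        if v == node: out.add(u)
    return sorted(out)

def shortest_path_length(src, dst):
    """Breadth-first search: hop count of the shortest path (global knowledge)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for n in neighbours(node):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return math.inf

def navigation_path_length(src, dst, max_hops=20):
    """Greedy navigation: hop to the neighbour spatially closest to the
    target (local knowledge only); may detour or fail to arrive."""
    def dist(a, b):
        (x1, y1), (x2, y2) = POS[a], POS[b]
        return math.hypot(x1 - x2, y1 - y2)
    node, hops = src, 0
    while node != dst and hops < max_hops:
        node = min(neighbours(node), key=lambda n: dist(n, dst))
        hops += 1
    return hops if node == dst else math.inf
```

On this graph both strategies take three hops from A to F, but by construction the navigation route can only equal or exceed the BFS route; richer models in the taxonomy (diffusion, communicability) interpolate between these extremes.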
Cognitive Computational Neuroscience 2023
CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.
Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing
The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.
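The phase singularity anchoring such a rotational wave can be located, in a simplified setting, by computing the winding number of the phase around each grid plaquette: summing wrapped phase differences around a closed loop yields ±2π exactly when the loop encloses a singularity. The synthetic spiral below is purely illustrative and is not the authors' fMRI pipeline.

```python
import math

def wrap(d):
    """Wrap a phase difference into (-pi, pi]."""
    return (d + math.pi) % (2 * math.pi) - math.pi

def find_singularities(phase):
    """Flag plaquettes whose summed wrapped phase differences around the
    loop equal +/- 2*pi (winding number +/- 1)."""
    rows, cols = len(phase), len(phase[0])
    hits = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            loop = [phase[i][j], phase[i + 1][j],
                    phase[i + 1][j + 1], phase[i][j + 1], phase[i][j]]
            total = sum(wrap(b - a) for a, b in zip(loop, loop[1:]))
            if abs(total) > math.pi:  # winding number is +/- 1
                hits.append((i, j))
    return hits

# Synthetic spiral: phase rotates once around the centre (4.5, 4.5).
N = 10
field = [[math.atan2(y - 4.5, x - 4.5) for y in range(N)] for x in range(N)]
```

Running the detector on this field flags exactly the one plaquette that contains the rotation centre; tracking such flagged plaquettes over time is one simple way to follow a spiral's drift across the cortex.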
The Insights and Outcomes of the Wellcome-funded Waiting Times Project
Waiting is one of healthcare’s core experiences. It is there in the time it takes to access services; through the days, weeks, months or years needed for diagnoses; in the time that treatment takes; and in the elongated time-frames of recovery, relapse, remission and dying. Funded by the Wellcome Trust, our project opens up what it means to wait in and for healthcare by examining lived experiences, representations and histories of delayed and impeded time. In an era in which time is lived at increasingly different and complex tempos, Waiting Times looks to understand both the difficulties and vital significance of waiting for practices of care, offering a fundamental re-conceptualisation of the relation between time and care in contemporary thinking about health, illness, and wellbeing.
Computational models of spinal locomotor circuitry
To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
NOTE: DUE TO A CYBER ATTACK OUR UNIVERSITY WEB SYSTEM IS SHUT DOWN - TALK WILL BE RESCHEDULED
The size and structure of the dendritic arbor play important roles in determining how synaptic inputs of neurons are converted to action potential output and how neurons are integrated in the surrounding neuronal network. Accordingly, neurons with aberrant morphology have been associated with neurological disorders. Dysmorphic, enlarged neurons are, for example, a hallmark of focal epileptogenic lesions like focal cortical dysplasia (FCDIIb) and gangliogliomas (GG). However, the regulatory mechanisms governing the development of dendrites are insufficiently understood. The evolutionarily conserved Ste20/Hippo kinase pathway has been proposed to play an important role in regulating the formation and maintenance of dendritic architecture. A key element of this pathway, Ste20-like kinase (SLK), regulates cytoskeletal dynamics in non-neuronal cells and is strongly expressed throughout neuronal development. Nevertheless, its function in neurons is unknown. We found that during development of mouse cortical neurons, SLK has a surprisingly specific role in the proper elaboration of higher-order (≥ 3rd order) dendrites, both in cultured neurons and in living mice. Moreover, SLK is required to maintain excitation-inhibition balance. Specifically, SLK knockdown causes a selective loss of inhibitory synapses and functional inhibition after postnatal day 15, while excitatory neurotransmission is unaffected. This mechanism may be relevant for human disease, as dysmorphic neurons within human cortical malformations exhibit significant loss of SLK expression. To uncover the signaling cascades underlying the action of SLK, we combined phosphoproteomics, protein interaction screens and single-cell RNA-seq. Overall, our data identify SLK as a key regulator both of dendritic complexity during development and of inhibitory synapse maintenance.
Diverse applications of artificial intelligence and mathematical approaches in ophthalmology
Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.
Richly structured reward predictions in dopaminergic learning circuits
Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes, using these to guide behavior in even quite simple reward learning tasks.
Microbial modulation of zebrafish behavior and brain development
There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.
The embodied brain
Understanding the brain is not only intrinsically fascinating but also highly relevant to our well-being, since the brain exerts a power over the body that makes it capable of both provoking illness and facilitating healing. Bearing this dark force in mind, the brain sciences have undergone, and will continue to undergo, an important revolution, redefining their boundaries beyond the cranial cavity. During this presentation, we will discuss the communication between the brain and other systems that shapes how we perceive the external world and how we think. We are starting to unravel how our organs talk to the brain and how the brain talks back. That two-way communication encompasses a complex, body-wide system of nerves, hormones and other signals that will be discussed. This presentation aims to challenge a long history of thinking of bodily regulation as separate from "higher" mental processes. Four centuries ago, René Descartes famously conceptualized the mind as being separate from the body; it is now time to embody our mind.
Decoding the hippocampal oscillatory complexity to predict behavior
Epigenetic rewiring in Schinzel-Giedion syndrome
During life, a variety of specialized cells arise to ensure the correct and timely function of tissues and organs. Regulation of chromatin in defining specialized genomic regions (e.g. enhancers) plays a key role in developmental transitions from progenitors into cell lineages. These enhancers, properly positioned topologically in 3D space, ultimately guide transcriptional programs. It is becoming clear that several pathologies converge on differential enhancer usage with respect to physiological situations. However, why some regulatory regions are physiologically preferred, while others can emerge in certain conditions, including alternative fate decisions or diseases, remains obscure. Schinzel-Giedion syndrome (SGS) is a rare disease with symptoms including severe developmental delay, congenital malformations, progressive brain atrophy, intractable seizures, and infantile death. SGS is caused by mutations in the SETBP1 gene that result in accumulation of the SETBP1 protein, leading in turn to downstream accumulation of SET. The oncoprotein SET has been found as part of the histone chaperone complex INHAT, which blocks the activity of histone acetyltransferases, suggesting that SGS may (i) represent a natural model of alternative chromatin regulation and (ii) offer the chance to study downstream (mal)adaptive mechanisms. I will present our work on the characterization of SGS in appropriate experimental models, including iPSC-derived cultures and mouse models.
Computational models and experimental methods for the human cornea
The eye is a multi-component biological system in which mechanics, optics, transport phenomena and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye’s response to external actions is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of forefront computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties relate to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of corneal tissues, especially under pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.
Beyond Volition
Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions, and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich scientific understanding of volition. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. 
Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives volition a key role in mediating between understanding and planning on the one hand, and instrumental action on the other. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for the organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition relates to individual autonomy and responsibility.
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Self-perception: mechanosensation and beyond
Brain-organ communications play a crucial role in maintaining the body's physiological and psychological homeostasis and are controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are deeply buried inside the body and not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned in conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors - cell biology of neurons - functionality of neural circuits - animal behaviors" to explore the molecular and neural mechanisms of self-perception. In the lab, we mainly focus on three questions: 1) the molecular and cellular basis of proprioception and interoception; 2) the circuit mechanisms of sensory coding and the integration of internal and external information; and 3) the function of interoception in regulating behavioral homeostasis.
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit that structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior: analogical reasoning and generalization. I will introduce a new memory – a concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
The strongly recurrent regime of cortical networks
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
Explaining an asymmetry in similarity and difference judgments
Explicit similarity judgments tend to emphasize relational information more than do difference judgments. In this talk, I propose and test the hypothesis that this asymmetry arises because human reasoners represent the relation different as the negation of the relation same (i.e., as not-same). This proposal implies that processing difference is more cognitively demanding than processing similarity. Both for verbal comparisons between word pairs, and for visual comparisons between sets of geometric shapes, participants completed a triad task in which they selected which of two options was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard, either by virtue of featural similarity or by virtue of relational similarity. On ambiguous trials, one option was more featurally similar (but less relationally similar) to the standard, whereas the other was more relationally similar (but less featurally similar). Given the higher cognitive complexity of assessing relational similarity, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty accurately detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. These results provide convergent evidence for a representational asymmetry between the relations same and different.
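A minimal sketch of the kind of weighting model described above, under illustrative assumptions (the weights and match values are invented for the example, not the fitted parameters from the talk):

```python
# Toy comparison model in which relational information is weighted more
# heavily for similarity judgments than for difference judgments.
# All weights and match scores are illustrative assumptions.

def comparison_score(feat_match, rel_match, judgment):
    w_rel = 0.7 if judgment == "similar" else 0.4  # the key asymmetry
    similarity = (1.0 - w_rel) * feat_match + w_rel * rel_match
    return similarity if judgment == "similar" else 1.0 - similarity

def choose(options, judgment):
    """Pick the option whose score best fits the requested judgment."""
    return max(options, key=lambda o: comparison_score(o[1], o[2], judgment))[0]

# Ambiguous triad: option A is more featurally similar to the standard,
# option B more relationally similar: (label, featural match, relational match)
options = [("A", 0.9, 0.1), ("B", 0.1, 0.9)]
more_similar = choose(options, "similar")
more_different = choose(options, "different")
```

Under these values the model picks B as "more similar" (its relational match dominates the similarity judgment) and also B as "more different" (its featural mismatch dominates the difference judgment), illustrating how the same differential weighting produces the observed asymmetry on ambiguous trials.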
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
Autopoiesis and Enaction in the Game of Life
Enaction plays a central role in the broader fabric of so-called 4E (embodied, embedded, extended, enactive) cognition. Although the origin of the enactive approach is widely dated to the 1991 publication of the book "The Embodied Mind" by Varela, Thompson and Rosch, many of the central ideas trace to much earlier work. Over 40 years ago, the Chilean biologists Humberto Maturana and Francisco Varela put forward the notion of autopoiesis as a way to understand living systems and the phenomena that they generate, including cognition. Varela and others subsequently extended this framework to an enactive approach that places biological autonomy at the foundation of situated and embodied behavior and cognition. I will describe an attempt to place Maturana and Varela's original ideas on a firmer foundation by studying them within the context of a toy model universe, John Conway's Game of Life (GoL) cellular automaton. This work has both pedagogical and theoretical goals. Simple concrete models provide an excellent vehicle for introducing some of the core concepts of autopoiesis and enaction and explaining how these concepts fit together into a broader whole. In addition, a careful analysis of such toy models can hone our intuitions about these concepts, probe their strengths and weaknesses, and move the entire enterprise in the direction of a more mathematically rigorous theory. In particular, I will identify the primitive processes that can occur in GoL, show how these can be linked together into mutually-supporting networks that underlie persistent bounded entities, map the responses of such entities to environmental perturbations, and investigate the paths of mutual perturbation that these entities and their environments can undergo.
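For readers unfamiliar with GoL, a minimal sketch of its update rule and two classic "persistent bounded entities" of the kind the talk analyzes (a still life and an oscillator); this is a generic illustration, not the speaker's analysis code:

```python
import numpy as np

def gol_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # A dead cell is born with exactly 3 live neighbours;
    # a live cell survives with 2 or 3 live neighbours.
    born = (grid == 0) & (neighbours == 3)
    survives = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survives).astype(np.uint8)

# "Block": the simplest persistent bounded entity (a still life)
block = np.zeros((6, 6), dtype=np.uint8)
block[2:4, 2:4] = 1

# "Blinker": a period-2 oscillator that returns to itself every two steps
blinker = np.zeros((5, 5), dtype=np.uint8)
blinker[2, 1:4] = 1
```

Stepping `block` leaves it unchanged, while `blinker` alternates between horizontal and vertical forms, the simplest examples of the mutually-supporting process networks the talk investigates.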
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
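A minimal sketch of path integration and of vector memories in a common coordinate system, under simplified assumptions (perfect odometry on a 2-D plane; not a model of any specific insect circuit):

```python
import numpy as np

def integrate_path(steps):
    """Sum (heading, distance) steps into a nest-relative position estimate."""
    pos = np.zeros(2)
    for heading, distance in steps:
        pos += distance * np.array([np.cos(heading), np.sin(heading)])
    return pos

# Outbound foraging trip: 3 m east, then 4 m north
outbound = [(0.0, 3.0), (np.pi / 2, 4.0)]
food = integrate_path(outbound)

home_vector = -food  # points straight back to the nest
home_distance = np.linalg.norm(home_vector)  # direct, not the travelled 7 m

# Because vector memories share one coordinate system, a novel shortcut
# between two remembered sites is just their difference; no topological
# knowledge of how the sites relate is needed.
other_food = integrate_path([(np.pi, 2.0)])  # 2 m west of the nest
shortcut = other_food - food
```

The 3-4-5 geometry makes the point concrete: after a 7 m outbound path, the home vector has length 5 m, and a direct route between the two food sites falls out of simple vector subtraction.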
Learning to see stuff
Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
PIEZO2 in somatosensory neurons coordinates gastrointestinal transit
The transit of food through the gastrointestinal tract is critical for nutrient absorption and survival, and the gastrointestinal tract has the ability to initiate motility reflexes triggered by luminal distension. This complex function depends on the crosstalk between extrinsic and intrinsic neuronal innervation within the intestine, as well as local specialized enteroendocrine cells. However, the molecular mechanisms and the subset of sensory neurons underlying the initiation and regulation of intestinal motility remain largely unknown. Here, we show that humans lacking PIEZO2 exhibit impaired bowel sensation and motility. Piezo2 in mouse dorsal root but not nodose ganglia is required to sense gut content, and this activity slows down food transit rates in the stomach, small intestine, and colon. Indeed, Piezo2 is directly required to detect colon distension in vivo. Our study unveils the mechanosensory mechanisms that regulate the transit of luminal contents throughout the gut, a critical process that ensures proper digestion, nutrient absorption, and waste removal. These findings set the foundation for future work to identify the highly regulated interactions between sensory neurons, enteric neurons and non-neuronal cells that control gastrointestinal motility.
Causal role of human frontopolar cortex in information integration during complex decision making
Bernstein Conference 2024
Complex spatial representations and computations emerge in a memory-augmented network that learns to navigate
Bernstein Conference 2024
Critical organisation for complex temporal tasks in neural networks
Bernstein Conference 2024
Multi-scale single-cycle analysis of cortex-wide wave dynamics reveals complex spatio-temporal structure
Bernstein Conference 2024
Origin and function of gamma oscillatory complexity in hippocampal networks
Bernstein Conference 2024
A robust machine learning pipeline for the analysis of complex nightingale songs
Bernstein Conference 2024
Unsupervised clustering of burst shapes reveals the increasing complexity of developing networks in vitro
Bernstein Conference 2024
Accurate Engagement of the Drosophila Central-Complex Compass During Head-Fixed Path-Constrained Navigation
COSYNE 2022
Chromatic contrast and angle of polarization signals are integrated in the Drosophila central complex
COSYNE 2022
Environmental complexity modulates the arbitration between deliberative and habitual decision-making
COSYNE 2022
Mice can do complex visual tasks
COSYNE 2022
Predicting behavior from complex hippocampal oscillatory codes
COSYNE 2022
A new tool for automated annotation of complex birdsong reveals dynamics of canary syntax rules
COSYNE 2022
Understanding rat behavior in a complex task via non-deterministic policies
COSYNE 2022
Complex computation from developmental priors
COSYNE 2023
A Large Dataset of Macaque V1 Responses to Natural Images Revealed Complexity in V1 Neural Codes
COSYNE 2023
A mechanistic model for the formation of globally consistent maps of space in complex environments
COSYNE 2023
Thalamic maintenance of a complex sequential learned behavior: birdsong
COSYNE 2023
Beyond linear summation: Three-Body RNN for modeling complex neural and biological systems
COSYNE 2025
Bounds on the computational complexity of neurons due to dendritic morphology
COSYNE 2025
Complex Environments Drive Adaptive Hunting Strategies in Mice
COSYNE 2025
Coordinating control and planning for navigation on simplicial complex attractors
COSYNE 2025
A factorization model of V1 complex cells is selectively invariant
COSYNE 2025
ForageWorld: RL agents in complex foraging arenas develop internal maps for navigation and planning
COSYNE 2025
Probing Motion-Form Interactions in the Macaque Inferior Temporal Cortex and Artificial Neural Networks for Complex Scene Understanding
COSYNE 2025
Asymmetric metabolism controls the developing axon complexity in post-mitotic neurons
FENS Forum 2024
CDKL5: A novel regulator of the post-synaptic complex at the inhibitory synapse
FENS Forum 2024
Changes in mitochondrial respiration and mitochondrial dynamics induced by mitochondrial complex I inhibition in primary cortical neurons: Association with schizophrenia-like phenotype
FENS Forum 2024
Complex mechanisms responsible for the pressor response of angiotensin 1-7 injected into the rat paraventricular nucleus of the hypothalamus
FENS Forum 2024
Complex sublamination of cortical marginal zone in human and monkey at midgestation
FENS Forum 2024
Directionally tuned signals in mouse subicular complex and visual cortex during passive rotation using high-density probes
FENS Forum 2024
Diverse neuronal responses to visual precision in cat cortical area 21a: Unraveling the complexity of orientation processing
FENS Forum 2024
DivfreqBERT: Encoding distinct frequency ranges of brain dynamics based on the complexity of the brain
FENS Forum 2024
Dynamical complexity in engineered biological neuronal networks with directional and modular connections
FENS Forum 2024
Early life stress and living in a complex environment: Effects on social hierarchy and stress coping in mice
FENS Forum 2024
The effect of stimulus modality and stimulus complexity on associative equivalence learning in healthy humans
FENS Forum 2024