Naturalistic
Kevin Bolding
We are recruiting lab personnel. If systems neuroscience at the intersection of olfaction and memory excites you, now is an excellent time to get in touch. Our goal is to discover fundamental rules and mechanisms that govern information storage and retrieval in neural systems. Our primary focus will be establishing the changes in neural circuit and population dynamics that correspond to odor recognition memory. To bring our understanding of this process to a new level of rigor, we will apply quantitative statistical approaches to relate behavioral signatures of odor recognition to activity and plasticity in olfactory circuits. We will use in vivo electrophysiology and calcium imaging to capture the activity of large neural populations during olfactory experience, and we will apply cell-type-specific perturbations of activity and plasticity to tease apart how specific circuit connections contribute.
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will discuss recent work at DeepMind on tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models, in the form of Python programs, that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
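As a schematic of the search loop described above (and emphatically not the CogFunSearch implementation itself), the sketch below evolves candidate Python programs scored by how well they predict simulated bandit choices; the function llm_propose_variant is a placeholder that merely jitters numeric constants where a real system would query a language model.

```python
import re
import numpy as np

rng = np.random.default_rng(0)

# Toy choice data from a 2-armed bandit where arm 0 pays off 80% of the time.
choices = rng.integers(0, 2, size=500)
rewards = (rng.random(500) < np.where(choices == 0, 0.8, 0.2)).astype(float)

SEED = '''
def predict_probs(choices, rewards, alpha=0.1, beta=3.0):
    import numpy as np
    q, probs = np.zeros(2), []
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        probs.append(p[c])
        q[c] += alpha * (r - q[c])      # delta-rule value update
    return np.array(probs)
'''

def llm_propose_variant(source: str) -> str:
    """Stand-in for an LLM proposal: jitter the numeric constants in the source."""
    def jitter(m):
        return f"{float(m.group(0)) * rng.uniform(0.7, 1.3):.3f}"
    return re.sub(r"\d+\.\d+", jitter, source)

def score(source: str) -> float:
    """Fitness = log-likelihood of the observed choices under the candidate program."""
    ns = {}
    exec(source, ns)
    p = np.clip(ns["predict_probs"](choices, rewards), 1e-6, 1.0)
    return float(np.log(p).sum())

# Minimal evolutionary loop: keep the best programs, ask the "LLM" for variants.
population = [SEED]
for generation in range(20):
    population += [llm_propose_variant(src) for src in population]
    population = sorted(population, key=score, reverse=True)[:4]

print("best log-likelihood:", round(score(population[0]), 1))
```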
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion, beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
Neural markers of lapses in attention during sustained ‘real-world’ task performance
Lapses in attention are ubiquitous and, unfortunately, the cause of many tragic accidents. One potential solution may be to develop assistance systems which can use objective, physiological signals to monitor attention levels and predict a lapse in attention before it occurs. As it stands, it is unclear which physiological signals are the most reliable markers of inattention, and even less is known about how reliably they will work in a more naturalistic setting. My project aims to address these questions across two experiments: a lab-based experiment and a more ‘real-world’ experiment. In this talk I will present the findings from my lab experiment, in which we combined EEG and pupillometry to detect markers of inattention during two computerised sustained attention tasks. I will then present the methods for my second, more ‘naturalistic’ experiment in which we use the same methods (EEG and pupillometry) to examine whether these markers can still be extracted from noisier data.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Distinctive features of experiential time: Duration, speed and event density
William James’s “time in passing” and “stream of thought” may be two sides of the same coin, emerging from the brain segmenting the continuous flow of information into discrete events. Starting from that idea, we investigated how the content of a realistic scene impacts two distinct temporal experiences: felt duration and the speed of the passage of time. I will present the results of an online study in which we used a well-established experimental paradigm, the temporal bisection task, which we extended to passage-of-time judgments. 164 participants classified seconds-long videos of naturalistic scenes as short or long (duration), or slow or fast (passage of time). Videos contained a varying number and type of events. We found that a large number of events lengthened subjective duration and accelerated the felt passage of time. Surprisingly, participants were also faster at estimating their felt passage of time than their felt duration. The perception of duration depended heavily on objective duration, whereas the felt passage of time scaled with the rate of change. Altogether, our results support a possible dissociation of the mechanisms underlying the two temporal experiences.
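For readers unfamiliar with the temporal bisection paradigm, the analysis typically reduces to fitting a psychometric function to the proportion of 'long' responses and reading off the bisection point; the sketch below does this on made-up numbers and is only an illustration of the method, not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: video durations (s) and the proportion of "long" responses.
durations = np.array([8, 10, 12, 14, 16, 18, 20], dtype=float)
p_long    = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def psychometric(d, bisection_point, slope):
    """Logistic psychometric function for 'long' classifications."""
    return 1.0 / (1.0 + np.exp(-slope * (d - bisection_point)))

(bp, slope), _ = curve_fit(psychometric, durations, p_long, p0=[14.0, 0.5])
print(f"bisection point = {bp:.1f} s, slope = {slope:.2f}")
# Fitting the same function separately to trials with few vs. many events
# would quantify how event density shifts subjective duration.
```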
From controlled environments to complex realities: Exploring the interplay between perceived minds and attention
In our daily lives, we perceive things as possessing a mind (e.g., people) or lacking one (e.g., shoes). Intriguingly, how much mind we attribute to people can vary, with real people perceived to have more mind than depictions of individuals, such as photographs. Drawing from a range of research methodologies, including naturalistic observation, mobile eye tracking, and surreptitious behavior monitoring, I discuss how various shades of mind influence human attention and behaviour. The findings suggest the novel concept that overt attention (where one looks) in real-life is fundamentally supported by covert attention (attending to someone out of the corner of one's eye).
Naturalistic violations of expectations reveal hierarchical surprise responses in the human brain
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts without any prior visual experience of those concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
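The prototype decision rule itself takes only a few lines to state; the following sketch applies it to synthetic feature vectors standing in for neural or DNN representations, purely as an illustration of the rule rather than the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dim, n_train, n_test = 200, 5, 50          # few-shot: 5 training examples per concept

# Synthetic stand-in for firing-rate vectors drawn from two concept manifolds.
center_a, center_b = rng.normal(size=n_dim), rng.normal(size=n_dim)
train_a = center_a + 0.5 * rng.normal(size=(n_train, n_dim))
train_b = center_b + 0.5 * rng.normal(size=(n_train, n_dim))
test_a  = center_a + 0.5 * rng.normal(size=(n_test, n_dim))
test_b  = center_b + 0.5 * rng.normal(size=(n_test, n_dim))

# Prototype rule: average the few training examples, classify by nearest prototype.
proto_a, proto_b = train_a.mean(axis=0), train_b.mean(axis=0)

def classify(x):
    return 0 if np.linalg.norm(x - proto_a) < np.linalg.norm(x - proto_b) else 1

preds = [classify(x) for x in np.vstack([test_a, test_b])]
labels = [0] * n_test + [1] * n_test
print("few-shot accuracy:", np.mean(np.array(preds) == labels))
```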
Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing
Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.
Language Representations in the Human Brain: A naturalistic approach
Natural language is strongly context-dependent and can be perceived through different sensory modalities. For example, humans can easily comprehend the meaning of complex narratives presented as auditory speech, written text, or visual images. To understand how complex language-related information is represented in the human brain, we need to map the linguistic and non-linguistic information perceived through different modalities across the cerebral cortex. To do so, I suggest following a naturalistic approach: observing the human brain performing tasks in its natural setting, designing quantitative models that transform real-world stimuli into specific hypothesis-related features, and building predictive models that relate these features to brain responses. In my talk, I will present models of brain responses collected using functional magnetic resonance imaging while human participants listened to or read natural narrative stories. Using natural text and vector representations derived from natural language processing tools, I will show how we can study language processing in the human brain across modalities, at different levels of temporal granularity, and across different languages.
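The modeling strategy sketched above (stimulus features mapped linearly to voxel responses) is commonly implemented as a regularized regression; the snippet below uses synthetic data and scikit-learn's RidgeCV as a generic stand-in for such an encoding model, not the speaker's pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 1000, 300, 50

# Hypothetical stimulus features (e.g., word embeddings aligned to fMRI TRs)
# and voxel responses generated from a random linear mapping plus noise.
features = rng.normal(size=(n_trs, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
responses = features @ true_weights + 2.0 * rng.normal(size=(n_trs, n_voxels))

train, test = slice(0, 800), slice(800, None)
model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(features[train], responses[train])

# Evaluate with the usual voxelwise prediction correlation on held-out data.
pred = model.predict(features[test])
r = [np.corrcoef(pred[:, v], responses[test][:, v])[0, 1] for v in range(n_voxels)]
print("median voxelwise correlation:", round(float(np.median(r)), 2))
```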
Inter-individual variability in reward seeking and decision making: role of social life and consequence for vulnerability to nicotine
Inter-individual variability refers to differences in the expression of behaviors between members of a population. For instance, some individuals take greater risks, are more attracted to immediate gains or are more susceptible to drugs of abuse than others. To probe the neural bases of inter-individual variability we study reward seeking and decision-making in mice, and dissect the specific role of dopamine in the modulation of these behaviors. Using a spatial version of the multi-armed bandit task, in which mice are faced with consecutive binary choices, we could link modifications of midbrain dopamine cell dynamics with modulation of exploratory behaviors, a major component of individual characteristics in mice. By analyzing mouse behaviors in semi-naturalistic environments, we then explored the role of social relationships in the shaping of dopamine activity and associated behaviors. I will present recent data from the laboratory suggesting that changes in the activity of dopaminergic networks link social influences with variations in the expression of non-social behaviors: by acting on the dopamine system, the social context may indeed affect the capacity of individuals to make decisions, as well as their vulnerability to drugs of abuse, in particular nicotine.
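As background on how exploration is typically quantified in such bandit tasks, a value-learning agent with a softmax choice rule makes the trade-off explicit: the inverse temperature controls how exploratory the choices are. The simulation below is purely illustrative and not the laboratory's analysis code.

```python
import numpy as np

def simulate_bandit_agent(p_reward, n_trials=500, alpha=0.2, beta=2.0, seed=0):
    """Q-learning agent with a softmax policy on a multi-armed bandit.

    A low inverse temperature (beta) yields exploratory choices,
    a high beta yields exploitative ones.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(len(p_reward))
    choices, rewards = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax over learned values
        c = rng.choice(len(p_reward), p=p)
        r = float(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])                      # prediction-error update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

for beta in (0.5, 5.0):                                 # exploratory vs. exploitative
    c, r = simulate_bandit_agent([0.2, 0.5, 0.8], beta=beta)
    print(f"beta={beta}: best-arm rate={np.mean(c == 2):.2f}, reward rate={r.mean():.2f}")
```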
Neural circuits for novel choices and for choice speed and accuracy changes in macaques
While most experimental tasks aim at isolating simple cognitive processes to study their neural bases, naturalistic behaviour is often complex and multidimensional. I will present two studies revealing previously uncharacterised neural circuits for decision-making in macaques. This was possible thanks to innovative experimental tasks eliciting sophisticated behaviour, bridging the human and non-human primate research traditions. Firstly, I will describe a specialised medial frontal circuit for novel choice in macaques. Traditionally, monkeys receive extensive training before neural data can be acquired, while a hallmark of human cognition is the ability to act in novel situations. I will show how this medial frontal circuit can combine the values of multiple attributes for each available novel item on-the-fly to enable efficient novel choices. This integration process is associated with a hexagonal symmetry pattern in the BOLD response, consistent with a grid-like representation of the space of all available options. We prove the causal role played by this circuit by showing that focussed transcranial ultrasound neuromodulation impairs optimal choice based on attribute integration and forces the subjects to default to a simpler heuristic decision strategy. Secondly, I will present an ongoing project addressing the neural mechanisms driving behaviour shifts during an evidence accumulation task that requires subjects to trade speed for accuracy. While perceptual decision-making in general has been thoroughly studied, both cognitively and neurally, the reasons why speed and/or accuracy are adjusted, and the associated neural mechanisms, have received little attention. We describe two orthogonal dimensions in which behaviour can vary (traditional speed-accuracy trade-off and efficiency) and we uncover independent neural circuits concerned with changes in strategy and fluctuations in the engagement level. The former involves the frontopolar cortex, while the latter is associated with the insula and a network of subcortical structures including the habenula.
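The hexagonal-symmetry analysis mentioned above usually amounts to regressing the BOLD signal on sixfold-periodic functions of the trajectory angle through attribute space; the sketch below shows that regressor logic on synthetic angles and is a generic illustration, not the study's analysis (in practice the grid orientation is estimated and tested on independent data splits).

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical trajectory angles through a 2D attribute space, one per trial.
theta = rng.uniform(0, 2 * np.pi, size=n_trials)

# Simulate a grid-like BOLD response with sixfold symmetry and grid orientation phi.
phi = np.deg2rad(15.0)
bold = 0.8 * np.cos(6 * (theta - phi)) + rng.normal(scale=1.0, size=n_trials)

# Two-step estimate: fit sin/cos regressors, recover the orientation,
# then measure the sixfold modulation aligned to that orientation.
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta)])
beta_c, beta_s = np.linalg.lstsq(X, bold, rcond=None)[0]
phi_hat = np.arctan2(beta_s, beta_c) / 6
amplitude = 2 * float(np.mean(bold * np.cos(6 * (theta - phi_hat))))

print(f"recovered orientation = {np.rad2deg(phi_hat):.1f} deg, amplitude = {amplitude:.2f}")
```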
Neural Codes for Natural Behaviors in Flying Bats
This talk will focus on the importance of using natural behaviors in neuroscience research – the “Natural Neuroscience” approach. I will illustrate this point by describing studies of neural codes for spatial behaviors and social behaviors, in flying bats – using wireless neurophysiology methods that we developed – and will highlight new neuronal representations that we discovered in animals navigating through 3D spaces, or in very large-scale environments, or engaged in social interactions. In particular, I will discuss: (1) A multi-scale neural code for very large environments, which we discovered in bats flying in a 200-meter long tunnel. This new type of neural code is fundamentally different from spatial codes reported in small environments – and we show theoretically that it is superior for representing very large spaces. (2) Rapid modulation of position × distance coding in the hippocampus during collision-avoidance behavior between two flying bats. This result provides a dramatic illustration of the extreme dynamism of the neural code. (3) Local-but-not-global order in 3D grid cells – a surprising experimental finding, which can be explained by a simple physics-inspired model, which successfully describes both 3D and 2D grids. These results strongly argue against many of the classical, geometrically-based models of grid cells. (4) I will also briefly describe new results on the social representation of other individuals in the hippocampus, in a highly social multi-animal setting. The lecture will propose that neuroscience experiments – in bats, rodents, monkeys or humans – should be conducted under ever more naturalistic conditions.
NMC4 Short Talk: Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance
Inhibitory neurons take on many forms and functions, and how this diversity contributes to memory function is not completely understood. Previous formal studies indicate that inhibition differentiated by local and global connectivity in associative memory networks functions to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological detail: they do not distinguish between neuron types (excitatory and inhibitory), and they rely on unrealistic connection schemas and non-sparse assemblies. In this study, we present a rate-based cortical model in which neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and in which memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported that inhibitory neurons and their sub-types respond uniquely to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests that such joint assemblies, together with a distribution and rebalancing of overall inhibition between two inhibitory sub-populations – one connected to excitatory assemblies locally and the other connected globally – can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance: in the context of choice or preference among related memories, dominant local inhibition permits and maintains a broader range of memory items, whereas dominant global inhibition consolidates and strengthens a smaller range. This model therefore highlights a biologically plausible and behaviourally useful function of inhibitory diversity in memory.
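To make the local-versus-global architecture concrete, the toy rate model below contains two excitatory assemblies, one local inhibitory unit per assembly, and a single global inhibitory unit; it is a drastic simplification meant only to illustrate how shifting weight from local to global inhibition narrows the set of assemblies that stay active, not a reimplementation of the study's model.

```python
import numpy as np

def simulate(w_local, w_global, t_steps=2000, dt=0.001, tau=0.02):
    """Two excitatory assemblies with local and global inhibition (toy rate model)."""
    relu = lambda x: np.maximum(x, 0.0)
    e = np.array([1.0, 0.6])        # assembly rates; assembly 0 gets the stronger cue
    i_loc = np.zeros(2)             # one local inhibitory unit per assembly
    i_glob = 0.0                    # a single global inhibitory unit
    drive = np.array([2.0, 1.6])    # external cue input to the assemblies
    for _ in range(t_steps):
        de = -e + relu(1.2 * e - w_local * i_loc - w_global * i_glob + drive)
        dil = -i_loc + relu(1.5 * e)              # local inhibition tracks its own assembly
        dig = -i_glob + relu(1.5 * e.sum())       # global inhibition tracks total activity
        e += dt / tau * de
        i_loc += dt / tau * dil
        i_glob += dt / tau * dig
    return e

# Local-dominant inhibition keeps both assemblies active (broad retrieval);
# global-dominant inhibition silences the weaker one (narrow, winner-take-all retrieval).
print("local-dominant  :", np.round(simulate(w_local=1.0, w_global=0.2), 2))
print("global-dominant :", np.round(simulate(w_local=0.2, w_global=1.0), 2))
```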
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal apriorism, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop such a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual areas and achieve novel predictive precision, in some cases significantly outperforming state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine the trained networks through structural analysis (feature visualizations) as well as functional analysis (feature verbalizations) by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite receiving no category-level supervision (the models are optimized from scratch solely to predict brain responses), the units in the trained networks act as detectors for semantic concepts like 'faces' or 'words', providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived of as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
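Response optimization, in the sense used above, simply means fitting a network end-to-end to predict measured responses rather than to perform a pretext task; a minimal, hypothetical PyTorch sketch of that objective (with random tensors standing in for Natural Scenes Dataset images and voxels) is shown below.

```python
import torch
from torch import nn

torch.manual_seed(0)
n_images, n_voxels = 512, 100

# Synthetic stand-ins for stimuli and fMRI responses (the real work uses NSD).
images = torch.randn(n_images, 3, 64, 64)
voxels = torch.randn(n_images, n_voxels)

# A small convolutional encoder trained from scratch, with no category labels:
# its only objective is to predict the voxel responses.
model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, n_voxels),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for i in range(0, n_images, 64):
        x, y = images[i:i + 64], voxels[i:i + 64]
        loss = nn.functional.mse_loss(model(x), y)   # response-prediction objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
# Feature visualization or probing the trained units on labelled image sets
# would then reveal any emergent category selectivity.
```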
A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice
Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
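As a far simpler point of comparison than the universal model described above, departures from Poisson-like variability can already be seen by simulating negative-binomial spike counts whose dispersion, not just whose mean, depends on head direction, and inspecting the Fano factor per direction bin; the snippet below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 12
head_dir = rng.uniform(0, 2 * np.pi, size=20000)

# Simulated head-direction cell: tuned mean rate AND tuned overdispersion.
mean_rate = 2.0 + 8.0 * np.exp(2.0 * (np.cos(head_dir - np.pi) - 1))
dispersion = 1.0 + 4.0 * (1 + np.cos(head_dir)) / 2          # varies with the covariate
# Negative binomial counts: variance = mean + mean**2 / r  (r = shape parameter).
r = mean_rate / np.maximum(dispersion - 1, 1e-6)
counts = rng.negative_binomial(n=r, p=r / (r + mean_rate))

# Bin by head direction and compare mean count with Fano factor (variance / mean).
bins = np.digitize(head_dir, np.linspace(0, 2 * np.pi, n_bins + 1)) - 1
for b in range(n_bins):
    c = counts[bins == b]
    print(f"bin {b:2d}: mean={c.mean():5.2f}  Fano={c.var() / c.mean():4.2f}")
# A Poisson model would force Fano close to 1 everywhere; the covariate-dependent
# Fano factor illustrates the kind of structured variability a flexible count model
# is designed to capture.
```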
Appearance-based impression formation
Despite the common advice “not to judge a book by its cover”, we form impressions of character within a second of seeing a stranger’s face. These impressions have widespread consequences for society and for the economy, making it vital that we have a clear theoretical understanding of which impressions are important and how they are formed. In my talk, I outline a data-driven approach to answering these questions, starting by building models of the key dimensions underlying impressions of naturalistic face images. Overall, my findings suggest deeper links between the fields of face perception and social stereotyping than have previously been recognised.
Toward Naturalistic Paradigms of Agency
Voluntary control of behavior requires the ability to dynamically integrate internal states and external evidence to achieve one's goals. However, neuroscientific studies of intentional action, and critical philosophical commentary on that research, have taken a rather narrow turn in recent years, focussing on the neural precursors of spontaneous simple actions as potential realizers of intentions. In this session, we show how the debate can benefit from incorporating other types of experimental approaches, focussing on agency in dynamic contexts.
Autopilot v0.4.0 - Distributing development of a distributed experimental framework
Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed not only to give researchers a tool for performing the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition, lowering barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. We are currently preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a repository of technical and contextual knowledge. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to create a communal knowledge system for sharing the contextual technical knowledge that is always excluded from methods sections but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open-source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it. I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
Neural circuits that support robust and flexible navigation in dynamic naturalistic environments
Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.
Neural mechanisms of active vision in the marmoset monkey
Human vision relies on rapid eye movements (saccades) 2-3 times every second to bring peripheral targets to central foveal vision for high resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have similar visual processing and eye movements as humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade to fixation cycle, with most information localized to brief post-saccadic transients. In a second study we examined if target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion for a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival. These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY
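The offline eye-position correction described above can be illustrated with a toy spike-triggered average in which each stimulus frame is shifted into retinal coordinates using the measured gaze before averaging; the snippet below uses simulated data and is a schematic of the idea, not the laboratory's mapping code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, grid = 5000, 20

# Hypothetical data: white-noise stimulus frames (screen coordinates),
# gaze position per frame, and spike counts from one neuron.
stimulus = rng.normal(size=(n_frames, grid, grid))
gaze = rng.integers(-3, 4, size=(n_frames, 2))          # eye position in stimulus pixels
rf_center = np.array([10, 10])                          # true retinotopic RF location
spikes = rng.poisson(
    np.exp(stimulus[np.arange(n_frames),
                    rf_center[0] + gaze[:, 0],
                    rf_center[1] + gaze[:, 1]])
)

# Gaze-corrected spike-triggered average: shift each frame into retinal coordinates
# (i.e., relative to where the eye was pointing) before averaging.
sta = np.zeros((grid, grid))
for f in range(n_frames):
    sta += spikes[f] * np.roll(stimulus[f], shift=(-gaze[f, 0], -gaze[f, 1]), axis=(0, 1))
sta /= spikes.sum()

print("recovered RF peak at", np.unravel_index(np.abs(sta).argmax(), sta.shape))
```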
The collective behavior of the clonal raider ant: computations, patterns, and naturalistic behavior
Colonies of ants and other eusocial insects are superorganisms that perform sophisticated cognitive-like functions at the level of the group. In my talk I will review our efforts to establish the clonal raider ant Ooceraea biroi as a laboratory model system for the systematic study of the principles underlying collective information processing in ant colonies. I will use results from two separate projects to demonstrate the potential of this model system. In the first, we analyze the foraging behavior of the species, known as group raiding: a swift offensive response of a colony to the detection of potential prey by a scout. Using automated behavioral tracking and detailed analysis, we show that this behavior is closely related to the army ant mass raid, an iconic collective behavior in which hundreds of thousands of ants spontaneously leave the nest to go hunting, and that the evolutionary transition between the two can be explained by a change in colony size alone. In the second project, we study the emergence of a collective sensory response threshold in a colony. The sensory threshold is a fundamental computational primitive observed across many biological systems. By carefully controlling the sensory environment and the social structure of the colonies, we were able to show that such a threshold also appears in a collective context, and that it emerges out of a balance between excitatory and inhibitory interactions between ants. Furthermore, using a mathematical model, we predict that these two interactions can be mapped onto known mechanisms of communication in ants. Finally, I will discuss the opportunities for understanding collective behavior that are being opened up by the development of methods for neuroimaging and neurocontrol of our ants.
Economic choice in naturalistic contexts
The emergence and modulation of time in neural circuits and behavior
Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior and cortical activity. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times can be accelerated or slowed down via gain modulation, flexibly induced by neuromodulation or perturbations; and how gain modulation may control response timing in the visual cortex. Finally, we will present a new biologically plausible way to generate a reservoir of multiple timescales in cortical circuits.
The emergence and modulation of time in neural circuits and behavior
Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.
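The gain-modulation idea can be illustrated with a single leaky rate unit: scaling the input by a gain factor changes how quickly the rate crosses a response threshold, one simple way to speed or slow reaction times. The toy simulation below is only a schematic of that mechanism, not the circuit model from the talk.

```python
import numpy as np

def time_to_threshold(gain, drive=1.0, threshold=0.6, tau=0.05, dt=0.001):
    """Leaky rate unit, tau dr/dt = -r + tanh(gain * drive); returns threshold-crossing time."""
    r, t = 0.0, 0.0
    while r < threshold and t < 2.0:
        r += dt / tau * (-r + np.tanh(gain * drive))
        t += dt
    return t

for gain in (0.8, 1.0, 1.5, 2.5):     # higher gain -> faster threshold crossing
    print(f"gain {gain:3.1f}: reaction time = {1000 * time_to_threshold(gain):.0f} ms")
```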
Effects of Corticothalamic Feedback on Geniculate Responses to Naturalistic and Artificial Visual Stimuli
Modulation of C. elegans behavior by gut microbes
We are interested in understanding how microbes impact the behavior of host animals. Animal nervous systems likely evolved in environments richly surrounded by microbes, yet the impact of bacteria on nervous system function has been relatively under-studied. A challenge has been to identify systems in which both host and microbe are amenable to genetic manipulation, and which enable high-throughput behavioral screening in response to defined and naturalistic conditions. To accomplish these goals, we use an animal host — the roundworm C. elegans, which feeds on bacteria — in combination with its natural gut microbiome to identify inter-organismal signals driving host-microbe interactions and decision-making. C. elegans has some of the most extensive molecular, neurobiological and genetic tools of any multicellular eukaryote, and, coupled with the ease of gnotobiotic culture in these worms, represents a highly attractive system in which to study microbial influence on host behavior. Using this system, we discovered that commensal bacterial metabolites directly modulate nervous system function of their host. Beneficial gut microbes of the genus Providencia produce the neuromodulator tyramine in the C. elegans intestine. Using a combination of behavioral analysis, neurogenetics, metabolomics and bacterial genetics we established that bacterially produced tyramine is converted to octopamine in C. elegans, which acts directly in sensory neurons to reduce odor aversion and increase sensory preference for Providencia. We think that this type of sensory modulation may increase association of C. elegans with these microbes, increasing availability of this nutrient-rich food source for the worm and its progeny, while facilitating dispersal of the bacteria.
Brain dynamics underlying memory for continuous natural events
The world confronts our senses with a continuous stream of rapidly changing information. Yet, we experience life as a series of episodes or events, and in memory these pieces seem to become even further organized. How do we recall and give structure to this complex information? Recent studies have begun to examine these questions using naturalistic stimuli and behavior: subjects view audiovisual movies and then freely recount aloud their memories of the events. We find brain activity patterns that are unique to individual episodes, and which reappear during verbal recollection; robust generalization of these patterns across people; and memory effects driven by the structure of links between events in a narrative. These findings construct a picture of how we comprehend and recall real-world events that unfold continuously across time.
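A common way to quantify the reappearance of event-specific patterns during recall is to correlate each event's average encoding pattern with each recall pattern and check that matching events correlate most strongly; the sketch below does this on synthetic data as a generic illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_voxels = 8, 300

# Synthetic event-specific patterns during movie viewing (encoding)
# and noisy reinstatements of the same patterns during verbal recall.
encoding = rng.normal(size=(n_events, n_voxels))
recall = encoding + 1.5 * rng.normal(size=(n_events, n_voxels))

# Encoding-recall similarity matrix: rows = encoded events, columns = recalled events.
z = lambda a: (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
similarity = z(encoding) @ z(recall).T / n_voxels

matched = np.diag(similarity).mean()
mismatched = similarity[~np.eye(n_events, dtype=bool)].mean()
print(f"matched events r={matched:.2f}, mismatched r={mismatched:.2f}")
```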
Schemas: events, spaces, semantics, and development
Understanding and remembering realistic experiences in our everyday lives requires activating many kinds of structured knowledge about the world, including spatial maps, temporal event scripts, and semantic relationships. My recent projects have explored the ways in which we build up this schematic knowledge (during a single experiment and across developmental timescales) and can strategically deploy it to construct event representations that we can store in memory or use to make predictions. I will describe my lab's ongoing work developing new experimental and analysis techniques for conducting functional MRI experiments using narratives, movies, poetry, virtual reality, and "memory experts" to study complex naturalistic schemas.
Toward a High-fidelity Artificial Retina for Vision Restoration
Electronic interfaces to the retina represent an exciting development in science, engineering, and medicine – an opportunity to exploit our knowledge of neural circuitry and function to restore or even enhance vision. However, although existing devices demonstrate proof of principle in treating incurable blindness, they produce limited visual function. Some of the reasons for this can be understood based on the precise and specific neural circuitry that mediates visual signaling in the retina. Consideration of this circuitry suggests that future devices may need to operate at single-cell, single-spike resolution in order to mediate naturalistic visual function. I will show large-scale multi-electrode recording and stimulation data from the primate retina indicating that, in some cases, such resolution is possible. I will also discuss cases in which it fails, and propose that we can improve artificial vision in such conditions by incorporating our knowledge of the visual system in bi-directional devices that adapt to the host neural circuitry. Finally, I will introduce the Stanford Artificial Retina Project, aimed at developing a retinal implant that more faithfully reproduces the neural code of the retina, and briefly discuss the implications for scientific investigation and for other neural interfaces of the future.
A control space for muscle state-dependent cortical influence during naturalistic motor behavior
COSYNE 2022
Perceptual and neural representations of naturalistic texture information in developing monkeys
COSYNE 2022
Cross-trial alignment reveals a low-dimensional cortical manifold of naturalistic speech production
COSYNE 2023
Human-like capacity limits in working memory models result from naturalistic sensory constraints
COSYNE 2023
Neural Manifolds Underlying Naturalistic Human Movements in Electrocorticography
COSYNE 2023
Statistical learning yields generalization and naturalistic behaviors in transitive inference
COSYNE 2023
Thoughtful faces: Using facial features to infer naturalistic cognitive processing across species
COSYNE 2023
Anatomically resolved oscillatory bursts orchestrate visual thalamocortical activity during naturalistic stimulus viewing
COSYNE 2025
Representations of naturalistic behavior drift over hours at the level of single neurons and population dynamics
COSYNE 2025
Sparse autoencoders for mechanistic insights on neural computation in naturalistic experiments
COSYNE 2025
How the auditory brainstem of bats detects regularity deviations in a naturalistic stimulation paradigm
FENS Forum 2024
Functional organization of the cognitive map for naturalistic reaching behavior in the motor cortex
FENS Forum 2024
Investigating embodied mind-wandering during a naturalistic anxiety-inducing film
FENS Forum 2024
Mesoscale dynamics of cell resolution cortical activity across brain areas in naturalistic goal-directed behavior
FENS Forum 2024
Mice adjust their eye position during viewing of naturalistic movies
FENS Forum 2024
Mouse saccades are cognitive state and task dependent in naturalistic and immersive decision-making tasks
FENS Forum 2024
Recording cortico-hypothalamic projection neuron activity in mice living in an automated naturalistic open-design maze
FENS Forum 2024
Reliability of reduced inter-subject functional connectivity during naturalistic movie-watching fMRI in autism — comparison of German and Finnish samples
FENS Forum 2024
Remarkably precise sound duration discrimination in rodents: Behavioral and neuronal insights from a naturalistic paradigm
FENS Forum 2024
Stability of social bonds over time: Measured in mice tested under semi-naturalistic conditions
FENS Forum 2024
Irregular optogenetic stimulation waveforms can induce naturalistic patterns of hippocampal spectral activity
Neuromatch 5