Observations
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can thus be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often must be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10 μm to 2 mm across a 325 square centimeter area for extended durations—ranging from hours to days—at high resolution. Using Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
Movements and engagement during decision-making
When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially “uninstructed movements” that are spontaneously expressed. These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely. We tested whether the magnitude or the timing of the movements was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or choice) and task-independent movements (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats. This hinted that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.
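As a simplified illustration of the engagement analysis, the sketch below omits the GLM stage and uses fixed, illustrative parameters (not those fit in the study): a two-state hidden Markov model with Bernoulli observations (correct/incorrect trials), whose forward-backward recursion yields the posterior probability of an "engaged" versus "disengaged" state on each trial.

```python
import numpy as np

def forward_backward(obs, pi, A, p_correct):
    """Posterior state probabilities for a K-state HMM with Bernoulli
    observations (obs[t] = 1 if the trial was correct).
    pi: initial state distribution, A: transition matrix,
    p_correct[k]: P(correct | state k)."""
    T, K = len(obs), len(pi)
    # per-trial emission likelihood for each state
    lik = np.where(obs[:, None] == 1, p_correct, 1 - p_correct)
    alpha, beta = np.zeros((T, K)), np.zeros((T, K))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = lik[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()            # normalize for stability
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# engaged state (0): 90% correct; disengaged state (1): 55% correct
rng = np.random.default_rng(0)
A = np.array([[0.95, 0.05], [0.05, 0.95]])    # "sticky" states
p_correct = np.array([0.9, 0.55])
states = np.r_[np.zeros(50, int), np.ones(50, int)]  # engaged, then disengaged
obs = rng.binomial(1, p_correct[states])
post = forward_backward(obs, np.array([0.5, 0.5]), A, p_correct)
```

The posterior over the disengaged state rises in the second half of the session, mirroring how latent-state models flag periods of disengagement from behavior alone.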
Neuroinflammation in Epilepsy: what have we learned from human brain tissue specimens?
Epileptogenesis is a gradual and dynamic process leading to difficult-to-treat seizures. Several cellular, molecular, and pathophysiologic mechanisms contribute to this process, including the activation of inflammatory processes. The use of human brain tissue represents a crucial strategy to advance our understanding of the underlying neuropathology and the molecular and cellular basis of epilepsy and related cognitive and behavioral comorbidities. The mounting evidence obtained during the past decade has emphasized the critical role of inflammation in the pathophysiological processes implicated in a large spectrum of genetic and acquired forms of focal epilepsies. Dissecting the cellular and molecular mediators of the pathological immune responses and their convergent and divergent mechanisms is a major prerequisite for delineating their role in the establishment of epileptogenic networks. The role of small regulatory molecules involved in the regulation of specific pro- and anti-inflammatory pathways and the crosstalk between neuroinflammation and oxidative stress will be addressed. The observations supporting the activation of both innate and adaptive immune responses in human focal epilepsy will be discussed and elaborated, highlighting specific inflammatory pathways as potential targets for antiepileptic, disease-modifying therapeutic strategies.
Enhancing Qualitative Coding with Large Language Models: Potential and Challenges
Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the ChatGPT 3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the COVID-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving, adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
Multimodal Blending
In this talk, I’ll consider how new ideas emerge from old ones via the process of conceptual blending. I’ll start by considering analogical reasoning in problem solving and the role conceptual blending plays in these problem-solving contexts. Then I’ll consider blending in multi-modal contexts, including timelines, memes (viz. image macros), and, if time allows, zoom meetings. I suggest mappings analogy researchers have traditionally considered superficial are often important for the development of novel abstractions. Likewise, the analogue portion of multimodal blends anchors their generative capacity. Overall, these observations underscore the extent to which meaning is a socially distributed process whose intermediate products are stored in cognitive artifacts such as text and digital images.
Hidden nature of seizures
How seizures emerge from the abnormal dynamics of neural networks within the epileptogenic tissue remains an enigma. Are seizures random events, or do detectable changes in brain dynamics precede them? Are mechanisms of seizure emergence identical at the onset and later stages of epilepsy? Is the risk of seizure occurrence stable, or does it change over time? A myriad of questions remains to be answered before we understand the core principles governing seizure genesis. The last decade has brought unprecedented insights into the complex nature of seizure emergence. It is now believed that seizure onset represents the product of the interactions between the process of a transition to seizure, long-term fluctuations in seizure susceptibility, epileptogenesis, and disease progression. During the lecture, we will review the latest observations about mechanisms of ictogenesis operating at multiple temporal scales. We will show how the latest observations contribute to the formation of a comprehensive theory of seizure genesis, and challenge the traditional perspectives on ictogenesis. Finally, we will discuss how combining conventional approaches with computational modeling, modern techniques of in vivo imaging, and genetic manipulation opens prospects for exploration of yet hidden mechanisms of seizure genesis.
The Secret Bayesian Life of Ring Attractor Networks
Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring-attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring-attractor networks can track both a HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. 
We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
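One way to make the central claim concrete — that bump amplitude can encode certainty — is a minimal two-dimensional sketch of a circular filter, in which the estimate is a vector whose angle is the heading and whose length is the confidence. This is a deliberate simplification of the circular Kalman filter described above, with illustrative constants rather than values from the paper.

```python
import numpy as np

def predict(z, omega, dt, c_diffusion):
    """Rotate the estimate by the angular-velocity observation and
    shrink its length: confidence decays while integrating velocity."""
    rot = np.array([[np.cos(omega * dt), -np.sin(omega * dt)],
                    [np.sin(omega * dt),  np.cos(omega * dt)]])
    return (1.0 - c_diffusion * dt) * (rot @ z)

def update(z, phi, kappa_obs):
    """Add an absolute heading observation phi (e.g., a visual
    landmark) as a vector weighted by its reliability kappa_obs."""
    return z + kappa_obs * np.array([np.cos(phi), np.sin(phi)])

# start uncertain at angle 0, then repeatedly observe a landmark at 1.0 rad
z = 0.1 * np.array([1.0, 0.0])          # short vector = low confidence
for _ in range(20):
    z = predict(z, omega=0.0, dt=0.1, c_diffusion=0.5)
    z = update(z, phi=1.0, kappa_obs=0.3)
heading = np.arctan2(z[1], z[0])        # pulled toward the landmark
confidence = np.linalg.norm(z)          # grows under consistent evidence
```

Consistent landmark input lengthens the vector (higher certainty), while prediction alone shortens it — the vector-length analogue of a bump amplitude that rises and falls away from the attractor state.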
Where do problem spaces come from? On metaphors and representational change
The challenges of problem solving do not exclusively lie in how to perform heuristic search, but begin with how we understand a given task: how we cognitively represent the task domain and its components can determine how quickly someone is able to progress towards a solution, whether advanced strategies can be discovered, or even whether a solution is found at all. While this challenge of constructing and changing representations was acknowledged early on in problem solving research, for the most part it has been sidestepped by focussing on simple, well-defined problems whose representation is almost fully determined by the task instructions. Thus, the established theory of problem solving as heuristic search in problem spaces has little to say on this. In this talk, I will present a study designed to explore this issue, whose main challenge is finding and refining an adequate problem representation. This exploratory case study investigated how pairs of participants acquaint themselves with a complex spatial transformation task in the domain of iterated mental paper folding over the course of several days. Participants have to understand the geometry of edges that emerges when a sheet of paper is repeatedly folded mentally in alternating directions, without the use of external aids. Faced with the difficulty of handling increasingly complex folds in light of limited cognitive capacity, participants are forced to look for ways to represent folds more efficiently. In a qualitative analysis of video recordings of the participants' behaviour, the development of their conceptualisation of the task domain was traced over the course of the study, focussing especially on their use of gesture and the spontaneous occurrence and use of metaphors in the construction of new representations.
Based on these observations, I will conclude the talk with several theoretical speculations regarding the roles of metaphor and cognitive capacity in representational change.
Autologous hematopoietic stem cell transplantation as a highly effective treatment for multiple sclerosis - clinical and mechanistic observations
The functional connectome across temporal scales
The view of human brain function has drastically shifted over the last decade, owing to the observation that the majority of brain activity is intrinsic rather than driven by external stimuli or cognitive demands. Specifically, all brain regions continuously communicate in spatiotemporally organized patterns that constitute the functional connectome, with consequences for cognition and behavior. In this talk, I will argue that another shift is underway, driven by new insights from synergistic interrogation of the functional connectome using different acquisition methods. The human functional connectome is typically investigated with functional magnetic resonance imaging (fMRI) that relies on the indirect hemodynamic signal, thereby emphasizing very slow connectivity across brain regions. Conversely, more recent methodological advances demonstrate that fast connectivity within the whole-brain connectome can be studied with real-time methods such as electroencephalography (EEG). Our findings show that combining fMRI with scalp or intracranial EEG in humans, especially when recorded concurrently, paints a rich picture of neural communication across the connectome. Specifically, the connectome comprises both fast, oscillation-based connectivity observable with EEG, as well as extremely slow processes best captured by fMRI. While the fast and slow processes share an important degree of spatial organization, these processes unfold in a temporally independent manner. Our observations suggest that fMRI and EEG may be envisaged as capturing distinct aspects of functional connectivity, rather than intermodal measurements of the same phenomenon. Infraslow fluctuation-based and rapid oscillation-based connectivity of various frequency bands constitute multiple dynamic trajectories through a shared state space of discrete connectome configurations. 
The multitude of flexible trajectories may concurrently enable functional connectivity across multiple independent sets of distributed brain regions.
Intrinsic Rhythms in a Giant Single-Celled Organism and the Interplay with Time-Dependent Drive, Explored via Self-Organized Macroscopic Waves
Living Systems often seem to follow, in addition to external constraints and interactions, an intrinsic predictive model of the world — a defining trait of Anticipatory Systems. Here we study rhythmic behaviour in Caulerpa, a marine green alga, which appears to predict the day/night light cycle. Caulerpa consists of differentiated organs resembling leaves, stems and roots. While an individual can exceed a meter in size, it is a single multinucleated giant cell. Active transport has been hypothesized to play a key role in organismal development. It has been an open question in the literature whether rhythmic transport phenomena in this organism are of autonomous circadian nature. Using Raspberry-Pi cameras, we track over weeks the morphogenesis of tens of samples concurrently, while tracing at resolution of tens of seconds the variation of the green coverage. The latter reveals waves propagating over centimeters within few hours, and is attributed to chloroplast redistribution at whole-organism scale. Our observations of algal segments regenerating under 12-hour light/dark cycles indicate that the initiation of the waves precedes the external light change. Using time-frequency analysis, we find that the temporal spectrum of these green pulses contains a circadian period. The latter persists over days even under constant illumination, indicative of its autonomous nature. We further explore the system under non-circadian periods, to reveal how the spectral content changes in response. Time-keeping and synchronization are recurring themes in biological research at various levels of description — from subcellular components to ecological systems. We present a seemingly primitive living system that exhibits apparent anticipatory behaviour. This research offers quantitative constraints for theoretical frameworks of such systems.
What is Cognitive Neuropsychology Good For? An Unauthorized Biography
Abstract: There is no doubt that the study of brain damaged individuals has contributed greatly to our understanding of the mind/brain. Within this broad approach, cognitive neuropsychology accentuates the cognitive dimension: it investigates the structure and organization of perceptual, motor, cognitive, and language systems – prerequisites for understanding the functional organization of the brain – through the analysis of their dysfunction following brain damage. Significant insights have come specifically from this paradigm. But progress has been slow and enthusiasm for this approach has waned somewhat in recent years, and the use of existing findings to constrain new theories has likewise declined. What explains the current diminished status of cognitive neuropsychology? One reason may be failure to calibrate expectations about the effective contribution of different subfields of the study of the mind/brain as these are determined by their natural peculiarities – such factors as the types of available observations and their complexity, opportunity of access to such observations, the possibility of controlled experimentation, and the like. Here, I also explore the merits and limitations of cognitive neuropsychology, with particular focus on the role of intellectual, pragmatic, and societal factors that determine scientific practice within the broader domains of cognitive science/neuroscience. I conclude on an optimistic note about the continuing unique importance of cognitive neuropsychology: although limited to the study of experiments of nature, it offers a privileged window into significant aspects of the mind/brain that are not easily accessible through other approaches. Biography: Alfonso Caramazza's research has focussed extensively on how words and their meanings are represented in the brain.
His early pioneering studies helped to reformulate our thinking about Broca's aphasia (not limited to production) and formalised the logic of patient-based neuropsychology. More recently he has been instrumental in reconsidering popular claims about embodied cognition.
Mechanisms of sleep-seizure interactions in tuberous sclerosis and other mTORpathies
An intriguing, relatively unexplored therapeutic avenue to investigate epilepsy is the interaction of sleep mechanisms and seizures. Multiple lines of clinical observations suggest a strong, bi-directional relationship between epilepsy and sleep. Epilepsy and sleep disorders are common comorbidities. Seizures occur more commonly in sleep in many types of epilepsy, and in turn, seizures can cause disrupted sleep. Sudden unexplained death in epilepsy (SUDEP) is strongly associated with sleep. The biological mechanisms underlying this relationship between seizures and sleep are poorly understood, but if better delineated, could offer novel therapeutic approaches to treating both epilepsy and sleep disorders. In this presentation, I will explore this sleep-seizure relationship in mouse models of epilepsy. First, I will present general approaches for performing detailed longitudinal sleep and vigilance state analysis in mice, including pre-weanling neonatal mice. I will then discuss recent data from my laboratory demonstrating an abnormal sleep phenotype in a mouse model of the genetic epilepsy, tuberous sclerosis complex (TSC), and its relationship to seizures. The potential mechanistic basis of sleep abnormalities and sleep-seizure interactions in this TSC model will be investigated, focusing on the role of the mechanistic target of rapamycin (mTOR) pathway and hypothalamic orexin, with potential therapeutic applications of mTOR inhibitors and orexin antagonists. Finally, similar sleep-seizure interactions and mechanisms will be extended to models of acquired epilepsy due to status epilepticus-related brain injury.
Human memory: mathematical models and experiments
I will present my recent work on mathematical modeling of human memory. I will argue that memory recall of random lists of items is governed by a universal algorithm, resulting in an analytical relation between the number of items in memory and the number of items that can be successfully recalled. The retention of items in memory, on the other hand, is not universal and differs for different types of items being remembered; in particular, retention curves for words and sketches are different even when sketches are made to carry only information about the object being drawn. I will discuss the putative reasons for these observations and introduce a phenomenological model predicting retention curves.
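The recall algorithm itself is described only at the level of this abstract; as a hedged illustration of the general idea — that a deterministic, similarity-based retrieval process recalls only a sublinear fraction of stored items — here is a toy recall walk on a random similarity matrix. The representation and transition rule below are illustrative assumptions, not the talk's exact model.

```python
import numpy as np

def recall_walk(sim, start=0, max_steps=10_000):
    """Deterministic recall: from the current item, jump to the most
    similar other item; if that is the one we just came from, take the
    second most similar.  Stop once a (current, next) transition repeats,
    i.e., the walk has entered a cycle."""
    prev, cur = -1, start
    visited, seen_pairs = {start}, set()
    for _ in range(max_steps):
        order = np.argsort(sim[cur])[::-1]       # most similar first
        nxt = order[0] if order[0] != prev else order[1]
        if (cur, nxt) in seen_pairs:             # cycle detected
            break
        seen_pairs.add((cur, nxt))
        prev, cur = cur, nxt
        visited.add(int(cur))
    return len(visited)

rng = np.random.default_rng(1)
N = 256
items = rng.standard_normal((N, 64))             # random item vectors
sim = items @ items.T                            # pairwise similarities
np.fill_diagonal(sim, -np.inf)                   # no self-transitions
recalled = recall_walk(sim)
```

Because the walk is deterministic, it eventually revisits a transition and cycles, so `recalled` stays well below `N` — the qualitative signature behind an analytical recall-versus-memory-size relation.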
Inhibitory connectivity and computations in olfaction
We use the olfactory system and forebrain of (adult) zebrafish as a model to analyze how relevant information is extracted from sensory inputs, how information is stored in memory circuits, and how sensory inputs inform behavior. A series of recent findings provides evidence that inhibition has not only homeostatic functions in neuronal circuits but makes highly specific, instructive contributions to behaviorally relevant computations in different brain regions. These observations imply that the connectivity among excitatory and inhibitory neurons exhibits essential higher-order structure that cannot be determined without dense network reconstructions. To analyze such connectivity we developed an approach referred to as “dynamical connectomics” that combines 2-photon calcium imaging of neuronal population activity with EM-based dense neuronal circuit reconstruction. In the olfactory bulb, this approach identified specific connectivity among co-tuned cohorts of excitatory and inhibitory neurons that can account for the decorrelation and normalization (“whitening”) of odor representations in this brain region. These results provide a mechanistic explanation for a fundamental neural computation that strictly requires specific network connectivity.
Suboptimal human inference inverts the bias-variance trade-off for decisions with asymmetric evidence
Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong) that is minimized with complex inference strategies and variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to the frequently suboptimal inference strategies used by humans. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference, and, more generally, about the information-processing trade-offs that are sensitive to not just the complexity, but also the optimality of the inference process.
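The classical trade-off that these results invert is easy to reproduce numerically. In the sketch below (illustrative numbers, not the task from the study), a simple frequency estimate of a rare-event probability is unbiased but high-variance, while a "complex" estimator with a mis-tuned prior — analogous to incorrect tuning to latent task features — is low-variance but biased.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, n, trials = 0.1, 10, 20_000        # rare event, few samples per run

k = rng.binomial(n, p_true, size=trials)   # successes in each experiment

# simple heuristic: empirical frequency (operates directly on samples)
simple = k / n
# "complex" estimator: posterior mean under a mis-tuned Beta(5, 5) prior
# that wrongly assumes the event is common
complex_ = (k + 5) / (n + 10)

bias_simple = simple.mean() - p_true       # ~0: unbiased
bias_complex = complex_.mean() - p_true    # large: pulled toward 0.5
var_simple, var_complex = simple.var(), complex_.var()
```

The shrinkage estimator has roughly a quarter of the variance of the frequency estimate here, at the cost of a systematic bias of about 0.2 — the same bias/variance signature, attached here to the "complex" rather than the simple strategy.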
Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation
Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in rabbit dentate gyrus in the 1960s, Hebb’s rule—neurons that fire together wire together—gained popularity to explain learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are mostly interpreted to stabilize network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb’s rule, although entirely based on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that the new theoretical framework combining neural activity homeostasis and structural plasticity provides a consistent explanation of our experimental observations.
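The negative-feedback implementation of Hebb-like rewiring can be caricatured in a few lines: let a neuron's input count K grow whenever its firing rate is below a set-point and retract whenever it is above. The scalar model below is an illustrative sketch, not the network simulations from the study; raising the drive (mimicking optogenetic stimulation) triggers synapse loss, and removing it restores baseline connectivity.

```python
import numpy as np

def run_phase(K, drive, steps, r_target=5.0, nu=0.05, dt=0.1):
    """Homeostatic structural plasticity in caricature: the synapse
    count K grows when the rate r = K * drive is below the set-point
    r_target and retracts when above -- negative feedback on connectivity."""
    Ks = []
    for _ in range(steps):
        r = K * drive
        K = max(K + dt * nu * (r_target - r), 0.0)
        Ks.append(K)
    return K, np.array(Ks)

K = 10.0                                        # baseline: drive 0.5 -> r = 5
K, _ = run_phase(K, drive=0.5, steps=2000)
K_before = K
K, _ = run_phase(K, drive=1.0, steps=2000)      # "optogenetic" drive doubled
K_stim = K
K, _ = run_phase(K, drive=0.5, steps=2000)      # stimulation switched off
K_after = K
```

Despite being built entirely from negative feedback, the rule rewires connectivity in a stimulus-dependent way: connectivity drops while the extra drive is on and recovers after it is removed, echoing the rewiring-by-stimulation scenario described above.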
Understanding the role of neural heterogeneity in learning
The brain has a hugely diverse and heterogeneous nature. The exact role of this heterogeneity has been relatively little explored, as most neural models tend to be largely homogeneous. We trained spiking neural networks with varying degrees of heterogeneity on complex real-world tasks and found that heterogeneity resulted in more stable and robust training and improved training performance, especially for tasks with a higher temporal structure. Moreover, the optimal parameter distributions discovered by training were similar to those observed experimentally. These findings suggest that heterogeneity is not simply a result of noisy biological processes, but may play a crucial role for learning in complex, changing environments.
The role of high- and low-level factors in smooth pursuit of predictable and random motions
Smooth pursuit eye movements are among our most intriguing motor behaviors. They are able to keep the line of sight on smoothly moving targets with little or no overt effort or deliberate planning, and they can respond quickly and accurately to changes in the trajectory of motion of targets. Nevertheless, despite these seeming automatic characteristics, pursuit is highly sensitive to high-level factors, such as the choices made about attention, or beliefs about the direction of upcoming motion. Investigators have struggled for decades with the problem of incorporating both high- and low-level processes into a single coherent model. This talk will present an overview of the current state of efforts to incorporate high- and low-level influences, as well as new observations that add to our understanding of both types of influences. These observations (in contrast to much of the literature) focus on the directional properties of pursuit. Studies will be presented that show: (1) the direction of smooth pursuit made to pursue fields of noisy random dots depends on the relative reliability of the sensory signal and the expected motion direction; (2) smooth pursuit shows predictive responses that depend on the interpretation of cues that signal an impending collision; and (3) smooth pursuit during a change in target direction displays kinematic properties consistent with the well-known two-thirds power law. Implications for incorporating high- and low-level factors into the same framework will be discussed.
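The two-thirds power law mentioned in point (3) relates angular speed A to curvature C as A = K·C^{2/3}, or equivalently tangential speed v = K·C^{-1/3}. As a quick, purely illustrative numerical check (not data from the study), one can exploit the fact that an ellipse traced at a constant parameter rate obeys the law exactly:

```python
import numpy as np

# An ellipse traced at constant parameter rate obeys the law exactly:
# x = a*cos(t), y = b*sin(t)  gives  v = (a*b)**(1/3) * curvature**(-1/3).
t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
a, b = 2.0, 1.0
x, y = a * np.cos(t), b * np.sin(t)

# numerical derivatives of the trajectory
dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)

speed = np.hypot(dx, dy)
curvature = np.abs(dx * ddy - dy * ddx) / speed**3

# log-log regression: slope should be close to -1/3
slope, _ = np.polyfit(np.log(curvature), np.log(speed), 1)
```

Fitting log speed against log curvature recovers the −1/3 exponent, which is the kinematic signature the pursuit data are compared against.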
Adaptation-driven sensory detection and sequence memory
Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to a subsequent reduction of the spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property, and can explain observed correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilus neuron function in the hippocampus.
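The serial ISI correlations mentioned above can be sketched with a generic toy model (not the specific model from the talk; all parameters are illustrative): a noisy integrate-and-fire neuron with a slow spike-triggered adaptation current develops negative correlations between adjacent interspike intervals, because a short interval leaves more accumulated adaptation, which lengthens the next interval.

```python
import numpy as np

# Generic sketch: perfect integrate-and-fire neuron with a slow
# spike-triggered adaptation current a; parameters are illustrative.
rng = np.random.default_rng(1)
dt, n_steps = 0.1, 200_000            # ms per step, total steps (20 s)
tau_a, delta_a = 100.0, 0.1           # adaptation time constant and per-spike kick
mu, sigma = 1.5, 0.5                  # mean drive and noise strength
v, a = 0.0, 0.0
spike_times = []

for step in range(n_steps):
    v += dt * (mu - a) + sigma * np.sqrt(dt) * rng.standard_normal()
    a -= dt * a / tau_a               # adaptation decays slowly...
    if v >= 1.0:                      # ...crossing threshold emits a spike
        v = 0.0
        a += delta_a                  # ...and increments the adaptation state
        spike_times.append(step * dt)

isi = np.diff(spike_times)
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]   # lag-1 serial ISI correlation (< 0)
```

Because tau_a is much longer than the mean ISI here, the adaptation state carries memory across several intervals, which is what imparts the serial correlations in baseline activity.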
Bacterial rheotaxis in bulk and at surfaces
Individual bacteria transported in viscous flows show complex interactions with the flow and with bounding surfaces, resulting from their complex shape as well as their activity. Understanding these transport dynamics is crucial, as they impact soil contamination and transport in biological conduits or catheters, and thus constitute a serious health threat. Here we investigate the trajectories of individual E. coli bacteria in confined geometries under flow, using microfluidic model systems, in bulk flows as well as close to surfaces, with a novel Lagrangian 3D tracking method. Combining experimental observations and modelling, we elucidate the origin of upstream swimming, lateral drift, and persistent transport along corners. [1] Junot et al., EPL 126, 44003 (2019). [2] Mathijssen et al., Nature Commun. 10, 3 (2019). [3] Figueroa-Morales et al., Soft Matter 11, 6284-6293 (2015). [4] Darnige et al., Rev. Sci. Instrum. 88, 055106 (2017). [5] Jing et al., Sci. Adv. 6, eabb2012 (2020). [6] Figueroa-Morales et al., Sci. Adv. 6, eaay0155 (2020).
Co-tuned, balanced excitation and inhibition in olfactory memory networks
Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. 
In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
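The "paradoxical effect" signature used above to identify inhibition-stabilized subnetworks can be reproduced in a minimal two-population rate model (a generic inhibition-stabilized network in the spirit of Tsodyks et al., 1997, not the Dp model itself; weights and inputs are illustrative choices): adding inhibitory input to the I population paradoxically raises its steady-state rate.

```python
# Generic linear-threshold E-I rate model in the inhibition-stabilized
# regime (w_EE > 1); all weights and inputs are illustrative.
w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 2.4, 1.0

def steady_state(I_E, I_I, dt=0.01, steps=20_000):
    """Euler-integrate the rate equations to their stable fixed point."""
    r_E = r_I = 0.0
    for _ in range(steps):
        r_E += dt * (-r_E + max(0.0, w_EE * r_E - w_EI * r_I + I_E))
        r_I += dt * (-r_I + max(0.0, w_IE * r_E - w_II * r_I + I_I))
    return r_E, r_I

rE0, rI0 = steady_state(I_E=2.0, I_I=1.0)   # baseline
rE1, rI1 = steady_state(I_E=2.0, I_I=0.5)   # extra inhibition onto I
# Paradoxically rI1 > rI0: inhibiting the I population raises its rate,
# because the withdrawn inhibition lets the unstable E subnetwork grow,
# which in turn drives I harder.
```

The effect hinges on w_EE > 1: without recurrent excitatory instability the network is not inhibition-stabilized and the I rate would simply fall.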
Recurrent network dynamics lead to interference in sequential learning
Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, in particular with the relative weight of input-driven versus autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.
In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: the random connectivity strongly accelerates the learning mode of new formation, while having less effect on rotation. New formation thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
Frustrated Self-Assembly of Non-Euclidean Crystals of Nanoparticles
Self-organized complex structures in nature, e.g., viral capsids, hierarchical biopolymers, and bacterial flagella, offer efficiency, adaptability, robustness, and multi-functionality. Can we program the self-assembly of three-dimensional (3D) complex structures using simple building blocks, and reach a similar or higher level of sophistication in engineered materials? Here we present an analytic theory for the self-assembly of polyhedral nanoparticles (NPs) based on their crystal structures in non-Euclidean space. We show that the unavoidable geometrical frustration of these particle shapes, combined with competing attractive and repulsive interparticle interactions, leads to controllable self-assembly of structures of complex order. Applying this theory to tetrahedral NPs, we find high-yield and enantiopure self-assembly of helicoidal ribbons, exhibiting qualitative agreement with experimental observations. We expect that this theory will offer a general framework for the self-assembly of simple polyhedral building blocks into rich complex morphologies with new material capabilities such as tunable optical activity, essential for multiple emerging technologies.
Lab cognition going wild: Field experiments on vervet monkeys
I will present field experiments on vervet monkeys testing physical and social cognition, with a focus on social learning. The understanding of the emergence of cultural behaviours in animals has advanced significantly with contributions from complementary approaches: natural observations and controlled field experiments. Experiments with wild vervet monkeys highlight that monkeys are selective about ‘who’ they learn from socially and that they will abandon personal foraging preferences in favour of group norms new to them. The reported findings highlight the feasibility of studying cognition under field conditions.
STDP and the transfer of rhythmic signals in the brain
Rhythmic activity in the brain has been reported in relation to a wide range of cognitive processes, and changes in rhythmic activity have been related to pathological states. These observations raise the question of the origin of these rhythms: can the mechanisms that generate these rhythms and allow the rhythmic signal to propagate be acquired via a process of learning? In my talk I will focus on spike timing dependent plasticity (STDP) and examine under what conditions this unsupervised learning rule can facilitate the propagation of rhythmic activity downstream in the central nervous system. Next, I will apply the theory of STDP to the whisker system and demonstrate how STDP can shape the distribution of preferred firing phases in a downstream population. Interestingly, in both cases the STDP dynamics does not relax to a fixed-point solution; rather, the synaptic weights remain dynamic. Nevertheless, STDP allows the system to retain its functionality in the face of continuous remodeling of the entire synaptic population.
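For reference, a generic pair-based STDP window (a common textbook form, not necessarily the exact rule analyzed in the talk; amplitudes and time constants are illustrative): potentiation when the presynaptic spike leads the postsynaptic one, depression otherwise, each with an exponential time window.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair, delta_t = t_post - t_pre in ms:
    potentiation for pre-before-post, depression for post-before-pre."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# For rhythmic pre/post activity, the net drift of a synapse depends on the
# distribution of spike-time lags; integrating the window over lags gives
# the sign of the drift (negative here: a depression-dominated window).
lags = np.linspace(-50, 50, 1001)
net = stdp_dw(lags).sum() * (lags[1] - lags[0])
```

With a_minus·tau_minus > a_plus·tau_plus, uncorrelated spiking depresses the synapse on average, so only consistently phase-locked (pre-leading) pairings strengthen a connection, which is the basic mechanism by which such a rule can select pathways that transmit a rhythm.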
Personality Evaluated: What Do Other People Really Think of You?
What do other people really think of you? In this talk, I highlight the unique perspective that other people have on the most consequential aspects of our personalities—how we treat others, our best and worst qualities, and our moral character. First, I compare how people thought they behaved with how they actually behaved in everyday life (based on observer ratings of unobtrusive audio recordings; 217 people, 2,519 observations). I show that when people think they are being kind (vs. rude), others do not necessarily agree. This suggests that people may have blind spots about how well they are treating others in the moment. Next, I compare what 463 people thought their own best and worst traits were with what their friends thought about them. The results reveal that friends are more likely to point out flaws in the prosocial and moral domains (e.g., “inconsiderate”, “selfish”, “manipulative”) than are people themselves. Does this imply that others might want us to be more moral? To find out, I compare what targets (N = 800) want to change about their own personalities with what their close others (N = 958) want to change about them. The results show that people don’t particularly want to be more moral, and their close others don’t want them to be more moral, either. I conclude with future directions on honest feedback as a pathway to self-insight and, ultimately, self-improvement.
Recurrent problems in spinal-cord and cerebellar circuits
One of the best established recurrent inhibitory pathways is the recurrent inhibition of mammalian motoneurons through Renshaw cells. Golgi cells form an inhibitory feedback circuit in the granular layer of the cerebellum. Feedback inhibitory pathways are long-established “textbook” elements of neural circuitry, but in both cases their functional role has not been well established. Here I will present some new observations on the function of recurrent inhibition in the spinal cord, supporting the idea that this connection frequency-tunes the transmission of inputs through motoneurons. Secondly, I will discuss evidence that the function of Golgi cells is much more complex than classical studies based on circuit connectivity suggest.
The neural basis of human face identity recognition
The face is the primary source of information for recognizing the identity of people around us, but the neural basis of this astonishing ability remains largely unknown. In this presentation, I will define the fundamental problem of face identity recognition, arguing that there is a specific expertise of the human species at this function. I will then attempt to integrate a large corpus of observations from lesion studies, neuroimaging, human intracerebral recordings and stimulation into a coherent framework to shed light on the neural mechanisms of human face identity recognition.
More than mere association: Are some figure-ground organisation processes mediated by perceptual grouping mechanisms?
Figure-ground organisation and perceptual grouping are classic topics in Gestalt and perceptual psychology. They often appear alongside one another in introductory textbook chapters on perception and have a long history of investigation. However, they are typically discussed as separate processes of perceptual organisation with their own distinct phenomena and mechanisms. Here, I will propose that perceptual grouping and figure-ground organisation are strongly linked. In particular, perceptual grouping can provide a basis for, and may share mechanisms with, a wide range of figure-ground principles. To support this claim, I will describe a new class of figure-ground principles based on perceptual grouping between edges and demonstrate that this inter-edge grouping (IEG) is a powerful influence on figure-ground organisation. I will also draw support from our other results showing that grouping between edges and regions (i.e., edge-region grouping) can affect figure-ground organisation (Palmer & Brooks, 2008) and that contextual influences in figure-ground organisation can be gated by perceptual grouping between edges (Brooks & Driver, 2010). In addition to these modern observations, I will also argue that we can describe some classic figure-ground principles (e.g., symmetry, convexity, etc.) using perceptual grouping mechanisms. These results suggest that figure-ground organisation and perceptual grouping have more than a mere association under the umbrella topics of Gestalt psychology and perceptual organisation. Instead, perceptual grouping may provide a mechanism underlying a broad class of new and extant figure-ground principles.
Motor Cortex in Theory and Practice
A central question in motor physiology has been whether motor cortex activity resembles muscle activity, and if not, why not? Over fifty years of extensive observations have failed to provide a concise answer, and the topic remains much debated. To provide a different perspective, we employed a novel behavioral paradigm that affords extensive comparison between time-evolving neural and muscle activity. Single motor-cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ’trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Remarkably, we were able to predict motor cortex activity from muscle activity alone, by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling. Our results argue that motor cortex embeds descending commands in additional structure that ensures low tangling, and thus noise-robustness. The dominant structure in motor cortex may thus serve not a representational function (encoding specific variables) but a computational function: ensuring that outgoing commands can be generated reliably. Our results establish the utility of an emerging approach: understanding the structure of neural activity based on properties of population geometry that flow from normative principles such as noise robustness.
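A tangling measure of the kind described can be sketched as Q(t) = max over t' of ‖ẋ(t) − ẋ(t')‖² / (‖x(t) − x(t')‖² + ε): it is large whenever the trajectory revisits nearly the same state with a very different velocity. A minimal illustration (the toy trajectories and the choice of ε are assumptions for this sketch): a figure-eight, which crosses itself, is far more tangled than a circle.

```python
import numpy as np

def tangling(X, dt, eps=None):
    """Per-timepoint tangling Q(t) for a trajectory X of shape (T, dims)."""
    dX = np.gradient(X, dt, axis=0)                   # velocity estimate
    if eps is None:
        eps = 0.1 * X.var(axis=0).sum()               # regularizer (a convention)
    d_pos = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_vel = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (d_vel / (d_pos + eps)).max(axis=1)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)       # never revisits a state
eight  = np.stack([np.sin(t), np.sin(2 * t)], axis=1)   # self-crossing at origin

q_circle = tangling(circle, t[1] - t[0])
q_eight  = tangling(eight,  t[1] - t[0])
# q_eight peaks sharply at the crossing, where position repeats but velocity differs.
```

This captures the intuition in the abstract: low tangling means similar states always lead to similar futures, so a downstream readout is robust to state noise.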
An inference perspective on meta-learning
While meta-learning algorithms are often viewed as algorithms that learn to learn, an alternative viewpoint frames meta-learning as inferring a hidden task variable from experience consisting of observations and rewards. From this perspective, learning to learn is learning to infer. This viewpoint can be useful in solving problems in meta-RL, which I’ll demonstrate through two examples: (1) enabling off-policy meta-learning, and (2) performing efficient meta-RL from image observations. I’ll also discuss how this perspective leads to an algorithm for few-shot image segmentation.
A novel hypothesis on the role of olfactory bulb granule cells
The role of granule cells in olfactory processing is surrounded by several enigmatic observations, such as the existence of reciprocal spines and the mechanisms for GABA release from them, the missing evidence for functional reciprocal connectivity, and the apparently low inhibitory drive of granule cells, both with respect to recurrent and lateral inhibition. Here, I summarize recent results with regard to GABA release, leading to a novel hypothesis on granule cell function that has the potential to resolve most of these enigmas. I predict that granule cells provide dynamically switched lateral inhibition between coactive glomerular columns and thus possibly a means of olfactory combinatorial coding.
The Desire to Know: Non-Instrumental Information Seeking in Mice
Animals are motivated to acquire knowledge. A particularly striking example is information seeking behavior: animals often seek out sensory cues that will inform them about the properties of uncertain future rewards, even when there is no way for them to use this information to influence the reward outcome, and even when this information comes at a considerable cost. Evidence from monkey electrophysiology and human fMRI studies suggests that orbitofrontal cortex and midbrain dopamine neurons represent the subjective value of knowledge during information seeking behavior. However, it remains unclear how the brain assigns value to information and how it integrates this with other incentives to drive behavior. We have therefore developed a task to test if information preferences are present in mice and study how informational value is imparted on stimuli. Mice are trained to enter a center port and receive an initial odor that instructs them to either go to an informative side port, go to an uninformative side port, or choose freely between them. The chosen side port then yields a second odor cue followed by a delayed probabilistic water reward. The informative port’s odor cue indicates whether the upcoming reward will be big or small. The uninformative port’s odor cue is uncorrelated with the trial outcome. Crucially, the two ports only differ in their odor cues, not in their water value since both offer identical probabilities of big and small rewards. We find that mice prefer the informative port. This preference is evident as a higher percentage choice of the informative port when given a free choice (67% +/- 1.7%, n = 14, p < 0.03), as well as by faster reaction times when instructed to go to the informative port (544ms +/- 21ms vs 795ms +/- 21ms, n = 14, p < 0.001). 
The preference for information is robust to within-animal reversals of informative and uninformative port locations, and, moreover, mice are willing to pay for information by choosing the informative port even if its reward amount is reduced to be substantially lower than the uninformative port. These behavioral observations suggest that odor stimuli are imparted with informational value as mice learn the information seeking task. We are currently imaging neural activity in orbitofrontal cortex with microendoscopes to identify changes in neural activity that may reflect value associated with the acquisition of knowledge.
Geometry of Neural Computation Unifies Working Memory and Planning
Cognitive tasks typically require the integration of working memory, contextual processing, and planning to be carried out in close coordination. However, these computations are typically studied within neuroscience as independent modular processes in the brain. In this talk I will present an alternative view, that neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during performance of delay tasks. Human behaviour in this task is consistent with the contingency representation, and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and show how contingency representation explains salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel and falsifiable predictions for single-unit and population neural recordings.
Rational thoughts in neural codes
First, we describe a new method for inferring the mental model of an animal performing a natural task. We use probabilistic methods to compute the most likely mental model based on an animal’s sensory observations and actions. This also reveals dynamic beliefs that would be optimal according to the animal’s internal model, and thus provides a practical notion of “rational thoughts.” Second, we construct a neural coding framework by which these rational thoughts, their computational dynamics, and actions can be identified within the manifold of neural activity. We illustrate the value of this approach by training an artificial neural network to perform a generalization of a widely used foraging task. We analyze the network’s behaviour to find rational thoughts, and successfully recover the neural properties that implemented those thoughts, providing a way of interpreting the complex neural dynamics of the artificial brain. Joint work with Zhengwei Wu, Minhae Kwon, Saurabh Daptardar, and Paul Schrater.