Emergence
Developmental emergence of personality
The Nature versus Nurture debate has generally been framed as a genome-versus-experience dichotomy, and this framing has dominated our thinking about behavioral individuality and personality traits. In contrast, the role of nonheritable noise during brain development in generating behavioral variation is understudied. Using the Drosophila melanogaster visual system, I will discuss our efforts to dissect how individuality in circuit wiring emerges during development, and how that helps generate individual behavioral variation.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available for learning about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized versus group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
Emergence of behavioural individuality from global microstructure of the brain and learning
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic painted caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Consciousness in the cradle: on the emergence of infant experience
Although each of us was once a baby, infant consciousness remains mysterious and there is no received view about when, and in what form, consciousness first emerges. Some theorists defend a ‘late-onset’ view, suggesting that consciousness requires cognitive capacities which are unlikely to be in place before the child’s first birthday at the very earliest. Other theorists defend an ‘early-onset’ account, suggesting that consciousness is likely to be in place at birth (or shortly after) and may even arise during the third trimester. Progress in this field has been difficult, not just because of the challenges associated with procuring the relevant behavioral and neural data, but also because of uncertainty about how best to study consciousness in the absence of the capacity for verbal report or intentional behavior. This review examines both the empirical and methodological progress in this field, arguing that recent research points in favor of early-onset accounts of the emergence of consciousness.
NeuroAI from model to understanding: revealing the emergence of computations from the collective dynamics of interacting neurons
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, neuronal activity is believed to play a critical role in shaping circuits for behavior. Current AI technologies are modeled on the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing experiments revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions in which neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory-guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Dynamics of cortical circuits: underlying mechanisms and computational implications
A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical work has shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/\sqrt{K}. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. These results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/\log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections that has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
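The 1/\sqrt{K} scaling mentioned above is precisely what forces balance: with K synapses of efficacy 1/\sqrt{K}, the mean input from one population grows as \sqrt{K} while its fluctuations stay of order one, so excitation and inhibition must cancel at the level of the mean. A minimal numerical sketch of this scaling argument (my illustration, not the speaker's model; the 10% activity rate per time bin is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def input_stats(K, rate=0.1, trials=2000):
    """Mean and std of the summed input from K synapses of efficacy J = 1/sqrt(K).

    Each synapse is active independently with probability `rate` per time bin,
    so the summed input is J * Binomial(K, rate)."""
    J = 1.0 / np.sqrt(K)
    spikes = rng.random((trials, K)) < rate
    total = J * spikes.sum(axis=1)
    return total.mean(), total.std()

for K in (100, 1000, 10000):
    m, s = input_stats(K)
    print(f"K={K:6d}  mean={m:6.2f}  std={s:5.2f}")
```

The mean grows as \sqrt{K} (1.0, 3.2, 10.0 here) while the standard deviation stays fixed near 0.3, so irregular firing at large K requires the excitatory mean to be cancelled by an inhibitory one.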
Motor contribution to auditory temporal predictions
Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream and thus facilitating sensory selection. Accordingly, in the auditory domain, auditory-motor interactions are observed during the perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account of this theory and will detail the parameters governing the emergence of this auditory-motor coupling, drawing on a set of behavioral and magnetoencephalography (MEG) experiments.
Adaptation via innovation in the animal kingdom
Over the course of evolution, the human race has achieved a number of remarkable innovations that have enabled us to adapt to and benefit from the environment ever more effectively. The ongoing environmental threats and health disasters of our world have now made it crucial to understand the cognitive mechanisms behind innovative behaviours. In my talk, I will present two research projects with examples of innovation-based behavioural adaptation from the taxonomic kingdom of animals, serving as a comparative psychological model for mapping the evolution of innovation. The first project focuses on the challenge of overcoming physical disability. In this study, we investigated an injured kea (Nestor notabilis) that exhibits efficient, intentional, and innovative tool-use behaviour to compensate for his disability, providing evidence of innovation-based adaptation to a physical disability in a non-human species. The second project focuses on the evolution of fire use from a cognitive perspective. Fire has been one of the most dominant ecological forces in human evolution; however, it is still unknown what capabilities and environmental factors could have led to the emergence of fire use. In the core study of this project, we investigated a captive population of Japanese macaques (Macaca fuscata) that has been regularly exposed to campfires during the cold winter months for over 60 years. Our results suggest that macaques are able to take advantage of the positive effects of fire while avoiding the dangers of flames and hot ashes, and exhibit calm behaviour around the bonfire. In addition, I will present a research proposal targeting the foraging behaviour of predatory birds in parts of Australia frequently affected by bushfires. Anecdotal reports suggest that some birds use burning sticks to spread the flames, a behaviour that has not yet been scientifically observed and evaluated.
In summary, these projects explore innovative behaviours across three different species groups, three different habitats, and three different ecological drivers, providing insights into the cognitive and behavioural mechanisms of adaptation through innovation.
Social Curiosity
In this lecture, I would like to share with a broad audience the empirical results gathered and the theoretical advances made in the framework of the Lendület project entitled ’The cognitive basis of human sociality’. The main objective of this project was to understand, from the angle of cognitive science, the mechanisms that enable the unique sociality of humans. In my talk, I will focus on recent empirical evidence concerning three fundamental social cognitive functions (social categorization, theory of mind, and social learning, studied mainly through the empirical lens of developmental psychology) in order to outline a theory that emphasizes the need to consider their interconnectedness. The proposal is that the ability to represent the social world along categories and the capacity to read others’ minds are used in an integrated way to efficiently assess the epistemic states of fellow humans by creating a shared representational space. The emergence of this shared representational space is both the result of and a prerequisite for efficient learning about the physical and social environment.
Hidden nature of seizures
How seizures emerge from the abnormal dynamics of neural networks within epileptogenic tissue remains an enigma. Are seizures random events, or do detectable changes in brain dynamics precede them? Are the mechanisms of seizure emergence identical at the onset and at later stages of epilepsy? Is the risk of seizure occurrence stable, or does it change over time? A myriad of questions remains to be answered before we understand the core principles governing seizure genesis. The last decade has brought unprecedented insights into the complex nature of seizure emergence. It is now believed that seizure onset represents the product of interactions between the process of transition to seizure, long-term fluctuations in seizure susceptibility, epileptogenesis, and disease progression. During the lecture, we will review the latest observations about mechanisms of ictogenesis operating at multiple temporal scales. We will show how these observations contribute to the formation of a comprehensive theory of seizure genesis and challenge traditional perspectives on ictogenesis. Finally, we will discuss how combining conventional approaches with computational modeling, modern techniques of in vivo imaging, and genetic manipulation opens prospects for the exploration of yet hidden mechanisms of seizure genesis.
Chandelier cells shine a light on the emergence of GABAergic circuits in the cortex
GABAergic interneurons are chiefly responsible for controlling the activity of local circuits in the cortex. Chandelier cells (ChCs) are a type of GABAergic interneuron that controls the output of hundreds of neighbouring pyramidal cells through axo-axonic synapses targeting the axon initial segment (AIS). Despite their importance in modulating circuit activity, the development and function of axo-axonic synapses remain poorly understood. We have investigated the emergence and plasticity of axo-axonic synapses in layer 2/3 of the somatosensory cortex (S1) and found that ChCs follow what appear to be homeostatic rules when forming synapses with pyramidal neurons. We are currently implementing in vivo techniques to image the process of axo-axonic synapse formation during development and uncover the dynamics of synaptogenesis and pruning at the AIS. In addition, we are using an all-optical approach to both activate and measure the activity of chandelier cells and their postsynaptic partners in the primary visual cortex (V1) and somatosensory cortex (S1) of mice, also during development. We aim to provide a structural and functional description of the emergence and plasticity of a GABAergic synapse type in the cortex.
Odd dynamics of living chiral crystals
The emergent dynamics exhibited by collections of living organisms often shows signatures of symmetries that are broken at the single-organism level. At the same time, organism development itself encompasses a well-coordinated sequence of symmetry breaking events that successively transform a single, nearly isotropic cell into an animal with well-defined body axis and various anatomical asymmetries. Combining these key aspects of collective phenomena and embryonic development, we describe here the spontaneous formation of hydrodynamically stabilized active crystals made of hundreds of starfish embryos that gather during early development near fluid surfaces. We describe a minimal hydrodynamic theory that is fully parameterized by experimental measurements of microscopic interactions among embryos. Using this theory, we can quantitatively describe the stability, formation and rotation of crystals and rationalize the emergence of mechanical properties that carry signatures of an odd elastic material. Our work thereby quantitatively connects developmental symmetry breaking events on the single-embryo level with remarkable macroscopic material properties of a novel living chiral crystal system.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation and computation by avalanche-supporting networks are of interest to the fields of physics, computer science (computation theory as well as statistical and machine learning), and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), carried out by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank ordering by function probability, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience relating to optimal information processing.
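As a toy illustration of the kind of computation described (not the authors' cascade model), a two-layer threshold network with a single inhibitory (antagonistic) connection suffices to compute XOR, a Boolean function no single threshold unit can compute. Weights and thresholds below are hand-chosen for the example:

```python
import numpy as np

def threshold_net(x, W1, b1, W2, b2):
    """Two-layer McCulloch-Pitts network: a unit fires iff its summed input
    reaches threshold (encoded here as input + bias >= 0)."""
    h = (x @ W1 + b1 >= 0).astype(int)   # hidden layer of threshold units
    return int((h @ W2 + b2 >= 0)[0])    # output threshold unit

# Hidden unit 0 computes OR(x1, x2); hidden unit 1 computes AND(x1, x2).
W1 = np.array([[1, 1], [1, 1]])
b1 = np.array([-1, -2])
# Output fires for OR but is vetoed by AND: the -2 weight is the inhibitory link.
W2 = np.array([[1], [-2]])
b2 = np.array([-1])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", threshold_net(np.array(x), W1, b1, W2, b2))
```

Removing the inhibitory weight collapses the output to plain OR, which is one way to see why the fraction of inhibition matters for the complexity of the functions a cascade can express.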
The evolution of computation in the brain: Insights from studying the retina
The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of light as the natural input to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best-understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image-forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make-up in extant species. Animals as diverse as eagles and lampreys share the same retinal make-up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retinal neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: how do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?
The function and localization of human consciousness
Scientific studies of consciousness can be roughly categorized into two directions: (1) How/where does consciousness emerge? (the mechanism of consciousness) and (2) Why is there consciousness? (the function of consciousness). I will summarize my past research on the quest for consciousness in these two directions.
Optimization at the Single Neuron Level: Prediction of Spike Sequences and Emergence of Synaptic Plasticity Mechanisms
Intelligent behavior depends on the brain’s ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, whereby the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, providing a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory motion signaling and recall in the visual system. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle to orchestrate learning and synaptic plasticity in single neurons.
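For reference, the standard pair-based STDP window that derived mechanisms like this are usually compared against can be written in a few lines (this is the textbook exponential form, not the predictive rule proposed in the talk; the amplitude and time-constant values are illustrative):

```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window: weight change as a function of spike timing
    difference dt = t_post - t_pre (ms). Potentiation when the presynaptic
    spike precedes the postsynaptic one (dt > 0), depression otherwise;
    both effects decay exponentially with the delay."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp([10.0, -10.0]))  # causal pairing potentiates, acausal depresses
```

A rule of this shape, applied to a repeated input sequence, preferentially strengthens the earliest predictive inputs, which is the intuition behind the spike-advancing behavior described above.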
Remembering Immunity: Central regulation of peripheral immune processes
Thoughts and emotions can impact physiology. This connection is evident by the emergence of disease following stress, psychosomatic disorders, or recovery in response to placebo treatment. Nevertheless, this fundamental aspect of physiology remains largely unexplored. In this talk, I will focus on the brain’s involvement in regulating the peripheral immune response and explore the question of how the brain evaluates and represents the state of the immune system it regulates.
CNStalk: The Emergence of High-Order Hubs in the Human Connectome
The balance of excitation and inhibition and a canonical cortical computation
Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. The balanced network model provides a self-consistent account of this observation: population rates dynamically adjust to yield a state in which all neurons are active at biological levels, with their E & I inputs tightly balanced. But global tight E/I balance predicts population responses with linear stimulus dependence and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when the necessary connectivity conditions for global balance fail, states arise in which only a localized subset of neurons are active and have balanced inputs. We show analytically that in networks of neurons with different stimulus selectivities, the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), which predicts a regime of loose, rather than tight, E/I balance. However, an understanding of the causal relationship between E/I balance and normalization in the SSN, and of the conditions under which the SSN yields significant sublinear integration, is lacking. For weak inputs, the SSN integrates inputs supralinearly, while for very strong inputs it approaches a regime of tight balance. We show that when this latter regime is globally balanced, the SSN cannot exhibit strong normalization for any input strength; thus, in the SSN too, significant normalization requires localized balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation. Time allowing, I will also cover our work extending a normative theoretical account of normalization, which explains it as an example of efficient coding of natural stimuli.
We show that when biological noise is accounted for, this theory makes the same prediction as the SSN: a transition to supralinear integration for weak stimuli.
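For readers unfamiliar with divisive normalization, its canonical form (in the sense of Carandini and Heeger; this sketch is mine, not the abstract's SSN analysis) divides each unit's driven response by the pooled activity, which directly produces the sublinear integration discussed above:

```python
import numpy as np

def normalize(drive, g=10.0, n=2.0, sigma=1.0):
    """Canonical divisive normalization:
        r_i = g * d_i^n / (sigma^n + sum_j d_j^n)
    where d_i is the feedforward drive to unit i and the denominator pools
    over the whole population. Parameter values here are illustrative."""
    d = np.asarray(drive, dtype=float) ** n
    return g * d / (sigma ** n + d.sum())

r_alone = normalize([2.0, 0.0])[0]  # response to stimulus A presented alone
r_pair = normalize([2.0, 2.0])[0]   # response to A when B is added
print(r_alone, r_pair)              # adding B suppresses the response to A
```

The response to two stimuli is less than the sum of the responses to each alone (sublinear integration), and a much stronger second drive pulls the pool toward winner-take-all behavior.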
Emergence of homochirality in large molecular systems
The question of the origin of homochirality of living matter, or the dominance of one handedness for all molecules of life across the entire biosphere, is a long-standing puzzle in research on the origin of life. In the 1950s, Frank proposed a mechanism to explain homochirality based on the properties of a simple autocatalytic network containing only a few chemical species. Following this work, chemists struggled to find experimental realizations of this model, possibly due to a lack of proper methods to identify autocatalysis [1]. In any case, a model based on a few chemical species seems rather limited, because the prebiotic earth is likely to have consisted of complex ‘soups’ of chemicals. To include this aspect of the problem, we recently proposed a mechanism based on certain features of large out-of-equilibrium chemical networks [2]. We showed that a phase transition towards a homochiral state is likely to occur as the number of chiral species in the system becomes large or as the amount of free energy injected into the system increases. Through an analysis of large chemical databases, we showed that there is no need for very large molecules for chiral species to dominate over achiral ones; this already happens when molecules contain about 10 heavy atoms. We also analyzed the various conventions used to measure chirality and discussed the relative chiral signs adopted by different groups of molecules [3]. We then proposed a generalization of Frank's model for large chemical networks, which we characterized using random matrix theory. This analysis includes sparse networks, suggesting that the emergence of homochirality is a robust and generic transition. References: [1] A. Blokhuis, D. Lacoste, and P. Nghe, PNAS 117, 25230 (2020). [2] G. Laurent, D. Lacoste, and P. Gaspard, PNAS 118 (3), e2012741118 (2021). [3] G. Laurent, D. Lacoste, and P. Gaspard, Proc. R. Soc. A 478, 20210590 (2022).
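Frank's original mechanism can be sketched in a few lines: each enantiomer autocatalytically replicates from a shared substrate while the two mutually annihilate, so any small initial imbalance is amplified until one handedness wins. The minimal version below is my simplified sketch, not the network-level generalization from the references; rate constants and initial conditions are arbitrary:

```python
def frank_model(L0, D0, A0=2.0, k=1.0, mu=1.0, dt=0.01, steps=20000):
    """Frank-type model with a substrate A, integrated by forward Euler:
        dL/dt = k*A*L - mu*L*D    (autocatalysis fed by A, mutual antagonism)
        dD/dt = k*A*D - mu*L*D
        dA/dt = -k*A*(L + D)      (substrate consumption)
    A small initial enantiomeric excess grows toward a homochiral state."""
    L, D, A = L0, D0, A0
    for _ in range(steps):
        dL = k * A * L - mu * L * D
        dD = k * A * D - mu * L * D
        dA = -k * A * (L + D)
        L, D, A = L + dt * dL, D + dt * dD, A + dt * dA
    return L, D

L, D = frank_model(L0=0.011, D0=0.009)  # 10% initial excess of L
ee = (L - D) / (L + D)                  # enantiomeric excess, in [-1, 1]
print(f"final ee = {ee:.3f}")
```

Starting from a 10% excess, the enantiomeric excess is driven close to 1; a perfectly racemic start (L0 = D0) sits on the unstable symmetric branch and never breaks symmetry in this deterministic sketch, which is why noise and network size matter in the full treatment.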
Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach
Abstract concepts are a powerful tool for storing information efficiently and making wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms, like the relational match-to-sample task, often show a failure to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulty solving these tasks and often succeed only after long training regimes. Given the huge influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, well known in the infant and cognitive modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction and adapted to their species-specific reward preferences.
Multimodal imaging in Dementia with Lewy bodies
Dementia with Lewy bodies (DLB) is a synucleinopathy but more than half of patients with DLB also have varying degrees of tau and amyloid-β co-pathology. Identifying and tracking the pathologic heterogeneity of DLB with multi-modal biomarkers is critical for the design of clinical trials that target each pathology early in the disease at a time when prevention or delaying the transition to dementia is possible. Furthermore, longitudinal evaluation of multi-modal biomarkers contributes to our understanding of the type and extent of the pathologic progression and serves to characterize the temporal emergence of the associated phenotypic expression. This talk will focus on the utility of multi-modal imaging in DLB.
Scaffolding up from Social Interactions: A proposal of how social interactions might shape learning across development
Social learning and analogical reasoning both provide exponential opportunities for learning. These skills have largely been studied independently, but my future research asks how combining skills across previously independent domains could add up to more than the sum of their parts. Analogical reasoning allows individuals to transfer learning between contexts and opens up infinite opportunities for innovation and knowledge creation. Its origins and development, so far, have largely been studied in purely cognitive domains. Constraining analogical development to non-social domains may mistakenly lead researchers to overlook its early roots and limit ideas about its potential scope. Building a bridge between social learning and analogy could facilitate identification of the origins of analogical reasoning and broaden its far-reaching potential. In this talk, I propose that the early emergence of social learning, its saliency, and its meaningful context for young children provides a springboard for learning. In addition to providing a strong foundation for early analogical reasoning, the social domain provides an avenue for scaling up analogies in order to learn to learn from others via increasingly complex and broad routes.
Individual differences in visual (mis)perception: a multivariate statistical approach
Common factors are omnipresent in everyday life; e.g., it is widely held that there is a common factor g for intelligence. In vision, however, there seems to be a multitude of specific factors rather than a strong and unique common factor. In my thesis, I first examined the multidimensionality of the structure underlying visual illusions. To this end, the susceptibility to various visual illusions was measured. In addition, subjects were tested with variants of the same illusion, which differed in spatial features, luminance, orientation, or contextual conditions. Only weak correlations were observed between the susceptibilities to different visual illusions. An individual showing a strong susceptibility to one visual illusion does not necessarily show a strong susceptibility to other visual illusions, suggesting that the structure underlying visual illusions is multifactorial. In contrast, there were strong correlations between the susceptibilities to variants of the same illusion. Hence, factors seem to be illusion-specific but not feature-specific. Second, I investigated whether a strong visual factor emerges in healthy elderly people and patients with schizophrenia, as may be expected from the general decline in perceptual abilities usually reported in these two populations compared to healthy young adults. Similarly, a strong visual factor may emerge in action video gamers, who often show enhanced perceptual performance compared to non-gamers. Hence, healthy elderly people, patients with schizophrenia, and action video gamers were tested with a battery of visual tasks, such as contrast detection and orientation discrimination tasks. As in the control groups, between-task correlations were generally weak, which argues against the emergence of a strong common factor for vision in these populations.
While similar tasks are usually assumed to rely on similar neural mechanisms, performance in different visual tasks was only weakly related across tasks, i.e., performance does not generalize across visual tasks. These results highlight the relevance of an individual differences approach to unravel the multidimensionality of the visual structure.
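The common-factor logic above can be illustrated with a minimal sketch (not the thesis's actual data or analysis): on synthetic per-subject scores with mostly task-specific variance, the between-task correlation matrix is nearly diagonal and its first principal component carries only a small share of the variance, mirroring the absence of a strong common factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_tasks = 100, 6

# Synthetic task scores: mostly task-specific variance plus a weak
# shared component (all values are made up for illustration).
shared = rng.normal(size=(n_subj, 1))
scores = 0.2 * shared + rng.normal(size=(n_subj, n_tasks))

# Between-task correlation matrix.
R = np.corrcoef(scores, rowvar=False)

# Variance share of the first principal component of R: a value near
# 1 / n_tasks argues against a strong common factor.
eigvals = np.linalg.eigvalsh(R)
pc1_share = eigvals[-1] / eigvals.sum()
print(round(pc1_share, 2))
```

A strong common factor (e.g. g for intelligence) would instead push the first component's share far above 1/n_tasks.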
NMC4 Short Talk: Untangling Contributions of Distinct Features of Images to Object Processing in Inferotemporal Cortex
How do humans perceive everyday objects with diverse features, and how do these seemingly intuitive and effortless mental representations become categorized? Prior literature focusing on the role of the inferotemporal region (IT) has revealed object category clustering that is consistent with the predefined semantic structure (superordinate, ordinate, subordinate). It has, however, been debated whether the neural signals in the IT region reflect such a categorical hierarchy [Wen et al., 2018; Bracci et al., 2017]. Visual attributes of images that correlate with semantic and category dimensions may have confounded these prior results. Our study aimed to address this debate by building and comparing models using the DNN AlexNet to explain the variance in the representational dissimilarity matrix (RDM) of neural signals in the IT region. We found that mid- and high-level perceptual attributes of the DNN model contribute the most to neural RDMs in the IT region. Semantic categories, as in the predefined structure, were moderately correlated with mid-to-high DNN layers (r = 0.24-0.36). Variance partitioning analysis also showed that the IT neural representations were mostly explained by DNN layers, while semantic categorical RDMs added little additional information. In light of these results, we propose that future work should focus more on the specific role IT plays in facilitating the extraction and coding of visual features that lead to the emergence of categorical conceptualizations.
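The RDM-comparison step at the heart of this kind of analysis can be sketched as follows (a toy representational similarity analysis on random stand-in data, not the study's pipeline): build one RDM from model features and one from "neural" patterns, then compare them with a rank correlation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_units = 20, 50

# Hypothetical response patterns: rows are stimuli, columns are model
# units (or recording channels); random stand-ins for real data.
feats = rng.normal(size=(n_items, n_units))
neural = feats + 0.5 * rng.normal(size=(n_items, n_units))

# RDM: vectorized pairwise correlation distances between stimuli.
rdm_model = pdist(feats, metric="correlation")
rdm_neural = pdist(neural, metric="correlation")

# Compare RDMs with a rank correlation, as is common in RSA.
rho, _ = spearmanr(rdm_model, rdm_neural)
print(round(rho, 2))
```

Variance partitioning then extends this by comparing how much unique RDM variance each candidate model (DNN layers vs. semantic categories) explains.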
NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network
The brain exhibits a rich repertoire of oscillatory patterns organized in space, time and frequency. However, despite ever more-detailed characterizations of spectrally-resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we build a reduced network model representing interactions between local neuronal populations (with damped oscillatory responses at 40 Hz) coupled in the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (or metastable) oscillatory modes, whose frequency depends inversely on the number of units, the strength of coupling and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, leading to an optimal fit of the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase-coupling observed in magneto- and electroencephalographic signals as signatures of resonant modes emerging in the space-time structure of the Connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
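The core effect claimed here, that time delays pull the collective frequency below the local 40 Hz rhythm, can be demonstrated in a toy version of the mechanism (two delay-coupled phase oscillators with invented parameters, far simpler than the paper's connectome model):

```python
import numpy as np

# Two identical phase oscillators (40 Hz natural frequency) coupled
# with a conduction delay; illustrative parameter values only.
f0, k, delay, dt = 40.0, 30.0, 0.004, 1e-4
steps, d = 10000, int(delay / dt)

theta = np.zeros((steps, 2))
# History: both oscillators start rotating at the natural frequency.
theta[: d + 1] = 2 * np.pi * f0 * dt * np.arange(d + 1)[:, None]

for t in range(d, steps - 1):
    for i in range(2):
        j = 1 - i
        # Each unit sees its partner's phase from `delay` seconds ago.
        coupling = k * np.sin(theta[t - d, j] - theta[t, i])
        theta[t + 1, i] = theta[t, i] + dt * (2 * np.pi * f0 + coupling)

# Collective frequency over the second half of the run: delayed
# coupling pulls it below the 40 Hz natural frequency.
half = steps // 2
f_collective = (theta[-1, 0] - theta[half, 0]) / (2 * np.pi * (steps - 1 - half) * dt)
print(round(f_collective, 1))
```

The synchronized solution satisfies Ω = ω0 − k·sin(Ω·delay), so longer delays or stronger coupling yield slower collective rhythms, the same inverse dependence described in the abstract.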
Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models
The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
Transdiagnostic approaches to understanding neurodevelopment
Macroscopic brain organisation emerges early in life, even prenatally, and continues to develop through adolescence and into early adulthood. The emergence and continual refinement of large-scale brain networks, connecting neuronal populations across anatomical distance, allows for increasing functional integration and specialisation. This process is thought to be crucial for the emergence of complex cognitive processes. But how and why is this process so diverse? We used structural neuroimaging collected from a large, diverse cohort to explore how different features of macroscopic brain organisation are associated with diverse cognitive trajectories. We used diffusion-weighted imaging (DWI) to construct whole-brain white-matter connectomes. A simulated attack on each child's connectome revealed that some brain networks were strongly organized around highly connected 'hubs'. The more critically children's brains depended on hubs, the better their cognitive skills. Conversely, having poorly integrated hubs was a very strong risk factor for cognitive and learning difficulties across the sample. We subsequently developed a computational framework, using generative network modelling (GNM), to model the emergence of this kind of connectome organisation. Relatively subtle changes in the wiring rules of this computational framework give rise to differential developmental trajectories, because of small biases in the preferential wiring properties of different nodes within the network. Finally, we were able to use this GNM to implicate the molecular and cellular processes that govern these different growth patterns.
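Generative network models of this kind typically grow a connectome edge by edge, with wiring probability traded off between a spatial cost and a topological affinity term. The sketch below is a generic, simplified version of that idea (random positions, a simplified matching index, invented exponents), not the specific GNM used in this work:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_edges = 30, 60
eta, gamma = -2.0, 1.5    # illustrative wiring-rule exponents

# Random node positions; Euclidean distance acts as the wiring cost.
pos = rng.uniform(size=(n, 2))
D = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
np.fill_diagonal(D, 1.0)  # dummy value; diagonal pairs are never drawn

A = np.zeros((n, n))
rows, cols = np.triu_indices(n, k=1)

for _ in range(n_edges):
    common = A @ A                       # shared-neighbour counts
    deg = A.sum(1)
    denom = deg[:, None] + deg[None, :] - common
    match = np.divide(common, denom, out=np.zeros_like(common),
                      where=denom > 0)   # simplified matching index
    # Wiring probability: distance penalty times topological affinity.
    score = (D ** eta) * ((match + 1e-6) ** gamma)
    p = score[rows, cols] * (1 - A[rows, cols])  # unconnected pairs only
    p /= p.sum()
    pick = rng.choice(len(p), p=p)
    i, j = rows[pick], cols[pick]
    A[i, j] = A[j, i] = 1.0

print(int(A.sum()) // 2, "edges placed")
```

Small changes to eta and gamma shift how strongly growth favours short connections versus hub-like neighbourhood overlap, which is the sense in which subtle wiring-rule differences can yield divergent developmental trajectories.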
Edge Computing using Spiking Neural Networks
Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback pass do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems like low response latency and continual learning. References: Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65. Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE. Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
Representation transfer and signal denoising through topographic modularity
To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. 
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system's behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
Self-organized formation of discrete grid cell modules from smooth gradients
Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods cluster into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than existing models. 
Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.
Mutation-induced infection waves in diseases like COVID-19
After more than 4 million deaths worldwide, the ongoing vaccination campaign to conquer COVID-19 is now competing with the emergence of increasingly contagious mutations, repeatedly supplanting earlier strains. Given the near-absence of historical examples of the long-term evolution of infectious diseases under similar circumstances, models are crucial for exemplifying possible scenarios. Accordingly, in the present work we systematically generalize the popular susceptible-infected-recovered model to account for mutations leading to repeatedly occurring new strains, which we coarse-grain based on tools from statistical mechanics to derive a model predicting the most likely outcomes. The model predicts that mutations can induce super-exponential growth of infection numbers at early times, self-amplifying into giant infection waves through a positive feedback loop between infection numbers and mutations, and leading to simultaneous infection of the majority of the population. At later stages -- if vaccination progresses too slowly -- mutations can interrupt an ongoing decrease of infection numbers and cause infection revivals, which occur as single waves or even as whole wave trains featuring alternating periods of decreasing and increasing infection numbers. Our results might be useful for discussions regarding the importance of releasing vaccine patents to reduce the risk of mutation-induced infection revivals, but also for coordinating the lifting of measures following a downward trend in infection numbers.
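The feedback loop between infection numbers and mutations can be caricatured in a few lines: a standard SIR integration in which the transmission rate grows with cumulative infections. This is a crude stand-in for the paper's coarse-grained multi-strain model, with invented parameters:

```python
# Toy susceptible-infected-recovered model in which the transmission
# rate grows with cumulative infections, a crude stand-in for
# mutations making the pathogen more contagious; parameters invented.
dt, steps = 0.1, 3000
beta0, gamma, alpha = 0.15, 0.1, 2.0   # base rate, recovery, mutation gain

S, I, R = 0.999, 0.001, 0.0
cum_inf, peak_I = I, I
for _ in range(steps):
    beta = beta0 * (1 + alpha * cum_inf)  # mutation-boosted infectivity
    new_inf = beta * S * I * dt
    rec = gamma * I * dt
    S -= new_inf
    I += new_inf - rec
    R += rec
    cum_inf += new_inf
    peak_I = max(peak_I, I)

print(round(peak_I, 3), round(R, 3))
```

With alpha = 0 this reduces to plain SIR; with alpha > 0 the growing transmission rate steepens the early rise and enlarges the wave, the qualitative signature of mutation-driven self-amplification.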
Reverse engineering Hydra
Hydra is an extraordinary creature. Continuously replacing itself, it can live indefinitely, performing a stable repertoire of reasonably sophisticated behaviors. This remarkable stability under plasticity may be due to the uniform nature of its nervous system, which consists of two apparently noncommunicating nerve net layers. We use modeling to understand how active muscles and biomechanics interact with neural activity to shape Hydra behaviour. We will discuss our findings and thoughts on how this simple nervous system may self-organize to produce purposeful behavior.
Physical Computation in Insect Swarms
Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.
Collective Construction in Natural and Artificial Swarms
Natural systems provide both puzzles to unravel and demonstrations of what's possible. The natural world is full of complex systems of dynamically interchangeable, individually unreliable components that produce effective and reliable outcomes at the group level. A complementary goal to understanding the operation of such systems is that of being able to engineer artifacts that work in a similar way. One notable type of collective behavior is collective construction, epitomized by mound-building termites, which build towering, intricate mounds through the joint activity of millions of independent and limited insects. The artificial counterpart would be swarms of robots designed to build human-relevant structures. I will discuss work on both aspects of the problem, including studies of cues that individual termite workers use to help direct their actions and coordinate colony activity, and development of robot systems that build user-specified structures despite limited information and unpredictable variability in the process. These examples illustrate principles used by the insects and show how they can be applied in systems we create.
Tuning dumb neurons to task processing - via homeostasis
Homeostatic plasticity plays a key role in stabilizing neural network activity. But what is its role in neural information processing? We showed analytically how homeostasis changes collective dynamics and consequently information flow, depending on the input to the network. We then studied how input and homeostasis in a recurrent network of LIF neurons impact information flow and task performance. We showed how we can tune the working point of the network, and found that, contrary to previous assumptions, there is not one optimal working point for a family of tasks; rather, each task may require its own working point.
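The basic logic of homeostatic working-point control can be shown in a minimal sketch (a single threshold unit with made-up numbers, not the recurrent LIF network studied in the talk): a firing threshold adapts until the output rate matches a target, whatever the input strength.

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal homeostatic control sketch: the threshold adapts so that
# the output rate tracks a target, for weak and strong input alike.
target, eta = 0.1, 0.01
final_rates = []
for input_strength in (0.5, 2.0):
    theta, spikes = 0.0, []
    for _ in range(5000):
        drive = input_strength * rng.random()
        spike = float(drive > theta)
        theta += eta * (spike - target)   # homeostatic threshold update
        spikes.append(spike)
    final_rates.append(np.mean(spikes[-1000:]))

print([round(r, 2) for r in final_rates])
```

Changing the target rate moves the network's working point; the talk's finding is that no single such set point is optimal for all tasks.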
Swarms for people
As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.
The Geometry of Decision-Making
Choosing among spatially distributed options is a central challenge for animals, from deciding among alternative potential food sources or refuges, to choosing with whom to associate. Here, using an integrated theoretical and experimental approach (employing immersive Virtual Reality), with both invertebrate and vertebrate models—the fruit fly, desert locust and zebrafish—we consider the recursive interplay between movement and collective vectorial integration in the brain during decision-making regarding options (potential ‘targets’) in space. We reveal that the brain repeatedly breaks multi-choice decisions into a series of abrupt (critical) binary decisions in space-time where organisms switch, spontaneously, from averaging vectorial information among, to suddenly excluding one of, the remaining options. This bifurcation process repeats until only one option—the one ultimately selected—remains. Close to each bifurcation the ‘susceptibility’ of the system exhibits a sharp increase, inevitably causing small differences among the remaining options to become amplified; a property that both comes ‘for free’ and is highly desirable for decision-making. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
The role of the primate prefrontal cortex in inferring the state of the world and predicting change
In an ever-changing environment, uncertainty is omnipresent. To deal with this, organisms have evolved mechanisms that allow them to take advantage of environmental regularities in order to make decisions robustly and adjust their behavior efficiently, thus maximizing their chances of survival. In this talk, I will present behavioral evidence that animals perform model-based state inference to predict environmental state changes and adjust their behavior rapidly, rather than slowly updating choice values. This model-based inference process can be described using Bayesian change-point models. Furthermore, I will show that neural populations in the prefrontal cortex accurately predict behavioral switches, and that the activity of these populations is associated with Bayesian estimates. In addition, we will see that learning leads to the emergence of a high-dimensional representational subspace that can be reused when the animals re-learn a previously learned set of action-value associations. Altogether, these findings highlight the role of the PFC in representing a belief about the current state of the world.
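A minimal version of the Bayesian change-point inference described here can be simulated directly (two hidden states, a known hazard rate, and a simulated observer; all parameter values are illustrative, not the task used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hidden states with different reward probabilities; the observer
# tracks P(state = A) with a known hazard rate and switches its
# choice when the belief crosses 0.5.
p_reward = {"A": 0.8, "B": 0.2}   # P(outcome = 1) under each state
hazard = 0.05                      # per-trial probability of a switch
n_trials = 400

state = "A"
belief_A, choices, states = 0.5, [], []
for t in range(n_trials):
    if rng.random() < hazard:
        state = "A" if state == "B" else "B"
    outcome = rng.random() < p_reward[state]

    # Bayesian update: likelihood of the outcome under each state...
    lik_A = p_reward["A"] if outcome else 1 - p_reward["A"]
    lik_B = p_reward["B"] if outcome else 1 - p_reward["B"]
    post = lik_A * belief_A / (lik_A * belief_A + lik_B * (1 - belief_A))
    # ...then mix in the possibility that the state just switched.
    belief_A = post * (1 - hazard) + (1 - post) * hazard

    choices.append("A" if belief_A > 0.5 else "B")
    states.append(state)

accuracy = np.mean([c == s for c, s in zip(choices, states)])
print(round(accuracy, 2))
```

Because the hazard term lets the belief flip within a few surprising trials, such an observer adjusts abruptly after a change point, unlike an incremental value-update learner.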
Do leader cells drive collective behavior in Dictyostelium discoideum amoeba colonies?
Dictyostelium discoideum (DD) is a fascinating single-celled organism. When nutrients are plentiful, DD cells act as autonomous individuals foraging their local vicinity. At the onset of starvation, a few (<0.1%) cells begin communicating with others by emitting a spike of the chemoattractant cyclic AMP. Nearby cells sense the chemical gradient and respond by moving toward it and emitting a cyclic-AMP spike of their own. Cyclic-AMP activity increases over time, and eventually a spiral wave emerges, attracting hundreds of thousands of cells to an aggregation center. How DD cells go from autonomous individuals to a collective entity has remained an open question for more than 60 years; its answer would shed light on the emergence of multicellular life. Recently, trans-scale imaging has made it possible to sense cyclic-AMP activity at both the cell and colony levels. Using both the images as well as toy simulation models, this research aims to clarify whether the activity at the colony level is in fact initiated by a few cells, which may be deemed "leader" or "pacemaker" cells. In this talk, I will demonstrate the use of information-theoretic techniques to classify leaders and followers based on trajectory data, as well as to infer the domain of interaction of leader cells. We validate the techniques on toy models where leaders and followers are known, and then try to answer the question with real data: do leader cells drive collective behavior in DD colonies?
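The validate-on-a-toy-model strategy can be illustrated with a simple directionality test (lagged cross-correlation, a basic stand-in for the information-theoretic measures such as transfer entropy used in the talk; all signals are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy leader-follower pair: the follower copies the leader's velocity
# with a fixed lag plus noise; directionality is then recovered from
# the peak of the lagged cross-correlation.
n, lag = 2000, 5
drive = np.convolve(rng.normal(size=n + lag + 9), np.ones(10) / 10,
                    mode="valid")            # smooth signal, length n + lag
x = drive[lag:]                              # leader velocity
y = drive[:n] + 0.1 * rng.normal(size=n)     # follower, lag steps behind

def lagged_corr(x, y, k):
    # corr(x[t - k], y[t]) for a single lag k (positive: x leads y)
    if k >= 0:
        return np.corrcoef(x[: len(x) - k], y[k:])[0, 1]
    return np.corrcoef(x[-k:], y[: len(y) + k])[0, 1]

lags = range(-20, 21)
cs = [lagged_corr(x, y, k) for k in lags]
best = list(lags)[int(np.argmax(cs))]
print(best)   # positive: the leader precedes the follower
```

On ground-truth pairs like this one, the inferred lag sign identifies the leader; the same test applied to real trajectories then carries over with known error rates.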
Understanding Perceptual Priors with Massive Online Experiments
One of the most important questions in psychology and neuroscience is understanding how the outside world maps to internal representations. Classical psychophysics approaches to this problem have a number of limitations: they mostly study low-dimensional perceptual spaces, and are constrained in the number and diversity of participants and experiments. As ecologically valid perception is rich, high-dimensional, contextual, and culturally dependent, these impediments severely bias our understanding of perceptual representations. Recent technological advances—the emergence of so-called “Virtual Labs”—can significantly contribute toward overcoming these barriers. Here I present a number of specific strategies that my group has developed in order to probe representations across a number of dimensions. 1) Massive online experiments can significantly increase the number of participants and experiments that can be carried out in a single study, while also significantly diversifying the participant pool. We have developed a platform, PsyNet, that enables “experiments as code,” whereby the orchestration of computer servers, recruiting, compensation of participants, and data management is fully automated, and every experiment can be fully replicated with one command line. I will demonstrate how PsyNet allows us to recruit thousands of participants for each study with a large number of control experimental conditions, significantly increasing our understanding of auditory perception. 2) Virtual-lab methods also enable us to run experiments that are nearly impossible in a traditional lab setting. I will demonstrate our development of adaptive sampling, a set of behavioural methods that combine machine-learning sampling techniques (Markov chain Monte Carlo) with human interactions and allow us to create high-dimensional maps of perceptual representations with unprecedented resolution.
3) Finally, I will demonstrate how the aforementioned methods can be applied to the study of perceptual priors in both audition and vision, with a focus on our work in cross-cultural research, which studies how perceptual priors are influenced by experience and culture in diverse samples of participants from around the world.
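The adaptive-sampling idea of letting human choices drive a Markov chain can be sketched in the spirit of MCMC-with-People-style methods (here the participant is replaced by a simulated subjective distribution, a Gaussian centred at 2.0; this is not the group's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(5)

# On each trial a "participant" chooses between the current stimulus
# and a proposal; the choice acts as the accept/reject step, so the
# chain samples from the participant's subjective distribution.
def simulated_choice(current, proposal):
    # Luce-choice ratio of subjective probabilities (Barker acceptance)
    p = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)
    return rng.random() < p(proposal) / (p(proposal) + p(current))

x, samples = 0.0, []
for t in range(5000):
    proposal = x + rng.normal(scale=0.5)
    if simulated_choice(x, proposal):
        x = proposal
    samples.append(x)

est_mean = np.mean(samples[1000:])   # discard burn-in
print(round(est_mean, 1))
```

Because the Luce-choice rule is a valid (Barker) acceptance function, the chain's stationary distribution is the chooser's subjective distribution, which is what makes choice data usable as an MCMC sampler of perceptual priors.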
The emergence of a ‘V1 like’ structure for soundscapes representing vision in the adult brain in the absence of visual experience
Agency through Physical Lenses
I will offer a broad-brush account of what explains the emergence of agents from a physics perspective, what sorts of conditions have to be in place for them to arise, and the essential features of agents when they are viewed through the lenses of physics. One implication will be a tight link to informational asymmetries associated with the thermodynamic gradient. Another will be a reversal of the direction of explanation from the one that is usually assumed in physical discussions. In an evolved system, while it is true in some sense that the macroscopic behavior is the way it is because of the low-level dynamics, there is another sense in which the low-level dynamics is the way that it is because of the high-level behavior it supports. (More precisely and accurately, the constraints on the configuration of its components that define the system as the kind of system it is are the way they are to exploit the low-level dynamics to produce the emergent behavior.) Another implication will be some insight into what might make human agency special.
Getting to know you: emerging neural representations during face familiarization
The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2) and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were strongly visible after personal familiarization, weaker after media familiarization, and absent after perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.
Evolving Neural Networks
Evolution has shaped neural circuits in a very specific manner, slowly and aimlessly incorporating computational innovations that increased the newly emerged species' chances of surviving and reproducing. The discoveries made by the Evolutionary Developmental (Evo-Devo) biology field during the last decades have been crucial for our understanding of the gradual emergence of such innovations. In turn, Computational Neuroscience practitioners modeling the brain are becoming increasingly aware of the need to build models that incorporate these innovations to replicate the computational strategies used by the brain to solve a given task. The goal of this workshop is to bring together experts from Systems and Computational Neuroscience, Machine Learning and the Evo-Devo field to discuss if and how knowing the evolutionary history of neural circuits can help us understand the way the brain works, as well as the relative importance of learned vs. innate neural mechanisms.
The role of the complement pathway in post-traumatic sleep disruption and epilepsy
While traumatic brain injury (TBI) acutely disrupts the cortex, most TBI-related disabilities reflect secondary injuries that accrue over time. The thalamus is a likely site of secondary damage because of its reciprocal connections with the cortex. Using a mouse model of mild cortical injury that does not directly damage subcortical structures (mTBI), we found a chronic increase in C1q expression specifically in the corticothalamic circuit. Increased C1q expression co-localized with neuron loss and chronic inflammation, and correlated with disruption in sleep spindles and the emergence of epileptic activities. Blocking C1q counteracted these outcomes, suggesting that C1q is a disease modifier in mTBI. Single-nucleus RNA sequencing demonstrated that microglia are the source of thalamic C1q. Since the corticothalamic circuit is important for cognition and sleep, both of which can be impaired by TBI, this circuit could be a new target for treating TBI-related disabilities.
Flocking through complex environments
The spontaneous collective motion of self-propelled agents is ubiquitous in the natural world, and it often occurs in complex environments, be it bacteria and cells migrating through polymeric extracellular matrix or animal herds and human crowds navigating structured terrains. Much is known about flocking dynamics in pristine backgrounds, but how do spatio-temporal heterogeneities in the environment impact such collective self-organization? I will present two model systems, a colloidal active fluid negotiating disordered obstacles and a confined dense bacterial suspension in a viscoelastic medium, as controllable platforms to explore this question and highlight general mechanisms for active self-organization in complex environments. By combining theory and experiment, I will show how flocks on disordered substrates organize into a novel dynamic vortex glass phase, akin to vortex glasses in dirty superconductors, while the presence of viscoelasticity can calm the otherwise turbulent swarming of bacteria, allowing the emergence of a large-scale coherent and even oscillatory vortex when confined on the millimetre scale.
The collective behavior of the clonal raider ant: computations, patterns, and naturalistic behavior
Colonies of ants and other eusocial insects are superorganisms, which perform sophisticated cognitive-like functions at the level of the group. In my talk I will review our efforts to establish the clonal raider ant Ooceraea biroi as a lab model system for the systematic study of the principles underlying collective information processing in ant colonies. I will use results from two separate projects to demonstrate the potential of this model system. In the first, we analyze the foraging behavior of the species, known as group raiding: a swift offensive response of a colony to the detection of potential prey by a scout. Using automated behavioral tracking and detailed analysis, we show that this behavior is closely related to the army ant mass raid, an iconic collective behavior in which hundreds of thousands of ants spontaneously leave the nest to go hunting, and that the evolutionary transition between the two can be explained by a change in colony size alone. In the second project, we study the emergence of a collective sensory response threshold in a colony. The sensory threshold is a fundamental computational primitive, observed across many biological systems. By carefully controlling the sensory environment and the social structure of the colonies, we were able to show that it also appears in a collective context, and that it emerges out of a balance between excitatory and inhibitory interactions between ants. Furthermore, using a mathematical model, we predict that these two interactions can be mapped onto known mechanisms of communication in ants. Finally, I will discuss the opportunities for understanding collective behavior opened up by the development of methods for neuroimaging and neurocontrol of our ants.
Recurrent network dynamics lead to interference in sequential learning
Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven versus autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.
In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: The random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. The former thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
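The reduced fixed-point setting can be sketched in a few lines of code. This is a minimal illustration, not the authors' exact model: learning is restricted to a 2x2 recurrent matrix, the target amplitude (0.7), learning rate, step counts, and the use of finite-difference gradients are all illustrative assumptions. The sketch learns task A (a fixed point aligned with pattern a), then task B (an orthogonal pattern b), and measures how much the task-A loss has degraded.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 1.0]) / np.sqrt(2)    # pattern for task A
b = np.array([1.0, -1.0]) / np.sqrt(2)   # orthogonal pattern for task B

def loss(W, pattern, T=5):
    """Run T steps of the recurrent dynamics x -> tanh(W x), starting from
    the input pattern, and score the distance of the final state from a
    target state aligned with that pattern (amplitude 0.7 is arbitrary)."""
    x = pattern
    for _ in range(T):
        x = np.tanh(W @ x)
    return np.sum((x - 0.7 * pattern) ** 2)

def num_grad(f, W, eps=1e-5):
    """Finite-difference gradient -- adequate for a 2x2 parameter matrix."""
    G = np.zeros_like(W)
    for i in range(2):
        for j in range(2):
            E = np.zeros_like(W)
            E[i, j] = eps
            G[i, j] = (f(W + E) - f(W - E)) / (2 * eps)
    return G

def train(W, pattern, lr=0.1, steps=800):
    for _ in range(steps):
        W = W - lr * num_grad(lambda M: loss(M, pattern), W)
    return W

W0 = 0.5 * rng.standard_normal((2, 2))   # random initial connectivity
W_A = train(W0, a)                        # learn task A first
loss_A_initial = loss(W0, a)
loss_A_learned = loss(W_A, a)
W_AB = train(W_A, b)                      # then learn task B sequentially
loss_A_after_B = loss(W_AB, a)            # any increase = interference
print(loss_A_initial, loss_A_learned, loss_A_after_B)
```

Varying the scale of the random initial connectivity (the 0.5 factor) in such a sketch is one way to probe the claim that stronger random connectivity favors new formation over rotation.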
Anatomical decision-making by cellular collectives: bioelectrical pattern memories, regeneration, and synthetic living organisms
A key question for basic biology and regenerative medicine concerns the way in which evolution exploits physics toward adaptive form and function. While genomes specify the molecular hardware of cells, what algorithms enable cellular collectives to reliably build specific, complex, target morphologies? Our lab studies the way in which all cells, not just neurons, communicate as electrical networks that enable scaling of single-cell properties into collective intelligences that solve problems in anatomical feature space. By learning to read, interpret, and write bioelectrical information in vivo, we have identified some novel controls of growth and form that enable incredible plasticity and robustness in anatomical homeostasis. In this talk, I will describe the fundamental knowledge gaps with respect to anatomical plasticity and pattern control beyond emergence, and discuss our efforts to understand large-scale morphological control circuits. I will show examples in embryogenesis, regeneration, cancer, and synthetic living machines. I will also discuss the implications of this work for not only regenerative medicine, but also for fundamental understanding of the origin of bodyplans and the relationship between genomes and functional anatomy.
All-optical interrogation of developing GABAergic circuits in vivo
The developmental journey of cortical interneurons encounters several activity-dependent milestones. During the early postnatal period in developing mice, GABAergic neurons are transient preferential recipients of thalamic inputs and undergo activity-dependent migration arrest, wiring and programmed cell death. But cortical GABAergic neurons are also specified by very early developmental programs. For example, the earliest-born GABAergic neurons develop into hub cells coordinating spontaneous activity in hippocampal slices. Despite their importance for the emergence of sensory experience, their role in coordinating network dynamics, and the role of activity in their integration into cortical networks, the collective in vivo dynamics of GABAergic neurons during the neonatal period remain unknown. Here, I will present data on the coordinated activity of GABAergic cells in the barrel cortex and hippocampus of non-anesthetized mouse pups, obtained with recently developed all-optical methods to record and manipulate neuronal activity in vivo. I will show that the functional structure of developing GABAergic circuits is remarkably patterned, with segregated assemblies of prospective parvalbumin neurons and highly connected hub cells, both shaped by sensory-dependent processes.
Lab cognition going wild: Field experiments on vervet monkeys
I will present field experiments on vervet monkeys testing physical and social cognition, with a focus on social learning. Our understanding of the emergence of cultural behaviours in animals has advanced significantly through complementary approaches: natural observations and controlled field experiments. Experiments with wild vervet monkeys highlight that monkeys are selective about ‘who’ they learn from socially, and that they will abandon personal foraging preferences in favour of group norms that are new to them. The reported findings highlight the feasibility of studying cognition under field conditions.
A generative network model of neurodevelopment
The emergence of large-scale brain networks, and their continual refinement, represent crucial developmental processes that can drive individual differences in cognition and which are associated with multiple neurodevelopmental conditions. But how does this organization arise, and what mechanisms govern the diversity of these developmental processes? There are many existing descriptive theories, but to date none are computationally formalized. We provide a mathematical framework that specifies the growth of a brain network over developmental time. Within this framework macroscopic brain organization, complete with spatial embedding of its organization, is an emergent property of a generative wiring equation that optimizes its connectivity by renegotiating its biological costs and topological values continuously over development. The rules that govern these iterative wiring properties are controlled by a set of tightly framed parameters, with subtle differences in these parameters steering network growth towards different neurodiverse outcomes. Regional expression of genes associated with the developmental simulations converge on biological processes and cellular components predominantly involved in synaptic signaling, neuronal projection, catabolic intracellular processes and protein transport. Together, this provides a unifying computational framework for conceptualizing the mechanisms and diversity of childhood brain development, capable of integrating different levels of analysis – from genes to cognition. (Pre-print: https://www.biorxiv.org/content/10.1101/2020.08.13.249391v1)
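The kind of generative wiring rule described can be sketched as follows. This is a hedged toy, not the framework from the abstract: it uses one common choice of topological value term (a neighbourhood-overlap "matching" index), and the cost exponent eta, value exponent gamma, network size, and edge budget are all illustrative assumptions. At each step, one edge is wired with probability proportional to (distance)^eta * (topological value)^gamma.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30                                          # number of nodes
pos = rng.random((n, 3))                        # random spatial embedding
D = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
np.fill_diagonal(D, np.inf)                     # no self-connections
A = np.zeros((n, n), dtype=int)                 # adjacency matrix

eta, gamma = -3.0, 0.4                          # cost penalty, value reward

def matching(A):
    """Pairwise neighbourhood overlap (one common 'topological value')."""
    deg = A.sum(1)
    common = A @ A                               # shared neighbours
    denom = deg[:, None] + deg[None, :] - 2 * A
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, common / denom, 0.0)

m_edges = 60
for _ in range(m_edges):
    K = matching(A) + 1e-6                      # small floor avoids zeros
    P = (D ** eta) * (K ** gamma)               # cost-value trade-off
    P[np.tril_indices(n)] = 0                   # undirected: upper triangle
    P[A == 1] = 0                               # no duplicate edges
    flat = P.ravel() / P.sum()
    idx = rng.choice(n * n, p=flat)             # sample one new edge
    i, j = divmod(idx, n)
    A[i, j] = A[j, i] = 1

print(A.sum() // 2)                             # number of edges wired
```

In frameworks of this type, sweeping eta and gamma over a grid and comparing the simulated networks to observed connectomes is what allows subtly different parameters to be read as steering growth toward different outcomes.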
The emergence and plasticity of visual domain organization in the cerebral hemispheres
Emergence of long time scales in data-driven network models of zebrafish activity
How can neural networks exhibit persistent activity on time scales much longer than allowed by cellular properties? We address this question in the context of larval zebrafish, a model vertebrate that is accessible to brain-scale neuronal recording and high-throughput behavioral studies. We study in particular the dynamics of a bilaterally distributed circuit, the so-called ARTR, comprising hundreds of neurons. The ARTR exhibits slow antiphasic alternations between its left and right subpopulations, which can be modulated by the water temperature and drive the coordinated orientation of swim bouts, thus organizing the fish's spatial exploration. To elucidate the mechanism leading to the slow self-oscillation, we train a network graphical model (Ising) on neural recordings. Sampling the inferred model allows us to generate synthetic oscillatory activity whose features correctly capture the observed dynamics. A mean-field analysis of the inferred model reveals the existence of several phases; activated crossing of the barriers between those phases controls the long time scales present in the network oscillations. We show in particular how the barrier heights and the nature of the phases vary with the water temperature.
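The sampling step can be sketched with a toy Ising model. This is an illustration of the general approach, not the inferred zebrafish model: the couplings J, fields h, population sizes, and chain length are assumptions, hand-tuned so that two subpopulations (standing in for the left and right halves of the ARTR) excite themselves and inhibit each other. Metropolis sampling of such a model generates a synthetic left-right activity signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40                                      # 20 "left" + 20 "right" units
half = n // 2
J = np.zeros((n, n))
J[:half, :half] = J[half:, half:] = 0.12    # excitation within each side
J[:half, half:] = J[half:, :half] = -0.08   # inhibition across sides
np.fill_diagonal(J, 0)
h = -0.1 * np.ones(n)                       # uniform bias toward silence

s = rng.choice([-1, 1], size=n)             # spin state (+1 = active)
trace = []
for t in range(20000):
    i = rng.integers(n)
    # Energy change of flipping s_i, for E = -sum J_ij s_i s_j - sum h_i s_i
    dE = 2 * s[i] * (J[i] @ s + h[i])
    if dE < 0 or rng.random() < np.exp(-dE):  # Metropolis acceptance
        s[i] = -s[i]
    trace.append(s[:half].mean() - s[half:].mean())  # left-right signal

trace = np.array(trace)
print(trace.min(), trace.max())
```

In the actual pipeline, J and h would be inferred from neural recordings rather than set by hand, and the dwell times of the sampled left-right signal in each state give the long time scales of interest.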
Analogy as a Catalyst for Cumulative Cultural Evolution
Analogies, broadly defined, map novel concepts onto familiar concepts, making them essential for perception, reasoning, and communication. We argue that analogy-building served a critical role in the evolution of cumulative culture, by allowing humans to learn and transmit complex behavioural sequences that would otherwise be too cognitively demanding or opaque to acquire. The emergence of a protolanguage consisting of simple labels would have provided early humans with the cognitive tools to build explicit analogies and to communicate them to others. This focus on analogy-building can shed new light on the coevolution of cognition and culture, and addresses recent calls for better integration of the field of cultural evolution with cognitive science. This talk will address what cumulative cultural evolution is, how we define analogy-building, how analogy-building applies to cumulative cultural evolution, how analogy-building fits into language evolution, and the implications of analogy-building for causal understanding and cognitive evolution.
Emergence of Synfire Chains in Functional Multi-Layer Spiking Neural Networks
Bernstein Conference 2024
Quantitative modeling of the emergence of macroscopic grid-like representations
Bernstein Conference 2024
Emergence of functional circuits in the absence of neural activity
COSYNE 2022
Emergence of convolutional structure in neural circuits
COSYNE 2022
The emergence of fixed points in interlimb coordination underlies the learning of novel gaits in mice
COSYNE 2022
The emergence of gamma oscillations as a signature of gain control during context integration
COSYNE 2022
Emergence of modular patterned activity in developing cortex through intracortical network interactions
COSYNE 2022
Emergence of time persistence in an interpretable data-driven neural network model
COSYNE 2022
Emergence of an orientation map in the mouse superior colliculus from stage III retinal waves
COSYNE 2022
Spontaneous emergence of magnitude comparison units in untrained deep neural networks
COSYNE 2022
Language emergence in reinforcement learning agents performing navigational tasks
COSYNE 2023
Emergence of heterogeneous weight-distributions in functionally similar neurons
COSYNE 2025
Emergence of robust persistent activity in premotor cortex across learning
COSYNE 2025
A global learning rate allows rapid emergence of near-optimal foraging across many options
COSYNE 2025
Rapid emergence of latent knowledge in the sensory cortex drives learning
COSYNE 2025
Cell type emergence in the developing medial entorhinal cortex is regulated by Bcl11b
FENS Forum 2024
Emergence of cerebellar spontaneous activity patterns during embryonic and postnatal development
FENS Forum 2024
Emergence of corpora amylacea in the aging human brain and in Alzheimer’s disease
FENS Forum 2024
Emergence of different spatial cognitive maps in CA1 for rats performing an episodic memory task using egocentric and allocentric navigational strategies
FENS Forum 2024
Emergence of NMDA-spikes: Unraveling network dynamics in pyramidal neurons
FENS Forum 2024
Re-emergence of T lymphocytes-mediated synaptopathy in progressive multiple sclerosis
FENS Forum 2024
Slow emergence of long-term stable spatial codes in the mouse dentate gyrus
FENS Forum 2024