Experimental
Sainsbury Wellcome Centre, UCL
Applications are now open for the 2023 intake to our PhD Programme at the Sainsbury Wellcome Centre (SWC) at University College London (UCL). This fully funded 4-year programme offers students:
• a comprehensive introduction to theoretical and systems neuroscience
• intensive training in experimental techniques, including imaging, physiology, molecular, and behavioural methods in systems neuroscience
• a supportive and collaborative environment, with teaching by SWC faculty together with colleagues at the Gatsby Computational Neuroscience Unit and other affiliated institutions
Based in central London, which has the highest concentration of neuroscience research in the world, SWC students are fully funded and receive an annual stipend of £24,278, as well as funds to attend international courses and meetings. We also cover tuition fees for both home and international students. The SWC PhD programme is an opportunity to receive world-class training as a neuroscientist and launch an exciting career in academia or industry. Apply to join our pool of exceptional students from around the globe. More information on the SWC PhD programme, and details on how to apply, can be found on the website: https://www.sainsburywellcome.org/web/content/neuroscience-phd-programme If you have any queries about the SWC PhD programme or the application process, please contact us: SWC-PhDprogramme@ucl.ac.uk.
Kenji Doya
The Okinawa Institute of Science and Technology (OIST) is calling for three types of Visiting Scholars: 1) Theoretical Sciences Visiting Program (TSVP): visits of 3–12 months, research on theoretical projects. 2) Experimental Visiting Scholar Program (EVSP): visits of 3–12 months by senior faculty, interdisciplinary projects preferred. 3) Domestic Visitors Program: visits of up to 3 months, possibly as multiple visits, in any scientific field. OIST is also seeking applications for up to 10 open faculty positions.
Susan Fischer
The 'Developmental Computational Psychiatry' lab and the W3 professorship 'Computational Psychiatry', led by Tobias Hauser at the University of Tübingen (Germany), are currently hiring new postdocs. The focus of the lab is to better understand the computational and neural mechanisms underlying decision making and learning, and how these processes go awry in patients with mental illnesses. The successful candidates will have the chance to work in a highly dynamic and inspiring environment and to collaborate closely with Prof Peter Dayan and the Max Planck Institute for Biological Cybernetics. Concretely, we are looking for the following candidates: a postdoc with an experimental & neuroimaging background, and a postdoc with a computational modelling background. More information about the positions can be found here: https://devcompsy.org/join-the-lab/. Interested candidates are encouraged to reach out to Tobias Hauser directly to informally discuss the positions.
Scaling Up Bioimaging with Microfluidic Chips
Explore how microfluidic chips can enhance your imaging experiments by increasing control, throughput, or flexibility. In this remote, personalized workshop, participants will receive expert guidance, support and chips to run tests on their own microscopes.
The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany
Applications are invited for the third edition of our Systems Vision Science (SVS) summer school, held since 2023 and designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises, and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20-22, with additional invited and contributed presentations and posters. A call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend and are welcome to submit contributions to the symposium.
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions carry nearly no decodable information, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other's dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, I will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
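For readers unfamiliar with the idea of dimension-specific coding, the following toy sketch (entirely simulated data and hypothetical variable names; it is not the analysis used in this work) illustrates how decodable information can be confined to one dimension of population activity while an orthogonal dimension carries essentially none.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)               # two behavioral conditions (0 or 1)
coding_dir = rng.normal(size=n_neurons)
coding_dir /= np.linalg.norm(coding_dir)            # hypothetical "informative" dimension
activity = rng.normal(size=(n_trials, n_neurons)) + np.outer(labels - 0.5, 3.0 * coding_dir)

ortho = rng.normal(size=n_neurons)
ortho -= (ortho @ coding_dir) * coding_dir          # a dimension orthogonal to the coding one
ortho /= np.linalg.norm(ortho)

def accuracy_from_projection(projection, labels):
    """Decode the condition from a 1D projection with a simple mean-split threshold."""
    guess = projection > projection.mean()
    return max(np.mean(guess == labels), np.mean(~guess == labels))

print("decoding from the informative dimension:", accuracy_from_projection(activity @ coding_dir, labels))
print("decoding from an orthogonal dimension:  ", accuracy_from_projection(activity @ ortho, labels))
```

In this toy setting, decoding succeeds only along the coding dimension; the claim in the talk is that, in the data, the robust (perturbation-resistant) dimension is exactly the one that carries the decodable information.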
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs (simplified, linear representations of motion) to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion, frontal delta activity, which was elevated for videos and deepfakes but not for photos or dynamic morphs.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime, and what is the utility of the resultant neural representations? This talk will explore the role of dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting that DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs. the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model, the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories and to the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
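As a loose illustration of the prediction-error logic described above (a hypothetical toy, not the authors' model), the sketch below keeps a slowly updated expectation across episodes and stores an episode verbatim only when it is poorly predicted:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, lr, store_threshold = 64, 0.05, 15.0
expectation = np.zeros(dim)      # "integration stream": running expectation across episodes
episodic_store = []              # "separation stream": verbatim traces of surprising episodes

for episode in range(200):
    x = rng.normal(loc=1.0, scale=1.0, size=dim)        # a typical episode
    if episode % 50 == 49:
        x = rng.normal(loc=-2.0, scale=1.0, size=dim)    # an occasional atypical episode
    error = np.linalg.norm(x - expectation)              # prediction error of the expectation
    if error > store_threshold:
        episodic_store.append(x)                         # poorly predicted -> store verbatim
    expectation += lr * (x - expectation)                # slowly refine the map-like expectation

print("episodes stored verbatim:", len(episodic_store))  # only the handful of atypical episodes
```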
The Promise of MitoQuinone Therapeutics in Experimental TBI
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
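The ring-attractor idea can be conveyed with a deliberately minimal rate-model sketch (a generic, textbook-style construction, not the specific models discussed in the webinar): a bump of activity on a ring stores heading, and an asymmetric connectivity component scaled by angular velocity rotates the bump, so the bump position integrates angular velocity over time.

```python
import numpy as np

n = 120
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
W_sym = (-1.0 + 3.0 * np.cos(theta[:, None] - theta[None, :])) / n   # local excitation, broad inhibition
W_rot = np.sin(theta[:, None] - theta[None, :]) / n                  # rotation-generating component

r = np.maximum(np.cos(theta - np.pi), 0.0)       # initial activity bump at heading = pi
decoded = []
for step in range(400):
    ang_vel = 1.0 if step < 200 else -1.0        # angular-velocity input (arbitrary units)
    drive = W_sym @ r + 0.2 * ang_vel * (W_rot @ r)
    r = np.maximum(drive, 0.0)
    r /= np.linalg.norm(r) + 1e-12               # keep total activity bounded
    decoded.append(np.angle(np.sum(r * np.exp(1j * theta))))   # population-vector readout

print("decoded heading at start / after +velocity / at end (rad):",
      round(decoded[0], 2), round(decoded[199], 2), round(decoded[-1], 2))
```

The bump drifts at a rate set by the velocity input and approximately returns to its starting position once the velocity reverses, which is the essence of angular-velocity integration on a one-dimensional ring manifold.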
Virtual and experimental approaches to the pathogenicity of SynGAP1 missense mutations
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Feedback-induced dispositional changes in risk preferences
Contrary to the original normative decision-making standpoint, empirical studies have repeatedly reported that risk preferences are affected by the disclosure of choice outcomes (feedback). Although no consensus has yet emerged regarding the properties and mechanisms of this effect, a widespread and intuitive hypothesis is that repeated feedback affects risk preferences by means of a learning effect, which alters the representation of subjective probabilities. Here, we ran a series of seven experiments (N = 538), tailored to decipher the effects of feedback on risk preferences. Our results indicate that the presence of feedback consistently increases risk-taking, even when the risky option is economically less advantageous. Crucially, risk-taking increases just after the instructions, before participants experience any feedback. These results challenge the learning account, and advocate for a dispositional effect, induced by the mere anticipation of feedback information. Epistemic curiosity and regret avoidance may drive this effect in partial and complete feedback conditions, respectively.
Optogenetic control of Nodal signaling patterns
Embryos issue instructions to their cells in the form of patterns of signaling activity. Within these patterns, the distribution of signaling in time and space directs the fate of embryonic cells. Tools to perturb developmental signaling with high resolution in space and time can help reveal how these patterns are decoded to make appropriate fate decisions. In this talk, I will present new optogenetic reagents and an experimental pipeline for creating designer Nodal signaling patterns in live zebrafish embryos. Our improved optoNodal reagents eliminate dark activity and improve response kinetics, without sacrificing dynamic range. We adapted an ultra-widefield microscopy platform for parallel light patterning in up to 36 embryos and demonstrated precise spatial control over Nodal signaling activity and downstream gene expression. Using this system, we demonstrate that patterned Nodal activation can initiate specification and internalization movements of endodermal precursors. Further, we used patterned illumination to generate synthetic signaling patterns in Nodal signaling mutants, rescuing several characteristic developmental defects. This study establishes an experimental toolkit for systematic exploration of Nodal signaling patterns in live embryos.
Experimental research in patients with migraine
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed, irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts into visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanisms of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Distinctive features of experiential time: Duration, speed and event density
William James's use of "time in passing" and "stream of thoughts" may be two sides of the same coin that emerge from the brain segmenting the continuous flow of information into discrete events. Building on that idea, we investigated how the content of a realistic scene impacts two distinct temporal experiences: felt duration and the felt speed of the passage of time. I will present the results of an online study in which we used a well-established experimental paradigm, the temporal bisection task, which we extended to passage-of-time judgments. A total of 164 participants classified seconds-long videos of naturalistic scenes as short or long (duration), or as slow or fast (passage of time). Videos contained a varying number and type of events. We found that a large number of events lengthened subjective duration and accelerated the felt passage of time. Surprisingly, participants were also faster at estimating their felt passage of time than at estimating duration. The perception of duration depended heavily on objective duration, whereas the felt passage of time scaled with the rate of change. Altogether, our results support a possible dissociation of the mechanisms underlying the two temporal experiences.
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' perception of time. Here, we investigated how such time manipulation techniques affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimations of the presented scenes' perceived durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated compared to that of compressed and real-time scenes, as was the duration of real-time scenes compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found that scene type and editing technique interacted in their effects on time estimates, depending on the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration. Our findings are discussed by considering current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in brain-heart interactions. Our recent experimental evidence from patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients with no behavioral signs of consciousness. These findings support hypotheses suggesting that visceral activity is involved in the neurobiology of consciousness and add to the existing evidence in healthy participants that neural responses to heartbeats reveal perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These methodological developments for analyzing brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, uncovering that peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
Blood-brain barrier dysfunction in epilepsy: Time for translation
The neurovascular unit (NVU) consists of cerebral blood vessels, neurons, astrocytes, microglia, and pericytes. It plays a vital role in regulating blood flow and ensuring the proper functioning of neural circuits. Among other mechanisms, this is made possible by the blood-brain barrier (BBB), which acts as both a physical and a functional barrier. Previous studies have shown that dysfunction of the BBB is common in most neurological disorders and is associated with neural dysfunction. Our studies have demonstrated that BBB dysfunction results in the transformation of astrocytes through transforming growth factor beta (TGFβ) signaling. This leads to activation of the innate neuroinflammatory system, changes in the extracellular matrix, and pathological plasticity. These changes ultimately result in dysfunction of the cortical circuit, a lower seizure threshold, and spontaneous seizures. Blocking TGFβ signaling and its associated pro-inflammatory pathway can prevent this cascade of events, reduce neuroinflammation, repair BBB dysfunction, and prevent post-injury epilepsy, as shown in rodent models. To further understand and assess BBB integrity in human epilepsy, we developed a novel imaging technique that quantitatively measures BBB permeability. Our findings have confirmed that BBB dysfunction is common in patients with drug-resistant epilepsy and can assist in identifying the ictal-onset zone prior to surgery. Clinical studies are ongoing to explore the potential of targeting BBB dysfunction as a novel treatment approach and to investigate its role in drug resistance, the spread of seizures, and comorbidities associated with epilepsy.
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
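To make the controller framing concrete, here is a deliberately simple sketch (a conceptual illustration only; it is not the data-driven control framework from the talk): a single "neuron" uses a proportional-integral output rule to steer its input, generated by an environment whose dynamics it does not know, toward a target level.

```python
import numpy as np

a_env, b_env = 0.9, 0.5     # environment dynamics x_{t+1} = a*x_t + b*u_t + noise (unknown to the neuron)
target = 2.0                # the input level the neuron "aims" to receive
kp, ki = 1.0, 0.1           # proportional and integral gains of the neuron's output rule
x, integral = 0.0, 0.0
rng = np.random.default_rng(0)
history = []

for t in range(200):
    error = target - x                  # mismatch between desired and actual input
    integral += error
    u = kp * error + ki * integral      # the neuron's output acts back on its environment
    x = a_env * x + b_env * u + 0.05 * rng.standard_normal()
    history.append(x)

print("mean input over the last 50 steps:", round(float(np.mean(history[-50:])), 2))  # settles near the target
```

The point of the sketch is only that closed-loop output can shape future input; the framework described in the talk replaces the hand-tuned gains with a data-driven rule that does not require knowing the environment's dynamics.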
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Piecing together the puzzle of emotional consciousness
Conscious emotional experiences are very rich in their nature, and can encompass anything ranging from the most intense panic when facing immediate threat, to the overwhelming love felt when meeting your newborn. It is then no surprise that capturing all aspects of emotional consciousness, such as intensity, valence, and bodily responses, into one theory has become the topic of much debate. Key questions in the field concern how we can actually measure emotions and which type of experiments can help us distill the neural correlates of emotional consciousness. In this talk I will give a brief overview of theories of emotional consciousness and where they disagree, after which I will dive into the evidence proposed to support these theories. Along the way I will discuss to what extent studying emotional consciousness is ‘special’ and will suggest several tools and experimental contrasts we have at our disposal to further our understanding on this intriguing topic.
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajandran and Schupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
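For readers who want to see the generic signal-processing step behind an instantaneous-frequency measure, the sketch below (simulated data and arbitrary parameters; not the full ERFA pipeline) band-passes a component around the stimulation rate, takes the analytic signal via the Hilbert transform, and differentiates the unwrapped phase:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
# Simulated "entrained" component: a 2 Hz oscillation that speeds up to 2.2 Hz half-way
# through, as it might when adjusting to a tempo change of the metronome.
true_freq = np.where(t < 10, 2.0, 2.2)
component = np.sin(2 * np.pi * np.cumsum(true_freq) / fs) + 0.3 * rng.standard_normal(t.size)

b, a = butter(4, [1.0, 4.0], btype="bandpass", fs=fs)   # isolate the band around the beat rate
narrow = filtfilt(b, a, component)
phase = np.unwrap(np.angle(hilbert(narrow)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)           # instantaneous frequency in Hz

print("median frequency before vs. after the tempo change:",
      np.median(inst_freq[t[:-1] < 10]), np.median(inst_freq[t[:-1] >= 10]))
```

In this simulation the estimated instantaneous frequency tracks the programmed speed-up, which is the kind of adaptive frequency change the ERFA paradigm is designed to detect in the entrained EEG component.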
Bio-realistic multiscale modeling of cortical circuits
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data: distribution and morpho-electric properties of different neuron types in V1.
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).
Title: SwiFT: Swin 4D fMRI Transformer
Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential for facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.
Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha's lab at Seoul National University.
Paper link: https://arxiv.org/abs/2307.05916
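The core idea of windowed self-attention over 4D fMRI can be sketched in a few lines of PyTorch (the window sizes, tensor shapes, and helper function below are illustrative assumptions, not the paper's implementation):

```python
import torch

def window_partition_4d(x, window):
    """x: (B, D, H, W, T, C) patch-embedded fMRI; window: (wd, wh, ww, wt)."""
    B, D, H, W, T, C = x.shape
    wd, wh, ww, wt = window
    x = x.view(B, D // wd, wd, H // wh, wh, W // ww, ww, T // wt, wt, C)
    # Gather each local 4D window into one token sequence for self-attention.
    x = x.permute(0, 1, 3, 5, 7, 2, 4, 6, 8, 9).contiguous()
    return x.view(-1, wd * wh * ww * wt, C)   # (num_windows * B, tokens_per_window, C)

x = torch.randn(2, 8, 8, 8, 4, 32)            # batch of patch embeddings: (B, D, H, W, T, C)
windows = window_partition_4d(x, (4, 4, 4, 2))
attn = torch.nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
out, _ = attn(windows, windows, windows)      # self-attention within each local 4D window
print(windows.shape, out.shape)
```

Restricting attention to local 4D windows is what keeps the memory and compute costs manageable for volumetric time series, compared with attending over every voxel-time token at once.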
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when it is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of the learning of WM orthogonalization, and of its underlying mechanisms, by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association (DPA) task with an inner Go/NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap between neural activity and the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both the sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of the sample and distractor WM subspaces and (2) the orthogonalization of each subspace with respect to irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through photoinhibition of ACC-to-PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring-attractor regime. We tested signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines the network dynamics underlying the process of learning to shield WM representations from distracting tasks.
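The "angular distance between subspaces" mentioned above is typically quantified with principal angles; the sketch below shows the standard linear-algebra computation on hypothetical subspaces (a generic illustration, not necessarily the exact analysis used in the study):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n_neurons, k = 100, 3

# Hypothetical coding subspaces: each matrix's columns span one subspace (e.g., the top
# principal components of trial-averaged activity for the sample and distractor epochs).
sample_subspace = rng.normal(size=(n_neurons, k))
shared = sample_subspace @ rng.normal(size=(k, k))
distractor_subspace = 0.5 * shared + rng.normal(size=(n_neurons, k))   # partially overlapping

angles = np.degrees(subspace_angles(sample_subspace, distractor_subspace))
print("principal angles between subspaces (deg):", np.round(angles, 1))
# Angles near 90 deg indicate well-separated (orthogonal) subspaces; angles near 0 deg
# indicate overlap, so tracking them across training quantifies orthogonalization.
```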
Brain Connectivity Workshop
Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.
Self as Processes (BACN Mid-career Prize Lecture 2023)
An understanding of the self helps explain not only human thoughts, feelings, and attitudes but also many aspects of everyday behaviour. This talk focuses on one viewpoint: the self as processes. This viewpoint emphasizes the dynamics of the self, which best connect with the development of the self over time and with its realist orientation. We are combining psychological experiments and data mining to understand the stability and adaptability of the self across various populations. In this talk, I draw on evidence from experimental psychology, cognitive neuroscience, and machine learning approaches to demonstrate why and how self-association affects cognition and how it is modulated by various social experiences and situational factors.
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display that is linked with ongoing brain activity and thus allows self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying a "double-blind" experimental design in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible? Or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Fortunately, methods have been developed which employ neurofeedback implicitly, where the subject is claimed to have no awareness of the experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, phenomena long held to be wholly unconscious, such as dreams or subliminal perception, have been overturned by more sensitive measures which show that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies used to alter their cognition.
Bernstein Student Workshop Series
The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.
Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning that they signal the difference between expected future rewards and actual rewards. The prominence of the TD theory arises from the observation that the firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. In order to show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
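For contrast with FLEX, the following minimal example implements the textbook TD(0) setup with a fixed temporal basis, reproducing the classic result that the reward prediction error migrates from reward time to cue time over training (all parameters are arbitrary illustrations, not values from this work):

```python
import numpy as np

T, gamma, alpha = 12, 0.98, 0.1
reward_time = 10                 # reward is delivered at state 10; the cue arrives at state 1
V = np.zeros(T + 1)              # V[0] is the pre-cue state: it stays 0 because the cue
                                 # arrives unpredictably and so cannot itself be predicted

for trial in range(500):
    delta = np.zeros(T)
    for t in range(T):
        r = 1.0 if t + 1 == reward_time else 0.0
        delta[t] = r + gamma * V[t + 1] - V[t]      # TD error at the t -> t+1 transition
        if t > 0:                                   # never update the pre-cue state
            V[t] += alpha * delta[t]
    if trial in (0, 499):
        print(f"trial {trial:3d}: RPE at cue = {delta[0]:.2f}, "
              f"RPE at reward = {delta[reward_time - 1]:.2f}")
```

Early in training the RPE appears at reward delivery; after training it appears at the cue, the signature usually compared to dopaminergic firing. The abstract's argument is that this construction presupposes a fixed chain of time-stamped states for each stimulus, which FLEX replaces with learned, feature-specific representations of time.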
Computational models of spinal locomotor circuitry
To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
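A classic building block of such models is the half-center oscillator; the sketch below (a generic toy, not the specific circuits presented in the talk) shows how two mutually inhibiting populations with slow adaptation produce alternating, flexor/extensor-like bursts. Richer models of this kind add afferent feedback and descending drive to capture speed- and gait-dependent behavior.

```python
import numpy as np

dt, t_max = 1e-3, 5.0
tau, tau_a = 0.05, 0.5        # fast rate dynamics, slow adaptation
g_inh, g_a, drive = 2.0, 2.0, 1.0

steps = int(t_max / dt)
r = np.array([0.6, 0.1])      # firing rates of the two half-centers
a = np.zeros(2)               # adaptation variables
rates = np.zeros((steps, 2))
for i in range(steps):
    inp = drive - g_inh * r[::-1] - g_a * a      # each side inhibits the other
    r += dt / tau * (-r + np.maximum(inp, 0.0))
    a += dt / tau_a * (-a + r)                   # adaptation builds during a burst
    rates[i] = r

# Count switches in which side is dominant to confirm rhythmic alternation.
dominant = rates[:, 0] > rates[:, 1]
switches = int(np.sum(np.diff(dominant.astype(int)) != 0))
print(f"{switches} flexor/extensor alternations in {t_max:.0f} s")
```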
Manipulating single-unit theta phase-locking with PhaSER: An open-source tool for real-time phase estimation and manipulation
Zoe has developed PhaSER, an open-source tool that allows her to perform real-time oscillatory phase estimation and apply optogenetic manipulations at precise phases of hippocampal theta during high-density electrophysiological recordings in head-fixed mice while they navigate a virtual environment. The precise timing of single-unit spiking relative to network-wide oscillations (i.e., phase locking) has long been thought to maintain excitatory-inhibitory homeostasis and coordinate cognitive processes, but due to intense experimental demands, the causal influence of this phenomenon has never been determined. Thus, we developed PhaSER (Phase-locked Stimulation to Endogenous Rhythms), a tool which allows the user to explore the temporal relationship between single-unit spiking and ongoing oscillatory activity.
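The real-time phase-estimation problem PhaSER addresses can be illustrated with a naive sketch (simulated data and a generic causal-filter-plus-Hilbert approach; this is not PhaSER's actual algorithm):

```python
import numpy as np
from scipy.signal import butter, lfilter, hilbert

fs, target_phase, tolerance = 1000.0, np.pi, 0.2       # e.g., stimulate near the theta trough
b, a = butter(2, [6.0, 10.0], btype="bandpass", fs=fs)  # theta band

def phase_of_buffer(buffer):
    """Estimate the theta phase at the most recent sample of a sliding buffer."""
    theta = lfilter(b, a, buffer)                 # causal filtering, as required online
    return np.angle(hilbert(theta))[-1]           # edge effects at the buffer end are a known
                                                  # limitation of this naive version

# Simulated stream: an 8 Hz "theta" rhythm plus noise, processed in 512-sample buffers.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)
triggers = []
for end in range(512, t.size, 10):                # new phase estimate every 10 ms
    phase = phase_of_buffer(lfp[end - 512:end])
    if abs(np.angle(np.exp(1j * (phase - target_phase)))) < tolerance:
        triggers.append(t[end - 1])               # an optogenetic pulse would be triggered here

print(f"{len(triggers)} trigger events in 5 s of simulated data")
```

A real closed-loop system must also compensate for filter and hardware latencies and for edge effects in the phase estimate, which is precisely the engineering problem tools like PhaSER are built to solve.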
Quasicriticality and the quest for a framework of neuronal dynamics
Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We'll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We'll see that simulations and experimental data confirm these predictions and describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
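The branching intuition behind criticality can be captured in a few lines (a generic branching-process toy, not the quasicriticality model itself): with branching ratio below, at, or above one, activity is damped, roughly preserved, or amplified.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(sigma, n_steps=50, start=100):
    """Each active unit triggers a Poisson-distributed number of units (mean sigma) next step."""
    active = start
    for _ in range(n_steps):
        active = int(rng.poisson(sigma, size=active).sum()) if active > 0 else 0
    return active

for sigma in (0.9, 1.0, 1.1):
    print(f"branching ratio {sigma}: activity after 50 steps = {simulate(sigma)}")
```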
Epigenetic rewiring in Schinzel-Giedion syndrome
During life, a variety of specialized cells arise to ensure the correct and timely function of tissues and organs. Regulation of chromatin in defining specialized genomic regions (e.g., enhancers) plays a key role in developmental transitions from progenitors into cell lineages. These enhancers, properly topologically positioned in 3D space, ultimately guide transcriptional programs. It is becoming clear that several pathologies converge on differential enhancer usage with respect to physiological situations. However, why some regulatory regions are physiologically preferred, while others can emerge in certain conditions, including alternative fate decisions or diseases, remains obscure. Schinzel-Giedion syndrome (SGS) is a rare disease with symptoms such as severe developmental delay, congenital malformations, progressive brain atrophy, intractable seizures, and infantile death. SGS is caused by mutations in the SETBP1 gene that result in its accumulation, which in turn leads to the downstream accumulation of SET. The oncoprotein SET has been found to be part of the histone chaperone complex INHAT, which blocks the activity of histone acetyltransferases, suggesting that SGS may (i) represent a natural model of alternative chromatin regulation and (ii) offer the chance to study downstream (mal)adaptive mechanisms. I will present our work on the characterization of SGS in appropriate experimental models, including iPSC-derived cultures and mouse models.
Computational models and experimental methods for the human cornea
The eye is a multi-component biological system in which mechanics, optics, transport phenomena, and chemical reactions are tightly interlaced, characterized by the typical biological variability in sizes and material properties. The eye's response to external actions is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics involved and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well-documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of a single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents important support for the characterization of corneal tissues, especially in pathological conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.
Beyond Volition
Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held to be essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich our scientific understanding. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives volition a key role in mediating between understanding and planning on the one hand, and instrumental action on the other. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for the organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition can relate to individual autonomy and responsibility.
Assigning credit through the “other” connectome
Learning in neural networks requires assigning the right values to anywhere from thousands to trillions of individual connections, so that the network as a whole produces the desired behavior. Neuroscientists have gained insights into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies. These studies have suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling that is being revealed with ever greater fidelity by ongoing molecular and genomics studies. This is the set of modulatory pathways local to a given circuit, which form an intriguing second type of connectome overlaid on top of the synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.
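As a concrete, if highly simplified, illustration of two of the ingredients named above, the sketch below combines a Hebbian eligibility trace with a scalar modulatory signal in a three-factor update. The network, task, and modulator are hypothetical placeholders, not the local modulatory-connectome model of the talk.

import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 50, 10
W = 0.1 * rng.standard_normal((n_post, n_pre))   # synaptic weights
E = np.zeros_like(W)                             # eligibility traces
tau_e, lr = 20.0, 1e-3                           # trace time constant, learning rate

for t in range(1000):
    pre = (rng.random(n_pre) < 0.1).astype(float)    # presynaptic spikes
    post = (W @ pre > 0.5).astype(float)             # crude postsynaptic activity
    # Eligibility trace: decaying memory of pre/post coincidences.
    E += (-E / tau_e) + np.outer(post, pre)
    # Modulatory signal (e.g. a local neuromodulator level); here just random.
    m = rng.standard_normal()
    # Three-factor update: weight change = modulator * eligibility.
    W += lr * m * E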
Bernstein Student Workshop Series
The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise across the network. Unlike in conventional workshops, a talented junior scientist first gives a tutorial on a specific theoretical or experimental technique and then presents their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. By combining a technical tutorial with a research talk, the series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including overlapping place fields during behavior. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could be partially explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
Dissociating learning-induced effects of meaning and familiarity in visual working memory for Chinese characters
Visual working memory (VWM) is limited in capacity, but memorizing meaningful objects may mitigate this limitation. However, meaningless and meaningful stimuli usually differ perceptually, and an object’s association with meaning is typically already established before the actual experiment. We strictly controlled for these potential confounds by asking observers (N=45) to actively learn associations of (initially) meaningless objects. To this end, a change detection task presented Chinese characters, which were meaningless to our observers. Subsequently, half of the characters were consistently paired with pictures of animals. Then, the initial change detection task was repeated. The results revealed enhanced VWM performance after learning, in particular for meaning-associated characters (though not quite reaching the accuracy level attained by N=20 native Chinese observers). These results thus provide direct experimental evidence that the short-term retention of objects benefits from active learning of an object’s association with meaning in long-term memory.
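The abstract reports accuracy, so the exact capacity measure is not specified; for illustration, a common choice for single-probe change detection is Cowan's K, sketched below with hypothetical trial counts.

# Cowan's K, a standard capacity estimate for single-probe change detection:
# K = set_size * (hit_rate - false_alarm_rate). Illustrative only; the counts
# below are made up and the study's own analysis may differ.
def cowans_k(hits, misses, false_alarms, correct_rejections, set_size):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical counts for a set size of 4:
print(cowans_k(hits=70, misses=30, false_alarms=20, correct_rejections=80, set_size=4))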
Cognition in the Wild
What do nonhuman primates know about each other and their social environment, how do they allocate their attention, and what are the functional consequences of social decisions in natural settings? Addressing these questions is crucial to home in on the co-evolution of cognition, social behaviour and communication, and ultimately the evolution of intelligence in the primate order. I will present results from field experimental and observational studies on free-ranging baboons, which tap into the cognitive abilities of these animals. Baboons are particularly valuable in this context because different species reveal substantial variation in social organization and degree of despotism. Field experiments revealed considerable variation in the allocation of social attention: while the competitive chacma baboons were highly sensitive to deviations from the social order, the highly tolerant Guinea baboons showed a confirmation bias. This bias may be a result of the high gregariousness of the species, which puts a premium on ignoring social noise. Variation in despotism clearly impacted the use of signals to regulate social interactions. For instance, male-male interactions in chacma baboons mostly comprised dominance displays, while Guinea baboon males evolved elaborate greeting rituals that serve to confirm group membership and test social bonds. Strikingly, the structure of signal repertoires does not differ substantially between different baboon species. In conclusion, the motivational disposition to engage in affiliation or aggression appears to be more malleable during evolution than structural elements of the behavioral repertoire; this insight is crucial for understanding the dynamics of social evolution.
Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach
The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1) and rapidly recovered (2) according to the requirements of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, which offer a unique window onto the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to the nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in its vector memories) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
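As a minimal illustration of the vector memory the abstract refers to, the sketch below accumulates displacements along an outbound path into a home vector. It assumes perfect compass and odometry cues, which real insects do not have, and is not any of the specific circuit models mentioned.

import numpy as np

rng = np.random.default_rng(2)
position = np.zeros(2)
home_vector = np.zeros(2)

for _ in range(200):                      # random outbound foraging path
    heading = rng.uniform(0, 2 * np.pi)   # heading from an (idealized) compass sense
    step = np.array([np.cos(heading), np.sin(heading)])
    position += step                      # true position (unknown to the agent)
    home_vector -= step                   # path-integrated estimate of the nest direction

# With perfect integration, following the home vector returns the agent to the nest.
assert np.allclose(position + home_vector, np.zeros(2))
print("distance to nest:", np.linalg.norm(position), "| home vector:", home_vector)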
COSYNE 2023
The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.
Silences, Spikes and Bursts: Three-Part Knot of the Neural Code
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labeling action potentials emitted at a particularly high frequency with a metonym – bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. In this talk, I will discuss the implications of seeing the neural code as having three syllables: silences, spikes and bursts. In particular, I will describe recent theoretical and experimental results that implicate bursting in the implementation of top-down attention and the coordination of learning.
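Since the talk hinges on distinguishing bursts from isolated spikes, here is a toy inter-spike-interval burst detector; the 10 ms threshold and the spike times are arbitrary choices for illustration, not a definition endorsed by the speaker.

import numpy as np

def split_bursts(spike_times, isi_threshold=0.01):
    """Group spike times (seconds) into events; runs of short ISIs form bursts."""
    spike_times = np.asarray(spike_times)
    events, current = [], [spike_times[0]]
    for prev, t in zip(spike_times[:-1], spike_times[1:]):
        if t - prev <= isi_threshold:
            current.append(t)      # continues the current event
        else:
            events.append(current) # close the event, start a new one
            current = [t]
    events.append(current)
    return events

spikes = [0.100, 0.104, 0.107, 0.300, 0.650, 0.654]   # hypothetical spike train
for ev in split_bursts(spikes):
    print("burst" if len(ev) > 1 else "isolated spike", ev)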
A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures
The ‘quantification’ of perception is arguably one of the most important and most difficult aspects of studying perception. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM’s parameters are self-tuned and thus do not need to be chosen manually using heuristics (e.g. the step size in the staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
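ZOOM itself is available from the authors as open-source code and is not reproduced here. For contrast, the sketch below implements the kind of hand-tuned procedure the abstract argues against: a classic 1-up/2-down staircase run on a simulated observer, where the step size and starting intensity must be chosen by heuristics.

import numpy as np

rng = np.random.default_rng(3)
threshold_midpoint = 0.3            # midpoint of a hypothetical psychometric function

def observer_correct(contrast):
    """Simulated observer with a logistic psychometric function."""
    p = 1.0 / (1.0 + np.exp(-(contrast - threshold_midpoint) / 0.05))
    return rng.random() < p

contrast, step = 0.8, 0.05          # starting intensity and step size: hand-tuned!
n_correct, reversals, last_dir = 0, [], None

while len(reversals) < 10:
    if observer_correct(contrast):
        n_correct += 1
        if n_correct < 2:
            continue                # need two correct responses before stepping down
        n_correct, direction = 0, -1
        contrast -= step            # make the task harder
    else:
        n_correct, direction = 0, +1
        contrast += step            # make the task easier
    if last_dir is not None and direction != last_dir:
        reversals.append(contrast)  # record the intensity at each direction reversal
    last_dir = direction

# The 1-up/2-down rule converges near the 70.7%-correct point of the function.
print("staircase threshold estimate:", round(float(np.mean(reversals)), 3))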
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. In contrast with those species, however, rodent V1 displays a very different type of spatial organization. Instead of orientation maps, neurons are arranged in a “salt-and-pepper” pattern, where adjacent neurons can have completely different preferred orientations. This structure has motivated both experimental and theoretical research aimed at determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses must also take into account that the thalamic neurons sending their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of a connection between a cortical neuron and a thalamic neuron depends only on the relative distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but that the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse-correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
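To make the distance-dependent random wiring idea concrete, here is a cartoon in which a single cortical cell samples ON/OFF thalamic inputs with a Gaussian distance-dependent probability and its aggregate input acquires an orientation bias by chance. Receptive fields are reduced to signed points, so this is an illustration of the mechanism only, not the authors' analytical model.

import numpy as np

rng = np.random.default_rng(4)

# Thalamic cells: random positions in a patch, random ON (+1) / OFF (-1) polarity.
n_thal = 400
pos = rng.uniform(-1, 1, size=(n_thal, 2))
polarity = rng.choice([-1.0, 1.0], size=n_thal)

# Connection probability decays with distance from the cortical cell's
# retinotopic position (here the origin).
sigma_conn = 0.4
p_connect = np.exp(-np.sum(pos**2, axis=1) / (2 * sigma_conn**2))
connected = rng.random(n_thal) < p_connect

# Response of the aggregate input to gratings of different orientations:
# the F1 (phase-modulation) amplitude of the signed, connected point cloud.
spatial_freq = 2.0
orientations = np.linspace(0, np.pi, 12, endpoint=False)
responses = []
for theta in orientations:
    k = 2 * np.pi * spatial_freq * np.array([np.cos(theta), np.sin(theta)])
    phase = pos[connected] @ k
    responses.append(np.abs(np.sum(polarity[connected] * np.exp(1j * phase))))

responses = np.array(responses)
osi = (responses.max() - responses.min()) / (responses.max() + responses.min())
print("preferred orientation (deg):", np.degrees(orientations[responses.argmax()]))
print("orientation selectivity index:", round(float(osi), 2))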
REM sleep and the energy allocation hypothesis
Dynamics of cortical circuits: underlying mechanisms and computational implications
A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical work has shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear responses in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections that has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence for moderate coupling and loose balance in the mouse cortex.
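To make the two scalings concrete, the snippet below simply tabulates how single-synapse efficacy shrinks with the in-degree K in each regime; the prefactor is an arbitrary placeholder, not a value from the talk.

import numpy as np

# Current-based strong coupling: efficacy ~ 1/sqrt(K).
# Conductance-based strong coupling: efficacy ~ 1/log(K), which shrinks far more slowly.
J0 = 1.0
for K in (100, 1_000, 10_000, 100_000):
    print(f"K={K:>7}: current-based J0/sqrt(K) = {J0/np.sqrt(K):.5f}   "
          f"conductance-based J0/log(K) = {J0/np.log(K):.5f}")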
Meta-learning functional plasticity rules in neural networks
Synaptic plasticity is known to be a key player in the brain’s life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data and/or prior knowledge about the network's computation. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
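A toy version of the general recipe, under many simplifying assumptions: parameterise a local rule, train a single linear unit with it in an inner loop, and adjust the rule's parameters in an outer loop to reduce a task loss (here by finite differences, rather than the gradient-based, adversarial, or simulation-based methods of the talk). The task and rule terms are hypothetical.

import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal((200, 5))             # inputs for a hypothetical task
y = x @ rng.standard_normal(5)                # matching targets

def inner_loss(theta, n_steps=50):
    """Train one linear unit with the parameterised local rule; return task loss."""
    local = np.random.default_rng(0)          # common random numbers -> clean meta-gradients
    w = np.zeros(5)
    for _ in range(n_steps):
        idx = local.integers(len(x))
        pre, target = x[idx], y[idx]
        post = w @ pre
        err = target - post
        # Candidate local rule: Hebbian, error-modulated, and weight-decay terms.
        w += theta[0] * pre * post + theta[1] * pre * err + theta[2] * w
    return np.mean((x @ w - y) ** 2)

theta = np.zeros(3)
for _ in range(200):                          # outer (meta) loop
    grad = np.zeros(3)
    for i in range(3):                        # finite-difference meta-gradient
        e = np.zeros(3); e[i] = 1e-3
        grad[i] = (inner_loss(theta + e) - inner_loss(theta - e)) / 2e-3
    theta -= 1e-3 * grad / (np.linalg.norm(grad) + 1e-8)   # normalised meta-step

print("meta-learned rule parameters:", np.round(theta, 3))
print("task loss with learned rule:", round(float(inner_loss(theta)), 3))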
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments that expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Investigating hippocampal synaptic plasticity in Schizophrenia: a computational and experimental approach using MEA recordings
Bernstein Conference 2024
Migraine mutation of a Na+ channel induces a switch in excitability type and energetically expensive spikes in an experimentally-constrained model of fast-spiking neurons
Bernstein Conference 2024
Reconciling Diverse Experimental Findings on Inhibitory Tuning in the Mouse Visual Cortex
Bernstein Conference 2024
A novel experimental framework for simultaneous measurement of excitatory and inhibitory conductances
COSYNE 2022
Normative models of spatio-spectral decorrelation in natural scenes predict experimentally observed ratio of PR types
COSYNE 2022
Augmented Gaussian process variational autoencoders for multi-modal experimental data
COSYNE 2023
Experimental and computational evidence of learned synaptic dynamics to enhance temporal processing
COSYNE 2025
Astrocytic S100B protein in experimental autoimmune encephalomyelitis processes
FENS Forum 2024
Carbon monoxide as potent modulator of pain- and anxiety-related behavior in experimental chronic pelvic pain syndrome
FENS Forum 2024
Cellular and synaptic alterations of arkypallidal neurons in experimental parkinsonism and L-DOPA-induced dyskinesia
FENS Forum 2024
Circulating microRNAs and isomiRs as biomarkers for the initial insult and epileptogenesis in four experimental epilepsy models – The EPITARGET study
FENS Forum 2024
The effect of biocellulose graft and vascular endothelial growth factor on angiogenesis in experimental sciatic nerve injury
FENS Forum 2024
Effects of dietary supplementation with deuterated polyunsaturated fatty acids in experimental traumatic brain injury
FENS Forum 2024
Effects of an online intervention based on pain neuroscience education for pregnant women with lumbar pain on pain, disability, and kinesiophobia: A quasi-experimental pilot study
FENS Forum 2024
Establishing an experimental sgRNA expression screening assay for CRISPR activation in vitro
FENS Forum 2024
Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods
FENS Forum 2024
Experimental model for strain-induced mechanical neurostimulation on human progenitor neurons
FENS Forum 2024
Exploring social hierarchies: An experimental study using RFID technology in mice
FENS Forum 2024
Growth hormone assay and histological changes in the pituitary gland of experimentally induced juvenile hydrocephalic rats
FENS Forum 2024
Inhibiting DJ-1 oxidation reduces neurofunctional deficits after experimental intracerebral haemorrhage
FENS Forum 2024
Interleukin-9 protects from microglia- and TNF-mediated synaptotoxicity in experimental multiple sclerosis
FENS Forum 2024
Longitudinal single-cell and brain transcriptomic characterization of microglia signatures during experimental demyelination and remyelination
FENS Forum 2024
Morphological and molecular characterization of cortical protoplasmic astrocytes during early experimental demyelination
FENS Forum 2024
Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling
FENS Forum 2024
Role of complement in regulating glutamate transmission in an experimental model of multiple sclerosis
FENS Forum 2024
Ruminative stress: How repetitive thoughts and stress interact in an experimental setting
FENS Forum 2024
Self-degradation of memory in Alzheimer’s disease: Experimental testing of the hypothesis and search for methods of neuroprotection
FENS Forum 2024
Sex-based differences in a mouse model of experimental colitis housed in environmental enrichment
FENS Forum 2024
Spreading depolarization disrupts neurovascular coupling after experimental acute ischemic stroke
FENS Forum 2024
Statistics versus animal welfare: Validation of the experimental unit in the focus of 3R
FENS Forum 2024
Temporal analysis of the infiltration dynamics of pro-inflammatory cytokine-producing innate and adaptive immune cells following experimental traumatic brain injury in mice
FENS Forum 2024
Transcriptional changes in the prefrontal cortex are associated with cognitive impairment in an experimental mouse model of multiple sclerosis
FENS Forum 2024
Unveiling cortical microvascular dysfunction and neurodegeneration mechanisms in experimental autoimmune encephalomyelitis
FENS Forum 2024