
E Prop

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with E Prop across World Wide.
70 curated items · 60 Seminars · 10 ePosters
Updated 2 months ago
Seminar · Neuroscience

Cellular Crosstalk in Brain Development, Evolution and Disease

Silvia Cappello
Molecular Physiology of Neurogenesis at the Ludwig Maximilian University of Munich
Oct 1, 2025

Cellular crosstalk is an essential process during brain development and is influenced by numerous factors, including cell morphology, adhesion, the local extracellular matrix and secreted vesicles. Inspired by mutations associated with neurodevelopmental disorders, we focus on understanding the role of extracellular mechanisms essential for the proper development of the human brain. Therefore, we combine 2D and 3D in vitro human models to better understand the molecular and cellular mechanisms involved in progenitor proliferation and fate, migration and maturation of excitatory and inhibitory neurons during human brain development and tackle the causes of neurodevelopmental disorders.

Seminar · Open Source

The SIMple microscope: Development of a fibre-based platform for accessible SIM imaging in unconventional environments

Rebecca McClelland
PhD student at the University of Cambridge, United Kingdom.
Aug 25, 2025

Advancements in imaging speed, depth and resolution have made structured illumination microscopy (SIM) an increasingly powerful optical sectioning (OS) and super-resolution (SR) technique, but these developments remain inaccessible to many life science researchers due to the cost, optical complexity and delicacy of these instruments. We address these limitations by redesigning the optical path using in-line fibre components that are compact, lightweight and easily assembled in a “Plug & Play” modality, without compromising imaging performance. They can be integrated into an existing widefield microscope with a minimum of optical components and alignment, making OS-SIM more accessible to researchers with less optics experience. We also demonstrate a complete SR-SIM imaging system with dimensions 300 mm × 300 mm × 450 mm. We propose to enable accessible SIM imaging by utilising its compact, lightweight and robust design to transport it where it is needed, and image in “unconventional” environments where factors such as temperature and biosafety considerations currently limit imaging experiments.

Seminar · Psychology

Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake

Casey Becker
University of Pittsburgh
Apr 15, 2025

Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion, beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.

Seminar · Neuroscience

Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades

Andrej Bicanski
Max Planck Institute for Human Cognitive and Brain Sciences
Mar 11, 2025

How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.

Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience, and Hertie Institute for Clinical Brain Research University of Tübingen
Feb 11, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

Seminar · Neuroscience

Brain circuits for spatial navigation

Ann Hermundstad, Ila Fiete, Barbara Webb
Janelia Research Campus; MIT; University of Edinburgh
Nov 28, 2024

In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
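The head-direction computation Fiete described can be caricatured in a few lines. This is an illustrative sketch, not any lab's model: the bump shape (von Mises), the population size, and the velocity stream are all invented for demonstration; the point is only that a bump of activity on a ring integrates angular velocity, and a population-vector readout recovers the heading.

```python
import numpy as np

n = 64
pref = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # preferred headings

def bump(theta, kappa=8.0):
    """Von Mises-shaped activity bump centred on heading theta."""
    return np.exp(kappa * np.cos(pref - theta))

def decode(activity):
    """Population-vector readout of the bump's position on the ring."""
    return np.angle(np.sum(activity * np.exp(1j * pref)))

# Integrate a stream of angular-velocity inputs (rad per step).
theta, dt = 0.0, 1.0
for omega in [0.05] * 40 + [-0.02] * 25:
    theta = (theta + omega * dt) % (2.0 * np.pi)  # velocity integration
    activity = bump(theta)                        # bump tracks heading

# Ground truth: 0.05*40 - 0.02*25 = 1.5 rad.
print(round(decode(activity) % (2.0 * np.pi), 3))  # ≈ 1.5
```

The low-dimensional manifold prediction mentioned in the talk follows directly: every `bump(theta)` vector lies on a one-dimensional ring embedded in the 64-dimensional activity space.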

Seminar · Neuroscience

Brain-Wide Compositionality and Learning Dynamics in Biological Agents

Kanaka Rajan
Harvard Medical School
Nov 12, 2024

Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.

Seminar · Neuroscience

Feedback-induced dispositional changes in risk preferences

Stefano Palminteri
Institut National de la Santé et de la Recherche Médicale & École Normale Supérieure, Paris
Oct 28, 2024

Contrary to the original normative decision-making standpoint, empirical studies have repeatedly reported that risk preferences are affected by the disclosure of choice outcomes (feedback). Although no consensus has yet emerged regarding the properties and mechanisms of this effect, a widespread and intuitive hypothesis is that repeated feedback affects risk preferences by means of a learning effect, which alters the representation of subjective probabilities. Here, we ran a series of seven experiments (N = 538), tailored to decipher the effects of feedback on risk preferences. Our results indicate that the presence of feedback consistently increases risk-taking, even when the risky option is economically less advantageous. Crucially, risk-taking increases just after the instructions, before participants experience any feedback. These results challenge the learning account, and advocate for a dispositional effect, induced by the mere anticipation of feedback information. Epistemic curiosity and regret avoidance may drive this effect in partial and complete feedback conditions, respectively.

Seminar · Neuroscience

Neural mechanisms governing the learning and execution of avoidance behavior

Mario Penzo
National Institute of Mental Health, Bethesda, USA
Jun 18, 2024

The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.

Seminar · Neuroscience · Recording

Blood-brain barrier dysfunction in epilepsy: Time for translation

Alon Friedman
Dalhousie University
Feb 27, 2024

The neurovascular unit (NVU) consists of cerebral blood vessels, neurons, astrocytes, microglia, and pericytes. It plays a vital role in regulating blood flow and ensuring the proper functioning of neural circuits. Among other things, this is made possible by the blood-brain barrier (BBB), which acts as both a physical and functional barrier. Previous studies have shown that dysfunction of the BBB is common in most neurological disorders and is associated with neural dysfunction. Our studies have demonstrated that BBB dysfunction results in the transformation of astrocytes through transforming growth factor beta (TGFβ) signaling. This leads to activation of the innate neuroinflammatory system, changes in the extracellular matrix, and pathological plasticity. These changes ultimately result in dysfunction of the cortical circuit, lower seizure threshold, and spontaneous seizures. Blocking TGFβ signaling and its associated pro-inflammatory pathway can prevent this cascade of events, reduce neuroinflammation, repair BBB dysfunction, and prevent post-injury epilepsy, as shown in experimental rodents. To further understand and assess BBB integrity in human epilepsy, we developed a novel imaging technique that quantitatively measures BBB permeability. Our findings have confirmed that BBB dysfunction is common in patients with drug-resistant epilepsy and can assist in identifying the ictal-onset zone prior to surgery. Current clinical studies are ongoing to explore the potential of targeting BBB dysfunction as a novel treatment approach and investigate its role in drug resistance, the spread of seizures, and comorbidities associated with epilepsy.

Seminar · Psychology

10 “simple rules” for socially responsible science

Alon Zivony
University of Sheffield
Dec 10, 2023

Guidelines concerning the potentially harmful effects of scientific studies have historically focused on minimizing risk for participants. However, studies can also indirectly inflict harm on individuals and social groups through how they are designed, reported, and disseminated. As evidenced by recent criticisms and retractions of high-profile studies dealing with a wide variety of social issues, there is a scarcity of resources and guidance on how one can conduct research in a socially responsible manner. As such, even motivated researchers might publish work that has negative social impacts due to a lack of awareness. To address this, we proposed 10 recommendations (“simple rules”) for researchers who wish to conduct more socially responsible science. These recommendations cover major considerations throughout the life cycle of a study from inception to dissemination. They are not aimed to be a prescriptive list or a deterministic code of conduct. Rather, they are meant to help motivated scientists to reflect on their social responsibility as researchers and actively engage with the potential social impact of their research.

Seminar · Neuroscience

Piecing together the puzzle of emotional consciousness

Tahnée Engelen
Ecole Normale Supérieure
Dec 7, 2023

Conscious emotional experiences are very rich in their nature, and can encompass anything ranging from the most intense panic when facing immediate threat, to the overwhelming love felt when meeting your newborn. It is then no surprise that capturing all aspects of emotional consciousness, such as intensity, valence, and bodily responses, into one theory has become the topic of much debate. Key questions in the field concern how we can actually measure emotions and which type of experiments can help us distill the neural correlates of emotional consciousness. In this talk I will give a brief overview of theories of emotional consciousness and where they disagree, after which I will dive into the evidence proposed to support these theories. Along the way I will discuss to what extent studying emotional consciousness is ‘special’ and will suggest several tools and experimental contrasts we have at our disposal to further our understanding on this intriguing topic.

Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 6, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; we do not have an author of the paper joining us.

Title: Brain decoding: toward real-time reconstruction of visual perception

Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement of image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding, in real time, of the visual processes continuously unfolding within the human brain.

Speaker: Dr. Paul Scotti (Stability AI, MedARC)

Paper link: https://arxiv.org/abs/2310.19812
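The image-retrieval evaluation the abstract scores can be sketched with stand-in data. Nothing below uses the paper's models: random vectors replace the learned MEG and DINOv2 embeddings, and the candidate count, noise level, and scoring are invented assumptions; only the cosine-similarity ranking step mirrors the described setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, dim = 100, 64
image_emb = rng.normal(size=(n_images, dim))  # stand-in for image embeddings

# Pretend the decoder recovers a noisy version of each image's embedding.
decoded = image_emb + 0.5 * rng.normal(size=(n_images, dim))

def normalize(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

# Rank all candidate images by cosine similarity to each decoded embedding;
# top-1 retrieval asks whether the true image ranks first.
sims = normalize(decoded) @ normalize(image_emb).T  # (n_images, n_images)
top1 = (sims.argmax(axis=1) == np.arange(n_images)).mean()
print(f"top-1 retrieval accuracy: {top1:.2f}")
```

With mild noise the true image wins almost every ranking; a "7X improvement" in the paper's sense compares such accuracies between decoders, not anything about these toy numbers.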

Seminar · Neuroscience · Recording

Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing

Pulin Gong
The University of Sydney
Aug 10, 2023

The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.

Seminar · Artificial Intelligence · Recording

Computational and mathematical approaches to myopigenesis

C. Ross Ethier
Georgia Institute of Technology and Emory University
Jul 31, 2023

Myopia is predicted to affect 50% of all people worldwide by 2050, and is a risk factor for significant, potentially blinding ocular pathologies, such as retinal detachment and glaucoma. Thus, there is significant motivation to better understand the process of myopigenesis and to develop effective anti-myopigenic treatments. In nearly all cases of human myopia, scleral remodeling is an obligate step in the axial elongation that characterizes the condition. Here I will describe the development of a biomechanical assay based on transient unconfined compression of scleral samples. By treating the sclera as a poroelastic material, one can determine scleral biomechanical properties from extremely small samples, such as those obtained from the mouse eye. These properties provide proxy measures of scleral remodeling, and have allowed us to identify all-trans retinoic acid (atRA) as a myopigenic stimulus in mice. I will also describe nascent collaborative work on modeling the transport of atRA in the eye.

Seminar · Neuroscience

Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time

Harel Shouval
The University of Texas at Houston
Jun 13, 2023

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), which means they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature specific representations of time are learned, allowing for neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. In order to show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
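The canonical TD framework the talk argues against can be illustrated with a tabular sketch. This is a generic textbook TD(0) loop over a fixed temporal basis (one state per time step after the cue), with invented parameters; it is not the FLEX model or any implementation from the talk.

```python
import numpy as np

# States are time steps after a cue; reward arrives at the final step.
n_steps, alpha, gamma = 10, 0.1, 0.95
V = np.zeros(n_steps + 1)  # value per time step; terminal value stays 0

for episode in range(500):
    for t in range(n_steps):
        r = 1.0 if t == n_steps - 1 else 0.0  # reward at trial end
        # Reward prediction error: delta = r + gamma * V(s') - V(s)
        delta = r + gamma * V[t + 1] - V[t]
        V[t] += alpha * delta

# After learning, value ramps up toward the reward time, and the RPE at the
# (now fully predicted) reward shrinks toward zero — the classic account of
# dopaminergic responses transferring from reward to cue.
print(np.round(V[:n_steps], 2))
```

The talk's objection targets the first line of the loop: the states `t` form a fixed temporal basis tied to the cue, whereas FLEX lets the temporal representations themselves be learned.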

Seminar · Neuroscience · Recording

The Effects of Movement Parameters on Time Perception

Keri Anne Gladhill
Florida State University, Tallahassee, Florida.
May 30, 2023

Mobile organisms must be capable of deciding both where and when to move in order to keep up with a changing environment; therefore, a strong sense of time is necessary; otherwise, we would fail in many of our movement goals. Despite this intrinsic link between movement and timing, only recently has research begun to investigate the interaction. Two primary effects that have been observed include: movements biasing time estimates (i.e., affecting accuracy) as well as making time estimates more precise. The goal of this presentation is to review this literature, discuss a Bayesian cue combination framework to explain these effects, and discuss the experiments I have conducted to test the framework. The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception with both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (e.g., fMRI, TMS, EEG) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
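For Gaussian cues, the Bayesian cue-combination framework being tested reduces to precision-weighted averaging: the fused estimate is always at least as precise as the better single cue, which is why adding a (reliable) movement cue should sharpen timing. A minimal sketch, with made-up numbers standing in for the internal-clock and movement cues:

```python
def combine(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian cues."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Internal clock alone: noisy estimate of a 2.0 s interval (invented values).
clock = (2.2, 0.25)
# Movement-based cue: an independent estimate with its own noise.
movement = (1.9, 0.16)

mu, var = combine(*clock, *movement)
# Fused variance is smaller than either cue alone -> higher precision,
# mirroring the "movement improves timing precision" effect; if the
# movement cue becomes very noisy (large var), its weight vanishes and
# the benefit disappears, as in Exp. 3A & 3B.
print(round(mu, 3), round(var, 3))
```

The framework's signature predictions fall out of the weights: biases follow the more reliable cue, and degrading one cue's reliability smoothly removes both its pull on the estimate and the precision gain.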

Seminar · Neuroscience · Recording

Internal representation of musical rhythm: transformation from sound to periodic beat

Tomas Lenc
Institute of Neuroscience, UCLouvain, Belgium
May 30, 2023

When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. 
Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.

Seminar · Neuroscience

Richly structured reward predictions in dopaminergic learning circuits

Angela J. Langdon
National Institute of Mental Health at National Institutes of Health (NIH)
May 16, 2023

Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes, using these to guide behavior in even quite simple reward learning tasks.

SeminarNeuroscience

The balanced brain: two-photon microscopy of inhibitory synapse formation

Corette Wierenga
Donders Institute
May 10, 2023

Coordination between excitatory and inhibitory synapses (providing positive and negative signals respectively) is required to ensure proper information processing in the brain. Many brain disorders, especially neurodevelopmental disorders, are rooted in a specific disturbance of this coordination. In my research group we use a combination of two-photon microscopy and electrophysiology to examine how inhibitory synapses are formed and how this formation is coordinated with nearby excitatory synapses.

SeminarNeuroscience

Euclidean coordinates are the wrong prior for primate vision

Gary Cottrell
University of California, San Diego (UCSD)
May 9, 2023

The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea with the dropoff of resolution in the periphery, fully 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the inverted face effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
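The geometry described above is easy to verify numerically: in log-polar coordinates (log r, θ), scaling a retinal position becomes a horizontal (log r) shift and rotation becomes a vertical (θ) shift, which is why a shift-invariant network in this domain gains scale and rotation invariance. A minimal pure-Python check:

```python
# Log-polar coordinates: (x, y) -> (log r, theta). Scaling multiplies r,
# i.e. adds a constant to log r (a shift along one axis); rotation adds
# a constant to theta (a shift along the other axis).
import math

def log_polar(x, y):
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

u0, v0 = log_polar(3.0, 4.0)

# Scale by 2: log r shifts by log 2, angle unchanged.
u1, v1 = log_polar(6.0, 8.0)
assert abs((u1 - u0) - math.log(2)) < 1e-12
assert abs(v1 - v0) < 1e-12

# Rotate by 0.3 rad: angle shifts by 0.3, log r unchanged.
c, s = math.cos(0.3), math.sin(0.3)
u2, v2 = log_polar(3.0 * c - 4.0 * s, 3.0 * s + 4.0 * c)
assert abs(u2 - u0) < 1e-12
assert abs((v2 - v0) - 0.3) < 1e-9
```

Translation of the object in image coordinates has no such simple form in this domain, which is the invariance the model recovers through multiple fixations.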

SeminarNeuroscienceRecording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 21, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

SeminarNeuroscience

Dissociation between superior colliculus visual response properties and short-latency ocular position drift responses

Tatiana Malevich and Fatemeh Khademi
Mar 10, 2023
SeminarNeuroscienceRecording

PIEZO2 in somatosensory neurons coordinates gastrointestinal transit

Rocio Servin-Vences
The Scripps Research Institute
Feb 28, 2023

The transit of food through the gastrointestinal tract is critical for nutrient absorption and survival, and the gastrointestinal tract has the ability to initiate motility reflexes triggered by luminal distention. This complex function depends on the crosstalk between extrinsic and intrinsic neuronal innervation within the intestine, as well as local specialized enteroendocrine cells. However, the molecular mechanisms and the subset of sensory neurons underlying the initiation and regulation of intestinal motility remain largely unknown. Here, we show that humans lacking PIEZO2 exhibit impaired bowel sensation and motility. Piezo2 in mouse dorsal root but not nodose ganglia is required to sense gut content, and this activity slows down food transit rates in the stomach, small intestine, and colon. Indeed, Piezo2 is directly required to detect colon distension in vivo. Our study unveils the mechanosensory mechanisms that regulate the transit of luminal contents throughout the gut, which is a critical process to ensure proper digestion, nutrient absorption, and waste removal. These findings set the foundation of future work to identify the highly regulated interactions between sensory neurons, enteric neurons and non-neuronal cells that control gastrointestinal motility.

SeminarPsychology

A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures

Julien Audiffren
University of Fribourg
Feb 28, 2023

The ‘quantification’ of perception is arguably both one of the most important and most difficult aspects of perception study. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM parameters are self-tuned, and thus do not need to be manually chosen using heuristics (e.g., step size in the Staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when the parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
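To illustrate the parameter sensitivity that motivates this work (ZOOM itself is not reproduced here; see its open-source release), consider the classic fixed-step 1-up/1-down staircase the abstract mentions, run against an idealized noise-free observer. The track ends up oscillating around the threshold, so the estimate's bias is tied to the hand-chosen step size. All numbers are illustrative.

```python
# Fixed-step 1-up/1-down staircase against an idealized noise-free
# observer (detects iff the level exceeds the true threshold). The
# estimate's accuracy is bounded by the experimenter-chosen step size.

def staircase_estimate(true_threshold, step, n_trials=300, start=1.0):
    level = start
    track = []
    for _ in range(n_trials):
        detected = level > true_threshold      # idealized observer
        level += -step if detected else step   # 1-up/1-down rule
        track.append(level)
    tail = track[n_trials // 2:]               # discard the initial descent
    return sum(tail) / len(tail)

coarse = staircase_estimate(0.5, step=0.1)
fine = staircase_estimate(0.5, step=0.01)
# The coarse step leaves a bias on the order of step/2; a smaller step
# shrinks the bias, but would also slow convergence from a distant start.
assert abs(coarse - 0.5) > abs(fine - 0.5)
```

A parameter-free procedure removes exactly this kind of manual trade-off from the protocol.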

SeminarArtificial IntelligenceRecording

Unique features of oxygen delivery to the mammalian retina

Robert Linsenmeier
Northwestern University
Feb 6, 2023

Like all neural tissue, the retina has a high metabolic demand, and requires a constant supply of oxygen. Second and third order neurons are supplied by the retinal circulation, whose characteristics are similar to brain circulation. However, the photoreceptor region, which occupies half of the retinal thickness, is avascular, and relies on diffusion of oxygen from the choroidal circulation, whose properties are very different, as well as the retinal circulation. By fitting diffusion models to oxygen measurements made with oxygen microelectrodes, it is possible to understand the relative roles of the two circulations under normal conditions of light and darkness, and what happens if the retina is detached or the retinal circulation is occluded. Most of this work has been done in vivo in rat, cat, and monkey, but recent work in the isolated mouse retina will also be discussed.

SeminarNeuroscienceRecording

Geometry of concept learning

Haim Sompolinsky
The Hebrew University of Jerusalem and Harvard University
Jan 3, 2023

Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorcher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
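The nearest-prototype decision rule described above is simple to state in code: average the few training examples of each novel concept into a prototype, then assign a test point to the nearest prototype. The toy 2-D "representations" below are invented for illustration; in the work itself the points would be high-dimensional neural firing-rate vectors.

```python
# Nearest-prototype few-shot classification. Toy 2-D points stand in for
# neural firing-rate vectors; concepts "A" and "B" are made up.
import math

def prototype(examples):
    """Mean of the few training examples of one concept."""
    dim = len(examples[0])
    return [sum(e[i] for e in examples) / len(examples) for i in range(dim)]

def classify(x, prototypes):
    """Label of the prototype nearest to test point x."""
    return min(prototypes, key=lambda label: math.dist(x, prototypes[label]))

# Two novel concepts, five examples each (5-shot learning).
protos = {
    "A": prototype([(0.9, 0.1), (1.1, -0.1), (1.0, 0.2), (0.8, 0.0), (1.2, 0.1)]),
    "B": prototype([(-1.0, 0.9), (-0.8, 1.1), (-1.2, 1.0), (-0.9, 0.8), (-1.1, 1.2)]),
}
print(classify((1.0, 0.0), protos))   # "A"
print(classify((-1.0, 1.0), protos))  # "B"
```

The theory in the talk links the accuracy of exactly this rule to the geometry (radius, dimension, separation) of the underlying concept manifolds.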

SeminarNeuroscienceRecording

Flexible selection of task-relevant features through population gating

Joao Barbosa
Ostojic lab, Ecole Normale Superieure
Dec 6, 2022

Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within A1 but controlled by top-down inputs from mPFC, can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.

SeminarNeuroscienceRecording

Neural networks in the replica-mean field limits

Thibaud Taillefumier
The University of Texas at Austin
Nov 29, 2022

In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.

SeminarNeuroscienceRecording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 16, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping, but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
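A minimal sketch of the raw ingredient of this approach: lagged correlations computed from a simulated linear stochastic (vector-autoregressive) process. The corrected PEMs themselves are not reproduced here; this only shows that when node 0 drives node 1, the forward lagged correlation exceeds the reverse one, which is the asymmetry the edge measures build on. All simulation parameters are illustrative.

```python
# Two-node linear stochastic process in which node 0 drives node 1.
# The lag-1 correlation corr(x0[t], x1[t+1]) should exceed the
# reverse-direction corr(x1[t], x0[t+1]).
import random

def simulate_var(n_steps=20000, coupling=0.5, decay=0.9, seed=1):
    rng = random.Random(seed)
    x0, x1 = 0.0, 0.0
    xs0, xs1 = [], []
    for _ in range(n_steps):
        x0, x1 = (decay * x0 + rng.gauss(0, 1),
                  decay * x1 + coupling * x0 + rng.gauss(0, 1))
        xs0.append(x0)
        xs1.append(x1)
    return xs0, xs1

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    sa = (sum((u - ma) ** 2 for u in a) / n) ** 0.5
    sb = (sum((v - mb) ** 2 for v in b) / n) ** 0.5
    return cov / (sa * sb)

xs0, xs1 = simulate_var()
forward = corr(xs0[:-1], xs1[1:])   # direction 0 -> 1, lag 1
reverse = corr(xs1[:-1], xs0[1:])   # direction 1 -> 0, lag 1
assert forward > reverse
```

Note that the reverse correlation is still nonzero because both series are autocorrelated; correcting for exactly such confounds is what the paper's PEMs add on top of this raw measure.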

SeminarNeuroscience

It’s All About Motion: Functional organization of the multisensory motion system at 7T

Anna Gaglianese
Laboratory for Investigative Neurophysiology, CHUV, Lausanne & The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
Nov 14, 2022

The human middle temporal complex (hMT+) has a crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, questioning whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will suggest that this mechanism may play a role in evaluating multisensory responses for visual, tactile and auditory stimuli in hMT+.

SeminarNeuroscienceRecording

Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity

Thomas Limbacher
TU Graz
Nov 8, 2022

Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task, as well as demonstrate superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question-answering tasks from a standard benchmark, achieving comparable performance to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attain a near-optimal strategy on a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built on this principle.
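The building block the abstract refers to, a local Hebbian weight update (Δw_ij ∝ post_i · pre_j), can act as a one-shot associative memory on its own. The sketch below stores a single key→value association in a weight matrix and retrieves the value from the key; it is a rate-based toy, not the spiking architecture itself, and the patterns and sizes are made up.

```python
# One-shot hetero-associative memory built from a single Hebbian update:
# W[i][j] += eta * value[i] * key[j]  (post * pre). Recall multiplies the
# key through W and applies a sign nonlinearity.

def hebbian_store(W, key, value, eta=1.0):
    for i in range(len(value)):
        for j in range(len(key)):
            W[i][j] += eta * value[i] * key[j]   # local Hebbian rule

def recall(W, key):
    raw = [sum(W[i][j] * key[j] for j in range(len(key))) for i in range(len(W))]
    return [1 if r > 0 else -1 for r in raw]     # sign nonlinearity

n_key, n_val = 8, 4
W = [[0.0] * n_key for _ in range(n_val)]
key = [1, -1, 1, 1, -1, -1, 1, -1]               # +/-1 patterns, illustrative
value = [1, -1, -1, 1]
hebbian_store(W, key, value)
print(recall(W, key) == value)  # True: the association is retrieved
```

Because the update is purely local (it touches only the pre- and post-synaptic activities of each synapse), it maps naturally onto neuromorphic hardware, which is the point made at the end of the abstract.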

SeminarNeuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.

SeminarNeuroscience

Development of Interictal Networks: Implications for Epilepsy Progression and Cognition

Jennifer Gelinas
Columbia University Medical Center, NY
Nov 1, 2022

Epilepsy is a common and disabling neurologic condition affecting adults and children that results from complex dysfunction of neural networks and is ineffectively treated with current therapies in up to one third of patients. This dysfunction can have especially severe consequences in the pediatric age group, where neurodevelopment may be irreversibly affected. Furthermore, although seizures are the most obvious manifestation of epilepsy, the cognitive and psychiatric dysfunction that often coexists in patients with this disorder has the potential to be equally disabling. Given these challenges, her research program aims to better understand how epileptic activity disrupts the proper development and function of neural networks, with the overall goal of identifying novel biomarkers and systems-level treatments for epileptic disorders and their comorbidities, especially those affecting children.

SeminarNeuroscienceRecording

Trial by trial predictions of subjective time from human brain activity

Maxine Sherman
University of Sussex, UK
Oct 25, 2022

Our perception of time isn’t like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life, the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people’s duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer: the biases in time perception that reflect how natural experience of time deviates from clock time.

SeminarNeuroscienceRecording

From Machine Learning to Autonomous Intelligence

Yann Le Cun
Meta-FAIR & Meta AI
Oct 18, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.

SeminarNeuroscienceRecording

How People Form Beliefs

Tali Sharot
University College London
Oct 13, 2022

In this talk I will present our recent behavioural and neuroscience research on how the brain motivates itself to form particular beliefs and why it does so. I will propose that the utility of a belief is derived from the potential outcomes associated with holding it. Outcomes can be internal (e.g., positive/negative feelings) or external (e.g., material gain/loss), and only some are dependent on belief accuracy. We show that belief change occurs when the potential outcomes of holding it alters, for example when moving from a safe environment to a threatening environment. Our findings yield predictions about how belief formation alters as a function of mental health. We test these predictions using a linguistic analysis of participants’ web searches ‘in the wild’ to quantify the affective properties of information they consume and relate those to reported psychiatric symptoms. Finally, I will present a study in which we used our framework to alter the incentive structure of social media platforms to reduce the spread of misinformation and improve belief accuracy.

SeminarNeuroscienceRecording

Learning Relational Rules from Rewards

Guillermo Puebla
University of Bristol
Oct 12, 2022

Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that required considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions of research.

SeminarPsychology

Social Curiosity

Ildikó Király
Eötvös Loránd University
Oct 12, 2022

In this lecture, I would like to share with the broad audience the empirical results gathered and the theoretical advancements made in the framework of the Lendület project entitled ’The cognitive basis of human sociality’. The main objective of this project was to understand the mechanisms that enable the unique sociality of humans, from the angle of cognitive science. In my talk,  I will focus on recent empirical evidence in the study of three fundamental social cognitive functions (social categorization, theory of mind and social learning; mainly from the empirical lenses of developmental psychology) in order to outline a theory that emphasizes the need to consider their interconnectedness. The proposal is that the ability to represent the social world along categories and the capacity to read others’ minds are used in an integrated way to efficiently assess the epistemic states of fellow humans by creating a shared representational space. The emergence of this shared representational space is both the result of and a prerequisite to efficient learning about the physical and social environment.

SeminarNeuroscienceRecording

Lateral entorhinal cortex directly influences medial entorhinal cortex through synaptic connections in layer 1

Brianna Vandrey
University of Edinburgh
Oct 11, 2022

Standard models of episodic memory suggest that lateral (LEC) and medial entorhinal cortex (MEC) send independent inputs to the hippocampus, each carrying different types of information. Here, we describe a pathway by which information is integrated between LEC and MEC prior to reaching hippocampus. We demonstrate that LEC sends strong projections to MEC arising from neurons that receive neocortical inputs. Activation of LEC inputs drives excitation of hippocampal-projecting neurons in MEC layer 2, typically followed by inhibition that is accounted for by parallel activation of local inhibitory neurons. We therefore propose that local circuits in MEC may support integration of ‘what’ and ‘where’ information.

SeminarNeuroscience

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta Fair
Oct 9, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf

SeminarNeuroscience

Setting network states via the dynamics of action potential generation

Susanne Schreiber
Humboldt University Berlin, Germany
Oct 4, 2022

To understand neural computation and the dynamics in the brain, we usually focus on the connectivity among neurons. In contrast, the properties of single neurons are often thought to be negligible, at least as far as the activity of networks is concerned. In this talk, I will contradict this notion and demonstrate how the biophysics of action-potential generation can have a decisive impact on network behaviour. Our recent theoretical work shows that, among regularly firing neurons, the somewhat unattended homoclinic type (characterized by a spike onset via a saddle homoclinic orbit bifurcation) particularly stands out: First, spikes of this type foster specific network states - synchronization in inhibitory and splayed-out/frustrated states in excitatory networks. Second, homoclinic spikes can easily be induced by changes in a variety of physiological parameters (like temperature, extracellular potassium, or dendritic morphology). As a consequence, such parameter changes can even induce switches in network states, solely based on a modification of cellular voltage dynamics. I will provide first experimental evidence and discuss functional consequences of homoclinic spikes for the design of efficient pattern-generating motor circuits in insects as well as for mammalian pathologies like febrile seizures. Our analysis predicts an interesting role for homoclinic action potentials as an integral part of brain dynamics in both health and disease.

SeminarNeuroscienceRecording

Building System Models of Brain-Like Visual Intelligence with Brain-Score

Martin Schrimpf
MIT
Oct 4, 2022

Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking: integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.

SeminarNeuroscience

Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties

SueYeon Chung
NYU/Flatiron
Sep 15, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to the parietal cortex to the hippocampus, and from calcium imaging to electrophysiology to fMRI datasets.
Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
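As background for the capacity theory mentioned above, the classical point-pattern result that manifold capacity generalizes is Cover's function-counting theorem, which can be evaluated exactly:

```python
from math import comb

def frac_separable(P, N):
    """Cover (1965): fraction of random binary labelings of P points in
    general position in N dimensions that are linearly separable."""
    count = 2 * sum(comb(P - 1, k) for k in range(N))
    return count / 2 ** P

# The classical capacity result: at P = 2N the fraction is exactly 1/2,
# which is why a perceptron's capacity is said to be 2 points per dimension.
print(frac_separable(20, 10))  # 0.5
```

The manifold theory replaces points with extended object manifolds, so capacity then depends on manifold radius and dimension rather than on point counts alone.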

SeminarNeuroscienceRecording

Introducing dendritic computations to SNNs with Dendrify

Michalis Pagkalos
IMBB FORTH
Sep 6, 2022

Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify, through simple commands, can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
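To illustrate what a reduced compartmental model is (independently of Dendrify's actual API), here is a generic two-compartment leaky neuron, a passive dendrite coupled to a soma, integrated with forward Euler; all parameters are illustrative:

```python
import numpy as np

# Generic two-compartment leaky neuron (soma + passive dendrite). This is
# NOT the Dendrify API; it only sketches the kind of reduced model such
# tools generate.
dt, T = 0.1, 200.0               # time step and duration (ms)
n = int(T / dt)
E_L, g_L, C = -70.0, 0.05, 1.0   # leak reversal (mV), leak (uS), capacitance (nF)
g_c = 0.08                       # soma-dendrite coupling conductance (uS)

v_s, v_d = E_L, E_L
v_trace = np.empty(n)
for i in range(n):
    I_d = 0.3 if 50.0 <= i * dt < 150.0 else 0.0  # current step into the dendrite (nA)
    dv_s = (g_L * (E_L - v_s) + g_c * (v_d - v_s)) / C
    dv_d = (g_L * (E_L - v_d) + g_c * (v_s - v_d) + I_d) / C
    v_s += dt * dv_s
    v_d += dt * dv_d
    v_trace[i] = v_s

# Dendritic input reaches the soma only through the coupling conductance,
# so the somatic depolarization is attenuated relative to direct injection.
print(round(v_trace.max() - E_L, 2))
```

In Dendrify itself such compartments are assembled through high-level commands and simulated with Brian2, as described in the abstract.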

SeminarNeuroscienceRecording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

SeminarPhysics of LifeRecording

Magnetic Handshake Materials

Chrisy Xiyu Du
Harvard University
Jul 31, 2022

Biological materials gain complexity from the programmable nature of their components. To manufacture materials with comparable complexity synthetically, we need to create building blocks with low crosstalk so that they only bind to their desired partners. Canonically, these building blocks are made using DNA strands or proteins to achieve specificity. Here we propose a new materials platform, termed Magnetic Handshake Materials, in which we program interactions through designing magnetic dipole patterns. This is a completely synthetic platform, enabled by magnetic printing technology, which is easier to both model theoretically and control experimentally. In this seminar, I will give an overview of the development of the Magnetic Handshake Materials platform, ranging from interaction and assembly design to function design.

SeminarNeuroscienceRecording

A Game Theoretical Framework for Quantifying​ Causes in Neural Networks

Kayson Fakhar​
ICNS Hamburg
Jul 5, 2022

Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node. A causal role is then assigned to the first node if this intervention led to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework. Namely, a node might causally influence another node, but how much, and through which mechanistic interactions? Therefore, establishing a causal relationship, however reliable, does not provide a proper causal understanding of the system, because there often exists a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). Then, I present our work in which we employed MSA on an Echo State Network (ESN), quantified how much its nodes were influencing each other, and compared these measures with the underlying synaptic strength. We found that: 1. Even though the network itself was sparse, every node could causally influence other nodes. In this case, a mere elucidation of causal relationships did not provide any useful information. 2. Additionally, the full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via other intermediate nodes. Our results show that just elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions. Moreover, quantifying causal interactions requires a systematic and extensive manipulation framework.
The framework put forward here benefits from employing neural network models, and in turn, provides explainability for them.
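The core of MSA, attributing performance to nodes via Shapley values computed over multi-site lesions, can be evaluated exactly for a tiny system; the 3-node value function below is invented purely for illustration:

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a toy 3-node system. v(S) is the "performance"
# when only the nodes in S are intact (a made-up value function).
nodes = (0, 1, 2)

def v(S):
    S = frozenset(S)
    score = 0.0
    if 0 in S:
        score += 1.0              # node 0 contributes on its own
    if 1 in S and 2 in S:
        score += 2.0              # nodes 1 and 2 contribute only jointly
    return score

def shapley(i):
    """Average marginal contribution of node i over all lesion orders."""
    others = [n for n in nodes if n != i]
    total, N = 0.0, len(nodes)
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            weight = factorial(len(S)) * factorial(N - len(S) - 1) / factorial(N)
            total += weight * (v(S + (i,)) - v(S))
    return total

phi = [shapley(i) for i in nodes]
print([round(p, 6) for p in phi])  # [1.0, 1.0, 1.0]
```

Note the efficiency property: the contributions sum to v(all nodes) = 3, and the purely synergistic pair (1, 2) splits its joint contribution evenly, which single-site lesioning alone could not reveal.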

SeminarNeuroscienceRecording

Gene-free landscape models for development

Meritxell Sáez
Briscoe lab, Francis Crick Institute; IQS Barcelona
Jun 28, 2022

Fate decisions in developing tissues involve cells transitioning between a set of discrete cell states. Geometric models, often referred to as Waddington landscapes, are an appealing way to describe differentiation dynamics and developmental decisions. We consider the differentiation of neural and mesodermal cells from pluripotent mouse embryonic stem cells exposed to different combinations and durations of signalling factors. We developed a principled statistical approach using flow cytometry data to quantify differentiating cell states. Then, using a framework based on Catastrophe Theory and approximate Bayesian computation, we constructed the corresponding dynamical landscape. The result was a quantitative model that accurately predicted the proportions of neural and mesodermal cells differentiating in response to specific signalling regimes. Taken together, the approach we describe is broadly applicable for the quantitative analysis of differentiation dynamics and for determining the logic of developmental cell fate decisions.
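The geometric backbone of such landscape models can be sketched with the cusp catastrophe, the normal form for a binary fate decision: counting the minima of the potential as the control parameters vary shows how a signal can create or destroy fates. Parameter values here are illustrative.

```python
import numpy as np

# Cusp catastrophe V(x) = x**4/4 + a*x**2/2 + b*x: the normal-form
# landscape behind a two-attractor decision. Signals move (a, b); the
# available fates are the minima of V.
def minima(a, b):
    roots = np.roots([1.0, 0.0, a, b])                    # zeros of V'(x) = x^3 + a x + b
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(x for x in real if 3 * x ** 2 + a > 0)  # keep only V''(x) > 0

# One valley (a single fate) for a > 0; two valleys (a binary decision)
# once a crosses below zero.
print(len(minima(1.0, 0.0)), len(minima(-1.0, 0.0)))  # 1 2
```

In the approach described above, the positions of such minima and the signal-to-parameter mapping are fitted to flow cytometry data by approximate Bayesian computation.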

SeminarNeuroscienceRecording

Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation

Raoul-Martin Memmesheimer
University of Bonn, Germany
Jun 28, 2022

Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induce neuron exchange. The exchange can be described analytically by reduced, random walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
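The reduced random-walk picture can be caricatured in a few lines: two assemblies of fixed size whose members occasionally swap, so assembly identities persist while membership turns over. Assembly sizes and the swap rate below are invented for illustration.

```python
import random

random.seed(1)

# Random-walk caricature of assembly drift: two assemblies of 50 neurons
# each; on every step a randomly chosen pair of neurons from different
# assemblies may swap membership.
neurons = list(range(100))
assembly = {n: 0 if n < 50 else 1 for n in neurons}
initial = dict(assembly)
p_swap = 0.2

for _ in range(500):
    a, b = random.sample(neurons, 2)
    if assembly[a] != assembly[b] and random.random() < p_swap:
        assembly[a], assembly[b] = assembly[b], assembly[a]

# Assembly sizes (identities) are exactly conserved, while overlap with
# the initial membership decays below 1: the memory persists, the
# participating neurons do not.
size0 = sum(1 for n in neurons if assembly[n] == 0)
overlap = sum(1 for n in neurons if assembly[n] == initial[n]) / len(neurons)
print(size0, overlap)
```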

SeminarNeuroscienceRecording

Efficient Random Codes in a Shallow Neural Network

Rava Azeredo da Silveira
French National Centre for Scientific Research (CNRS), Paris
Jun 14, 2022

Efficient coding has served as a guiding principle in understanding the neural code. To date, however, it has been explored mainly in the context of peripheral sensory cells with simple tuning curves. By contrast, ‘deeper’ neurons such as grid cells come with more complex tuning properties which imply a different, yet highly efficient, strategy for representing information. I will show that a highly efficient code is not specific to a population of neurons with finely tuned response properties: it emerges robustly in a shallow network with random synapses. Here, the geometry of population responses implies that optimality obtains from a tradeoff between two qualitatively different types of error: ‘local’ errors (common to classical neural population codes) and ‘global’ (or ‘catastrophic’) errors. This tradeoff leads to efficient compression of information from a high-dimensional representation to a low-dimensional one. After describing the theoretical framework, I will use it to re-interpret recordings of motor cortex in behaving monkey. Our framework addresses the encoding of (sensory) information; if time allows, I will comment on ongoing work that focuses on decoding from the perspective of efficient coding.

SeminarNeuroscienceRecording

Canonical neural networks perform active inference

Takuya Isomura
RIKEN CBS
Jun 9, 2022

The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively exchanges signals with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks—featuring delayed modulation of Hebbian plasticity—can perform planning and adaptive behavioural control in the Bayes optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate implicit priors under which the agent’s neural network operates and identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.

SeminarNeuroscience

Malignant synaptic plasticity in pediatric high-grade gliomas

Kathryn Taylor
Stanford
May 24, 2022

Pediatric high-grade gliomas (pHGG) are a devastating group of diseases that urgently require novel therapeutic options. We have previously demonstrated that pHGGs directly synapse onto neurons and the subsequent tumor cell depolarization, mediated by calcium-permeable AMPA channels, promotes their proliferation. The regulatory mechanisms governing these postsynaptic connections are unknown. Here, we investigated the role of BDNF-TrkB signaling in modulating the plasticity of the malignant synapse. BDNF ligand activation of its canonical receptor, TrkB (encoded by the gene NTRK2), has been shown to be one important modulator of synaptic regulation in the normal setting. Electrophysiological recordings of glioma cell membrane properties, in response to acute neurotransmitter stimulation, demonstrate an inward current resembling AMPA receptor (AMPAR) mediated excitatory neurotransmission. Extracellular BDNF increases the amplitude of this glutamate-induced tumor cell depolarization and this effect is abrogated in NTRK2 knockout glioma cells. Upon examining tumor cell excitability using in situ calcium imaging, we found that BDNF increases the intensity of glutamate-evoked calcium transients in GCaMP6s-expressing glioma cells. Western blot analysis indicates the tumor's AMPAR properties are altered downstream of BDNF-induced TrkB activation in glioma. Cell membrane protein capture (via biotinylation) and live imaging of pH-sensitive GFP-tagged AMPAR subunits demonstrate an increase of calcium-permeable channels at the tumor's postsynaptic membrane in response to BDNF. We find that BDNF-TrkB signaling promotes neuron-to-glioma synaptogenesis as measured by high-resolution confocal and electron microscopy in culture and tumor xenografts. Our analysis of published pHGG transcriptomic datasets, together with brain slice conditioned medium experiments in culture, identifies the tumor microenvironment as the chief source of BDNF ligand.
Disruption of the BDNF-TrkB pathway in patient-derived orthotopic glioma xenograft models, both genetically and pharmacologically, results in increased overall survival and a reduced tumor proliferation rate. These findings suggest that gliomas leverage normal mechanisms of plasticity to modulate the excitatory channels involved in synaptic neurotransmission, and they reveal the potential to target the regulatory components of glioma circuit dynamics as a therapeutic strategy for these lethal cancers.

SeminarNeuroscienceRecording

Learning in/about/from the basal ganglia

Jonathan Rubin
University of Pittsburgh
May 24, 2022

The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed and computational results using an upward mapping to identify putative low-dimensional control ensembles that may be involved in tuning decision policy. I will also present some recent experimental results and theory, related to effects of extracellular ion dynamics, that run counter to the classical view of basal ganglia pathways and suggest a new interpretation of certain aspects of this framework. For those not so interested in the basal ganglia, I hope that the upward mapping approach and impact of extracellular ion dynamics will nonetheless be of interest!

SeminarNeuroscienceRecording

A draft connectome for ganglion cell types of the mouse retina

David Berson
Brown University
May 15, 2022

The visual system of the brain is highly parallel in its architecture. This is clearly evident in the outputs of the retina, which arise from neurons called ganglion cells. Work in our lab has shown that mammalian retinas contain more than a dozen distinct types of ganglion cells. Each type appears to filter the retinal image in a unique way and to relay this processed signal to a specific set of targets in the brain. My students and I are working to understand the meaning of this parallel organization through electrophysiological and anatomical studies. We record from light-responsive ganglion cells in vitro using the whole-cell patch method. This allows us to correlate directly the visual response properties, intrinsic electrical behavior, synaptic pharmacology, dendritic morphology and axonal projections of single neurons. Other methods used in the lab include neuroanatomical tracing techniques, single-unit recording and immunohistochemistry. We seek to specify the total number of ganglion cell types, the distinguishing characteristics of each type, and the intraretinal mechanisms (structural, electrical, and synaptic) that shape their stimulus selectivities. Recent work in the lab has identified a bizarre new ganglion cell type that is also a photoreceptor, capable of responding to light even when it is synaptically uncoupled from conventional (rod and cone) photoreceptors. These ganglion cells appear to play a key role in resetting the biological clock. It is just this sort of link, between a specific cell type and a well-defined behavioral or perceptual function, that we seek to establish for the full range of ganglion cell types. My research concerns the structural and functional organization of retinal ganglion cells, the output cells of the retina whose axons make up the optic nerve. Ganglion cells exhibit great diversity both in their morphology and in their responses to light stimuli. On this basis, they are divisible into a large number of types (>15). 
Each ganglion-cell type appears to send its outputs to a specific set of central visual nuclei. This suggests that ganglion cell heterogeneity has evolved to provide each visual center in the brain with pre-processed representations of the visual scene tailored to its specific functional requirements. Though the outline of this story has been appreciated for some time, it has received little systematic exploration. My laboratory is addressing in parallel three sets of related questions: 1) How many types of ganglion cells are there in a typical mammalian retina and what are their structural and functional characteristics? 2) What combination of synaptic networks and intrinsic membrane properties are responsible for the characteristic light responses of individual types? 3) What do the functional specializations of individual classes contribute to perceptual function or to visually mediated behavior? To pursue these questions, we label retinal ganglion cells by retrograde transport from the brain; analyze in vitro their light responses, intrinsic membrane properties and synaptic pharmacology using the whole-cell patch clamp method; and reveal their morphology with intracellular dyes. Recently, we have discovered a novel ganglion cell in rat retina that is intrinsically photosensitive. These ganglion cells exhibit robust light responses even when all influences from classical photoreceptors (rods and cones) are blocked, either by applying pharmacological agents or by dissociating the ganglion cell from the retina. These photosensitive ganglion cells seem likely to serve as photoreceptors for the photic synchronization of circadian rhythms, the mechanism that allows us to overcome jet lag. They project to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. Their temporal kinetics, threshold, dynamic range, and spectral tuning all match known properties of the synchronization or "entrainment" mechanism. 
These photosensitive ganglion cells innervate various other brain targets, such as the midbrain pupillary control center, and apparently contribute to a host of behavioral responses to ambient lighting conditions. These findings help to explain why circadian and pupillary light responses persist in mammals, including humans, with profound disruption of rod and cone function. Ongoing experiments are designed to elucidate the phototransduction mechanism, including the identity of the photopigment and the nature of downstream signaling pathways. In other studies, we seek to provide a more detailed characterization of the photic responsiveness and both morphological and functional evidence concerning possible interactions with conventional rod- and cone-driven retinal circuits. These studies are of potential value in understanding and designing appropriate therapies for jet lag, the negative consequences of shift work, and seasonal affective disorder.

SeminarPsychology

ItsAllAboutMotion: Encoding of speed in the human Middle Temporal cortex

Anna Gaglianese
Centre Hospitalier Universitaire Vaudois, University of Lausanne
May 3, 2022

The human middle temporal complex (hMT+) has a crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, questioning whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will show that this mechanism plays a role in evaluating multisensory responses for visual, tactile and auditory motion stimuli in hMT+.
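The distinction between direct speed tuning and separable spatiotemporal tuning can be made concrete with a toy model: under separable tuning the preferred temporal frequency is independent of spatial frequency, whereas a speed-tuned unit prefers a fixed ratio tf/sf. All tuning peaks and widths below are illustrative, not fitted values.

```python
import numpy as np

# Toy comparison of separable spatial x temporal frequency tuning versus
# direct speed tuning (peaks at 1 cyc/deg, 4 Hz, and 4 deg/s, invented).
sf = np.logspace(-1, 1, 201)      # spatial frequency (cycles/deg)
tf = np.logspace(-0.5, 1.5, 201)  # temporal frequency (Hz)
SF, TF = np.meshgrid(sf, tf)

sep = np.exp(-np.log2(SF / 1.0) ** 2) * np.exp(-np.log2(TF / 4.0) ** 2)
spd = np.exp(-np.log2((TF / SF) / 4.0) ** 2)   # tuned to speed = tf/sf

# Preferred temporal frequency at each spatial frequency: constant for
# the separable model, shifting with SF for the speed-tuned model.
pref_sep = tf[np.argmax(sep, axis=0)]
pref_spd = tf[np.argmax(spd, axis=0)]
print(np.allclose(pref_sep, pref_sep[0]), pref_spd[0] < pref_spd[-1])  # True True
```

Plotting such response maps in (sf, tf) space, a separable unit shows axis-aligned contours while a speed-tuned unit shows diagonally oriented ones, which is the signature the talk's analyses distinguish.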

SeminarNeuroscienceRecording

Optimization at the Single Neuron Level:​ Prediction of Spike Sequences and Emergence of Synaptic Plasticity Mechanisms

Matteo Saponati
Ernst-Strüngmann Institute for Neuroscience
May 3, 2022

Intelligent behavior depends on the brain’s ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, where the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, providing a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory motion signaling and recall in the visual system. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle to orchestrate learning and synaptic plasticity in single neurons.
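A heavily simplified caricature (not the paper's actual rule) of how pre-before-post potentiation shifts a neuron's spike towards the earliest input in a repeated sequence:

```python
# Two inputs, A at t = 10 ms and B at t = 15 ms, drive a leaky threshold
# unit. After each trial, synapses whose input preceded (or coincided
# with) the spike are potentiated. All numbers are invented.
w = {"A": 0.4, "B": 1.2}        # synaptic weights (spike threshold = 1.0)
times = {"A": 10, "B": 15}      # input arrival times within a trial (ms)
first_spike = []

for trial in range(30):
    v, spike_t = 0.0, None
    for t in range(30):
        for syn, t_in in times.items():
            if t == t_in:
                v += w[syn]
        if v >= 1.0 and spike_t is None:
            spike_t = t
        v *= 0.9                # leak
    first_spike.append(spike_t)
    for syn, t_in in times.items():
        if spike_t is not None and t_in <= spike_t:
            w[syn] += 0.05      # pre-before-post potentiation

# Early trials spike at B's time; once A's synapse crosses threshold the
# spike shifts to the first input of the sequence.
print(first_spike[0], first_spike[-1])  # 15 10
```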

SeminarNeuroscienceRecording

A transcriptomic axis predicts state modulation of cortical interneurons

Stephane Bugeon
Harris & Carandini's lab, UCL
Apr 26, 2022

Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses: stimuli suppressed cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states have less axon in layer 1, narrower spikes, lower input resistance and weaker adaptation as determined in vitro, and express more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.

SeminarPhysics of Life

Emergence of homochirality in large molecular systems

David Lacoste
ESPCI
Apr 21, 2022

The question of the origin of homochirality of living matter, or the dominance of one handedness for all molecules of life across the entire biosphere, is a long-standing puzzle in the research on the Origin of Life. In the fifties, Frank proposed a mechanism to explain homochirality based on the properties of a simple autocatalytic network containing only a few chemical species. Following this work, chemists struggled to find experimental realizations of this model, possibly due to a lack of proper methods to identify autocatalysis [1]. In any case, a model based on a few chemical species seems rather limited, because prebiotic earth is likely to have consisted of complex ‘soups’ of chemicals. To include this aspect of the problem, we recently proposed a mechanism based on certain features of large out-of-equilibrium chemical networks [2]. We showed that a phase transition towards a homochiral state is likely to occur as the number of chiral species in the system becomes large or as the amount of free energy injected into the system increases. Through an analysis of large chemical databases, we showed that there is no need for very large molecules for chiral species to dominate over achiral ones; it already happens when molecules contain about 10 heavy atoms. We also analyzed the various conventions used to measure chirality and discussed the relative chiral signs adopted by different groups of molecules [3]. We then proposed a generalization of Frank’s model for large chemical networks, which we characterized using random matrix theory. This analysis includes sparse networks, suggesting that the emergence of homochirality is a robust and generic transition. References: [1] A. Blokhuis, D. Lacoste, and P. Nghe, PNAS (2020), 117, 25230. [2] G. Laurent, D. Lacoste, and P. Gaspard, PNAS (2021) 118 (3) e2012741118. [3] G. Laurent, D. Lacoste, and P. Gaspard, Proc. R. Soc. A 478:20210590 (2022).
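Frank's original mechanism is easy to reproduce numerically: each enantiomer autocatalyses its own production while the two annihilate each other, so a tiny initial imbalance grows into near-complete homochirality. Rate constants and the initial imbalance below are illustrative.

```python
# Frank's (1953) model: enantiomers L and D each autocatalyse their own
# production at rate k, while L and D annihilate each other at rate g.
k, g, dt = 1.0, 1.0, 1e-3
L, D = 1.01, 1.00                 # slight initial excess of L

for _ in range(10000):            # forward Euler to t = 10
    dL = (k * L - g * L * D) * dt
    dD = (k * D - g * L * D) * dt
    L, D = L + dL, D + dD

# The symmetric state L = D is unstable: the difference L - D grows
# exponentially, driving the enantiomeric excess towards 1.
ee = (L - D) / (L + D)
print(round(ee, 3))  # 1.0
```

The generalization discussed in the abstract replaces this two-species network with large random reaction networks, where the same symmetry-breaking transition appears generically.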

SeminarNeuroscienceRecording

Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems

Horacio Rotstein
New Jersey Institute of Technology
Apr 12, 2022

Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituting building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history-dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and interactions) and we argue that these are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and we show that different types of models (e.g., spiking neural networks and rate models) can show the same type of resonance by qualitatively different mechanisms.
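The interaction of low- and high-pass filters that yields a band-pass (resonant) response can be seen already in the linear case, in the impedance of a membrane with one slow negative-feedback current; all parameters below are illustrative.

```python
import numpy as np

# Impedance of a linear membrane with a slow feedback current:
# Z(w) = 1 / (g_L + i*w*C + g_w / (1 + i*w*tau_w)).
# The RC membrane is a low-pass filter, the slow current a high-pass
# one; together they produce a band-pass profile.
C, g_L = 1.0, 0.1            # capacitance (nF), leak conductance (uS)
g_w, tau_w = 0.5, 100.0      # feedback conductance (uS), time constant (ms)

freqs = np.linspace(0.1, 100.0, 1000)   # Hz
omega = 2 * np.pi * freqs / 1000.0      # rad/ms
Z = 1.0 / (g_L + 1j * omega * C + g_w / (1.0 + 1j * omega * tau_w))

# |Z| peaks at an intermediate frequency: the signature of resonance.
peak = freqs[np.argmax(np.abs(Z))]
print(round(peak, 1))
```

Setting g_w = 0 removes the feedback and leaves a monotonically decaying, purely low-pass impedance; nonlinear and spiking mechanisms discussed in the talk build on this same filter interaction.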

ePoster

Assessment of neurorestorative properties of intranasally administered colostrum-derived exosomes in the periventricular leukomalacia model

Serife Beyza Türe, Ceren Perihan Gonul, Coskun Armagan, Yusuf Guducu, Bora Tastan, Funda Erdogan, Sermin Genc

FENS Forum 2024

ePoster

Cinchonidine, an alkaloid derived from Cinchona, demonstrates neuroprotective properties against ischemic brain injury by enhancing cellular protection in cerebral endothelial cells

Cheng-ying Hsieh, Kuan-Jung Lu

FENS Forum 2024

ePoster

Integrative properties of bursting vs. regular firing subiculum neurons investigated via dynamic clamp

Melinda Gazdik, Petra Varró, Attila Szűcs

FENS Forum 2024

ePoster

Molecular changes underlying decay of sensory responses and enhanced seizure propensity in peritumoral neurons

Elisa De Santis, Elena Tantillo, Marta Scalera, Nicolò Meneghetti, Chiara Cerri, Michele Menicagli, Alberto Mazzoni, Mario Costa, Chiara Maria Mazzanti, Eleonora Vannini, Matteo Caleo

FENS Forum 2024

ePoster

Perceptual filling-in reflects response properties of early visual cortex

Anna Razafindrahaba, Kenshu Koiso, Vincent van de Ven, Peter de Weerd, Federico de Martino, Mark Roberts

FENS Forum 2024

ePoster

Polymeric nanoparticles as drug delivery tools for brain degenerative disorders: In vitro assessment and drug release properties

Rebecca Maher, Almudena Moreno Borrallo, Eduardo Ruiz-Hernandez, Andrew Harkin

FENS Forum 2024

ePoster

Sensory processing and membrane properties of external globus pallidus neurons in dopamine-depleted mice

Feng Du, Maya Ketzef

FENS Forum 2024

ePoster

An in silico study of local and global properties in the propagation of cortical slow waves

Javier Alegre-Cortés, Maurizio Mattia, Ramón Reig

FENS Forum 2024

ePoster

Tyrosine, a non-essential amino acid, has anti-depressive properties in chronic immobilization stress-induced mouse depression model

Hyun Joon Kim, Jae Soon Kang, Ji Hyeong Baek, Hye Jin Chung, Dong Kun Lee, Sang Won Park

FENS Forum 2024

ePoster

In vitro assessment of the neural regenerative properties of regenerative macrophages (REMaST®)

Giulia Pruonto, Ludovica Sagripanti, Ilaria Barone, Alessia Amenta, Sissi Dolci, Eros Rossi, Loris Mannino, Francesca Ciarpella, Nicola Piazza, Benedetta Savino, Patrizia Bossolasco, Guido Francesco Fumagalli, Massimo Locati, Ilaria Decimo, Francesco Bifari

FENS Forum 2024