All Content

46 items

Seminar

Decoding stress vulnerability

Stamatina Tzanoulinou· University of Lausanne, Faculty of Biology and Medicine, Department of Biomedical Sciences

Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can have adverse consequences for physical and mental health. Social stress in particular is a highly prevalent traumatic experience, present in contexts such as war, bullying, and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to severe stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. In this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.

Date

Feb 20, 2026

Seminar

Untitled Seminar

Prof. Hiryu Shizuko· Doshisha University, Kyoto, Japan

Date

Jan 21, 2026

Dr. Yao Chen's Laboratory in the Department of Neuroscience at Washington University School of Medicine is seeking a motivated and curious scientist for a full-time senior scientist position. Our laboratory conducts fundamental research to understand how the dynamics of molecular signals contribute to neuromodulator actions and sleep functions. We employ a wide variety of techniques ex vivo and in vivo, including advanced microscopy, electrophysiology, molecular biology, and behavior analysis. The principal investigator is committed to fostering a lab culture that promotes equity, kindness, rigor, and creativity. The person in this position will collaborate on the design, conduct, and reporting of research projects.

Date

Jan 14, 2026

Gain expertise in rodent electrophysiology and behavior while studying thalamic cellular and network mechanisms of sleep and memory consolidation. We have several openings to study the mechanisms of synaptic plasticity and cellular spike dynamics that contribute to episodic memory consolidation during sleep. Trainees will gain expertise in systems neuroscience using electrophysiology (cell-ensemble and LFP recording) and behavior in rats, as well as expertise in the thalamic molecular and cellular mechanisms underlying normal and disrupted sleep-dependent memory consolidation and in the use of non-invasive technologies to regulate them. Some of the projects are part of collaborations with Harvard University and the Scripps Florida Institute.

Date

Jan 14, 2026

The Oldenburg lab combines optics, multiphoton optogenetics, calcium imaging, and computation to understand the motor system. The overall goal of the Oldenburg Lab is to understand the causal relationship between neural activity and motor actions. We use advanced optical techniques such as multiphoton holographic optogenetics to control neural activity with an incredible degree of precision, writing complex patterns of activity to distributed groups of cells. Only by writing activity into the brain at the scale at which it naturally occurs (individual neurons firing distinct patterns of action potentials) can we test theories of what population activity means. We read out the effects of these precise manipulations locally with calcium imaging, in neighboring brain regions with electrophysiology, and at the 'whole animal level' through changes in behavior. We are looking for curious, motivated, and talented people with a wide range of skill sets to join our group at all levels, from technician to postdoc.

Date

Jan 14, 2026

The Williamson Laboratory investigates the organization and function of auditory cortical projection systems in behaving mice. We use a variety of state-of-the-art tools to probe the neural circuits of awake mice – these include two-photon calcium imaging and high-channel count electrophysiology (both with single-cell optogenetic perturbations), head-fixed behaviors (including virtual reality), and statistical approaches for neural characterization. Details on the research focus and approaches of the laboratory can be found here: https://www.williamsonlaboratory.com/research/

Date

Jan 14, 2026

Applications are invited for a postdoctoral researcher position funded by the Wellcome Trust. The successful applicant will pursue a research project aimed at understanding how brain-wide neural circuits give rise to flexible cognitive behaviours in mice. The techniques employed will include chronic in vivo two-photon calcium imaging of multiple cell classes, targeted optogenetic manipulations, viral-vector-based functional circuit mapping, and quantitative mouse behaviour. The successful applicant will benefit from the collaborative culture of the Centre for Developmental Neurobiology at King's College London and will have the opportunity to develop collaborations with groups studying animal models of brain disorders. Candidates must have a strong research track record. Experience with in vivo two-photon imaging, rodent behaviour, and analysis of complex datasets will be highly valued. Candidates with programming skills are encouraged to apply.

Date

Jan 14, 2026

The López-Bendito Lab is interested in understanding and uncovering the principles underlying the development of sensory circuits, with emphasis on the role of the thalamus in the development of cortical sensory maps. Furthermore, we are developing strategies for circuit restoration in sensory-deprived mice. We are seeking two highly motivated postdoctoral scientists to investigate the cellular and molecular mechanisms involved in glia-to-neuron reprogramming in sensory circuits. This three-year project, funded by La Caixa Foundation, aims to understand the rules for region-specific reprogramming, with the ultimate goal of recovering sensory thalamocortical circuits in sensory-deprived mice. Applicants should have a proven track record and an independent working style.

Date

Jan 14, 2026

Sleep expert with a Ph.D. degree in Neuroscience, Psychology, Biomedical Engineering or similar.

Date

Jan 14, 2026

Join our unique transatlantic PhD program in neuroscience! The International Max Planck Research School (IMPRS) for Brain and Behavior is a unique transatlantic collaboration between two Max Planck Neuroscience institutes – the Max Planck-associated research center caesar and the Max Planck Florida Institute for Neuroscience – and the partner universities, the University of Bonn and Florida Atlantic University. It offers a fully funded international PhD program in neuroscience in either Bonn, Germany, or Jupiter, Florida. We offer an exciting opportunity for outstanding Bachelor's and/or Master's degree holders (or equivalent) from any field (life sciences, mathematics, physics, computer science, engineering, etc.) to be immersed in a stimulating environment that provides novel technologies for elucidating the function of brain circuits, from molecules to animal behavior. The faculty's comprehensive and diverse expertise in exploring brain-circuit function with advanced imaging and optogenetic techniques, combined with thorough training in fundamental neurobiology, will equip students to pursue successful independent research careers. Apply to Bonn, Germany by November 15, 2020 or to Florida, USA by December 1, 2020!

Date

Jan 14, 2026

The lab is looking for a motivated postdoctoral research fellow as part of our multi-university project to investigate the neural underpinnings of causal inference (see https://drugowitschlab.hms.harvard.edu/news/postdoctoral-predoctoral-and-data-scientist-positions-multi-university-project). The successful candidate will work on robust implementations of neural data analysis and visualization methods, based on contemporary approaches in probabilistic machine learning, and will develop new data analysis methods, again based on probabilistic machine learning, that are suitable for the large, high-dimensional neural datasets generated by the project. They will also lead the initial setup and continual customization of the DataJoint-based neural data sharing and analysis platform that will be used by all project members. We expect suitable applicants to have a doctorate in a related discipline (including neuroscience, computer science, engineering, physics, or similar). We also expect good knowledge of the Python programming language, as well as experience with Unix/Linux and the use of databases, in particular SQL. Expertise in handling and analyzing high-dimensional data, especially neural data, would be a plus, as would experience in the implementation and use of probabilistic machine learning methods/models. The successful candidate will also enjoy, and be good at, working in teams at the interface of experimental and theoretical neuroscience. If interested, please contact Jan Drugowitsch (jan_drugowitsch@hms.harvard.edu) or apply through Indeed (https://www.indeed.com/job/postdoctoral-predoctoral-and-data-scientist-positions-causal-inference-e2d1b547bec4c131), making explicit in the cover letter which position you are applying for.

Date

Jan 14, 2026

The Jaramillo lab investigates the neural basis of expectation, attention, decision-making and learning in the context of sound-driven behaviors in mice. Projects during the postdoctoral fellowship will study these cognitive processes by monitoring and manipulating neuronal activity during adaptive behaviors with cell-type and pathway specificity using techniques such as two-photon microscopy (including mesoscope imaging), high-density electrophysiology (using Neuropixels probes), and optogenetic manipulation of neural activity.

Date

Jan 14, 2026

Seminar

sensorimotor control, movement, touch, EEG

Marieva Vlachou· Institut des Sciences du Mouvement Etienne Jules Marey, Aix-Marseille Université/CNRS, France

Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.

Date

Dec 19, 2025

Seminar

Consciousness at the edge of chaos

Martin Monti· University of California Los Angeles

Over the last 20 years, neuroimaging and electrophysiology techniques have become central to understanding the mechanisms that accompany loss and recovery of consciousness. Much of this research is performed in the context of healthy individuals with neurotypical brain dynamics. Yet, a true understanding of how consciousness emerges from the joint action of neurons has to account for how severely pathological brains, often showing phenotypes typical of unconsciousness, can nonetheless generate a subjective viewpoint. In this presentation, I will start from the context of Disorders of Consciousness and discuss recent work aimed at finding generalizable signatures of consciousness that are reliable across a spectrum of brain electrophysiological phenotypes, focusing in particular on the notion of edge-of-chaos criticality.

Date

Dec 13, 2025

Seminar

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco· Hertie Institute for Clinical Brain Research, Germany

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
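
As a point of reference for the framework (a textbook-style sketch in the spirit of Rao and Ballard's hierarchical predictive coding, not a description of the specific models in this talk), the core computation can be written as a prediction-error update in which each level's representation $r_\ell$ is driven by the bottom-up error it should explain and penalized by the top-down error it incurs:

\[
e_\ell = r_\ell - f\!\left(U_{\ell+1}\, r_{\ell+1}\right),
\qquad
\tau\,\dot{r}_\ell = U_\ell^{\top} e_{\ell-1} - e_\ell ,
\]

where level 0 is the sensory input, $U_\ell$ are top-down (generative) weights, $f$ is an elementwise nonlinearity, and gain factors and prior terms are omitted for brevity.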

Date

Dec 10, 2025

Seminar

Developmental emergence of personality

Bassem Hassan· Paris Brain Institute, ICM, France

The Nature versus Nurture debate has generally been viewed through the lens of a genome-versus-experience dichotomy and has dominated our thinking about behavioral individuality and personality traits. In contrast, the role of nonheritable noise during brain development in behavioral variation is understudied. Using the Drosophila melanogaster visual system, I will discuss our efforts to dissect how individuality in circuit wiring emerges during development, and how that helps generate individual behavioral variation.

Date

Dec 10, 2025

Seminar

A human stem cell-derived organoid model of the trigeminal ganglion

Oliver Harschnitz· Human Technopole, Milan, Italy

Date

Dec 8, 2025

Seminar

Top-down control of neocortical threat memory

Prof. Dr. Johannes Letzkus· Universität Freiburg, Germany

Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for processing of experience-dependent top-down information during the perception of threat, one of the most fundamentally important forms of sensation.

Date

Nov 12, 2025

Seminar

Biomolecular condensates as drivers of neuroinflammation

Steven Boeynaems· Department of Molecular and Human Genetics, Baylor College of Medicine Duncan Neurological Research Institute, Texas Children's Hospital, USA

Date

Nov 4, 2025

Thalamic networks, at the core of thalamocortical and thalamosubcortical communication, underlie processes of perception, attention, memory, emotion, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and structural and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation for understanding the heterogeneous disruption of these networks in sleep disorders, attention deficits, and the cognitive and affective impairments of schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.

Date

Nov 3, 2025

The cortex comprises many neuronal types, which can be distinguished by their transcriptomes: the sets of genes they express. Little is known about the in vivo activity of these cell types, particularly as regards the structure of their spike trains, which might provide clues to cortical circuit function. To address this question, we used Neuropixels electrodes to record layer 5 excitatory populations in mouse V1, then transcriptomically identified the recorded cell types. To do so, we performed a subsequent recording of the same cells using 2-photon (2p) calcium imaging, identifying neurons between the two recording modalities by fingerprinting their responses to a “zebra noise” stimulus and estimating the path of the electrode through the 2p stack with a probabilistic method. We then cut brain slices and performed in situ transcriptomics to localize ~300 genes using coppaFISH3d, a new open source method, and aligned the transcriptomic data to the 2p stack. Analysis of the data is ongoing, and suggests substantial differences in spike time coordination between ET and IT neurons, as well as between transcriptomic subtypes of both these excitatory types.

Date

Oct 29, 2025

Seminar

Generation and use of internal models of the world to guide flexible behavior

Antonio Fernandez-Ruiz· Cornell University, USA

Date

Oct 27, 2025

Seminar

NF1 exon 51 alternative splicing: functional implications in Central Nervous System (CNS) Cells

Charoula Peta· Biomedical research Foundation of the Academy of Athens

Date

Oct 22, 2025

Functional connectomics reveals general wiring rule in mouse visual cortex

Date

Oct 21, 2025

Conference

COSYNE 2025

The COSYNE 2025 conference was held in Montreal with post-conference workshops in Mont-Tremblant, continuing to provide a premier forum for computational and systems neuroscience. Attendees exchanged cutting-edge research in a single-track main meeting and in-depth specialized workshops, reflecting Cosyne's mission to understand how neural systems function.

Date

Mar 27, 2025

Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions, and has established itself as one of the most renowned conferences worldwide in this field.

Date

Sep 29, 2024

Organised by FENS in partnership with the Austrian Neuroscience Association and the Hungarian Neuroscience Society, the FENS Forum 2024 took place on 25–29 June 2024 in Vienna, Austria. The FENS Forum is Europe's largest neuroscience congress, covering all areas of neuroscience from basic to translational research.

Date

Jun 25, 2024

ePoster

Wake-like Skin Patterning and Neural Activity During Octopus Sleep

Tomoyuki Mano, Aditi Pophale, Kazumichi Shimizu, Teresa Iglesias, Kerry Martin, Makoto Hiroi, Keishu Asada, Paulette García Andaluz, Thi Thu Van Dinh, Leenoy Meshulam, Sam Reiter

While sleeping, many vertebrate groups alternate between at least two sleep stages: rapid eye movement (REM) and slow wave sleep (SWS), characterized in part by wake-like and synchronous brain activity, respectively. Sleep stage alternation has been implicated experimentally in learning and memory function [1], and has motivated several techniques for training artificial neural networks [2]. If the functions ascribed to two-stage sleep are truly general, one might expect to find similar phenomena outside the vertebrate lineage. Here we delineate neural and behavioral correlates of two-stage sleep in octopuses, marine invertebrates which evolutionarily diverged from vertebrates ~550 MYA and have independently evolved large brains and behavioral sophistication. Octopus sleep is rhythmically interrupted by ~60-second bouts of pronounced body movements and rapid changes in their neurally controlled skin patterns. We show that this constitutes a distinct 'active' sleep stage, which is homeostatically regulated, rapidly reversible, and accompanied by an increased arousal threshold. Neuropixels recordings from the octopus central brain reveal that local field potential (LFP) activity during active sleep resembles that of waking. LFP activity differs across brain regions, with the strongest activity during active sleep seen in the Superior Frontal and Vertical lobes, anatomically connected regions associated with learning and memory function. During 'quiet' sleep, these regions are relatively silent but generate LFP oscillations resembling mammalian sleep spindles in frequency and duration. Computational analysis reveals the rich skin pattern dynamics of active sleep, which move through states strongly resembling waking skin patterns. The range of similarities with vertebrates implies that aspects of two-stage sleep in octopuses may represent convergent features of complex cognition.

Date

Mar 12, 2023

ePoster

Tuned inhibition explains strong correlations across segregated excitatory subnetworks

Matthew Getz, Gregory Handy, Alex Negrón, Brent Doiron

Understanding the basis of shared, across-trial fluctuations in neural activity in mammalian cortex is critical to uncovering the nature of information processing in the brain. This correlated variability has often been related to the structure of cortical connectivity, since variability not accounted for by signal changes likely arises from local circuit inputs. However, recent recordings from segregated networks of excitatory neurons in mouse primary visual cortex (V1) complicate this relationship: despite weak cross-network connection probability, noise correlations were significantly larger than one would expect. We aim to explore possible circuit mechanisms responsible for these enhanced positive correlations through biologically motivated cortical network models, with the hypothesis that they arise from unobserved inhibitory neurons. In particular, we consider networks with weakly interconnected excitatory populations, but either global or subpopulation-specific inhibitory populations. We then ask how correlations can be enhanced or diminished via the strength of outgoing and incoming connections to these inhibitory populations. By performing a pathway expansion of the covariance matrix, we find that a single inhibitory population with sufficiently strong I-to-E connections can lead to stronger than expected positive correlations across excitatory populations. However, this result is highly parameter dependent. When considering an inhibition-stabilized network (ISN), the viable parameter regime shrinks dramatically into a narrow band close to the edge of stability. We find that both non-ISN and ISN regimes can recover the ability to robustly explain the experimental results by allowing for two tuned inhibitory populations, meaning that each inhibitory population preferentially connects to one of the two excitatory populations. Our results therefore imply that complexity in excitation should be mirrored by complexity in the structure of inhibition.
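
For readers unfamiliar with the method, the pathway expansion can be illustrated with a minimal static linear network (a generic sketch under simplifying assumptions, not the exact model of the abstract): if population activity obeys $x = Wx + \xi$ with effective connectivity $W$ (spectral radius below 1) and private noise of covariance $D$, then

\[
C \;=\; (I - W)^{-1} D \,(I - W)^{-\top} \;=\; \sum_{k,\,l \ge 0} W^{k} D \,\bigl(W^{\top}\bigr)^{l},
\]

so the $(k,l)$ term collects the contribution of synaptic pathways of length $k$ into one neuron and length $l$ into the other. Terms routed through a shared inhibitory population carry two negative weights and therefore contribute positively, which is one way strong I-to-E connections can boost correlations between weakly coupled excitatory populations.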

Date

Mar 12, 2023

Distinguishing near and far visual cues is an essential computation that animals must carry out to guide behavior using vision. When animals move, self-motion creates motion parallax — an important but poorly understood source of depth information — whereby the speed of optic flow generated by self-motion depends on the depth of visual cues. This enables animals to estimate depth by comparing visual motion and self-motion speeds. As neurons in the mouse primary visual cortex (V1) are broadly modulated by locomotion, we hypothesized that they may integrate visual- and locomotion-related signals to estimate depth from motion parallax. To test this hypothesis, we designed a virtual reality (VR) environment for mice, where visual cues were presented at different virtual distances from the mouse and motion parallax was the only cue for depth, and recorded neuronal activity in V1 using two-photon calcium imaging. We found that the majority of excitatory neurons in layer 2/3 of V1 were selective for virtual depth. Neurons with different depth preferences were spatially intermingled, with nearby cells often tuned for disparate depths. Moreover, depth tuning could not be fully accounted for by either running speed or optic flow speed tuning in isolation, but arose from the integration of both signals. Specifically, depth selectivity of V1 neurons was explained by the ratio of preferred running and optic flow speeds. Finally, many neurons responded selectively to visual stimuli presented at a specific retinotopic location and virtual depth, demonstrating that during active locomotion V1 neuronal responses can be characterized by three-dimensional receptive fields. These results challenge the traditional view of V1 as a feed-forward filter bank, and suggest that the widespread modulation of V1 neurons by locomotion and other movements plays an essential role in estimation of depth from motion parallax.
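
The geometric relation behind this computation is worth stating explicitly (a back-of-the-envelope account of motion parallax, not the authors' encoding model): for a cue at distance $d$ viewed roughly perpendicular to the direction of translation, self-motion at speed $v$ generates optic flow with angular speed

\[
\omega \approx \frac{v}{d}, \qquad \text{so} \qquad d \approx \frac{v}{\omega}.
\]

A neuron jointly tuned to a preferred running speed $v^{*}$ and a preferred optic-flow speed $\omega^{*}$ is therefore, in effect, tuned to the virtual depth $d^{*} \approx v^{*}/\omega^{*}$, consistent with the finding that depth selectivity was explained by the ratio of preferred running and optic-flow speeds.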

Date

Mar 12, 2023

ePoster

Visuomotor Association Orthogonalizes Visual Cortical Population Codes

Samuel Failor, Matteo Carandini, Kenneth Harris

Stimuli trigger a pattern of activity across neurons in cortex, whose firing rates define a stimulus's representation in a high-dimensional vector space. Learning a visuomotor task can affect the responses of visual cortical neurons, but how and why training modifies population-level representations is unclear. One hypothesis is that representational plasticity in visual cortex facilitates visuomotor associations by downstream motor systems. Learning systems exhibit "inductive biases", meaning they form some stimulus-motor associations more easily than others. An animal's inductive biases presumably reflect its neuronal representations; its ability to form distinct motor associations for different stimuli depends on the representational similarity of the stimuli. Thus, the plasticity of sensory cortical representations may change inductive bias: for an animal to make different associations to two stimuli, the cortical representations of the stimuli must differentiate, such as if the evoked firing vectors were orthogonalized. A second hypothesis is that task training increases the fidelity of stimulus coding in sensory cortex, which improves decoding accuracy by downstream regions. However, this hypothesis presupposes that the population code in naive cortex suffers from low fidelity, which recent recordings of large cortical populations have questioned. We used two-photon calcium imaging to study how the tuning of V1 populations changes after mice learn to associate opposing actions with differently oriented gratings. Training did not improve the fidelity of stimulus coding, as it was already perfect in naive animals thanks to a subpopulation of highly reliable neurons. Instead, training caused the population's responses to motor-associated stimuli to become more orthogonal. The basis of this training-evoked orthogonalization was the sparsening of stimulus representations, an effect which could be summarized by a simple nonlinear transformation of naive neuronal firing rates and whose convexity was largest for motor-associated stimuli.
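
The link between sparsening and orthogonalization can be illustrated numerically. The sketch below uses hypothetical firing rates (not the study's data) and applies a convex pointwise transformation, here a power law with exponent above 1, to two overlapping nonnegative population vectors, then reports their cosine similarity:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "naive" population responses to two stimuli:
# a shared component plus stimulus-specific components (nonnegative rates).
n_neurons = 500
shared = rng.gamma(shape=2.0, scale=1.0, size=n_neurons)
r1 = shared + rng.gamma(2.0, 1.0, n_neurons)
r2 = shared + rng.gamma(2.0, 1.0, n_neurons)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Convex exponents sparsen the code: large responses are emphasized and small
# ones suppressed, which typically pushes the two representations toward
# orthogonality (the cosine similarity falls as the exponent grows).
for p in [1.0, 2.0, 4.0]:
    print(f"exponent {p}: cosine similarity = {cosine(r1 ** p, r2 ** p):.3f}")

This is only a caricature of the reported effect; in the study the transformation was estimated from the data, and its convexity was largest for the motor-associated stimuli.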

Date

Mar 12, 2023

When foraging in dynamic and uncertain environments, animals can benefit from basing their decisions on smart inferences about hidden properties of the world. Typical theoretical approaches for understanding the strategies that animals use in such settings combine Bayesian inference and value iteration to derive optimal behavioral policies that maximize total reward given changing beliefs about the environment. However, specifying these beliefs requires infinite numerical precision; with limited resources, this problem can no longer be decomposed into the separate steps of optimizing inference and optimizing action selection. To understand the space of behavioral policies in this constrained setting, we enumerate and evaluate all possible behavioral programs that can be constructed from just a handful of states. We show that only a small fraction of the top-performing programs can be constructed by approximating Bayesian inference; the remaining programs are structurally or even functionally distinct from Bayesian. To assess structural and functional relationships among all programs, we developed novel tree-embedding algorithms; these embeddings, which are capable of extracting different relational structures within the program space, reveal that nearly all good programs are closely connected through single algorithmic “mutations”. We demonstrate how one can use such relational structures to efficiently search for good solutions via an evolutionary algorithm. Moreover, these embeddings reveal that the diversity of non-Bayesian behaviors originates from a handful of key mutations that broaden the functional repertoire within the space of good programs. The fact that this diversity of non-optimal behavior does not significantly compromise performance suggests that these same strategies might generalize across tasks.

Date

Mar 12, 2023

ePoster

The vanishing dopamine in Parkinson's disease

Chaitanya Chintaluri & Tim P Vogels

Parkinson's disease (PD), characterized by the absence of dopamine in the striatum [1], is caused by the death of the substantia nigra pars compacta dopamine (SNcDA) neurons in the midbrain. This cell loss is attributed to irreparable damage from a dysregulation cascade originating from excess cytosolic dopamine [2]. However, it is unresolved whether dopamine dysregulation in SNcDA neurons themselves is the cause of PD or merely a symptom. Here, we introduce a theory of specialized non-causal action potentials that serve metabolic homeostasis, called 'metabolic spikes', which can account for the spontaneous activity observed in many neuron types, including SNcDA neurons. We propose that loss of these metabolic spikes in SNcDA neurons can account for both the cause of PD and the subsequent dopamine dysregulation. Neurons, presumably in anticipation of synaptic inputs, keep their ATP levels at a maximum, such that they are ATP-surplus/ADP-scarce during synaptic quiescence. With ADP availability as the rate-limiting step, ATP production stalls in their mitochondria when energy consumption is low, leading to the formation of toxic reactive oxygen species (ROS). Under these circumstances, metabolic spikes serve to restore ATP production and relieve ROS toxicity. In a metabolism-coupled model of SNcDA neurons that sense ROS and initiate spikes, we identified three categories of deficits that could decrease metabolic spikes and consequently deplete the dopamine tone, as seen in PD. Importantly, in PD such a lowered extracellular dopamine level is misread by D2 autoreceptors and dopamine synthesis is increased. With dopamine vesicles already full, the excess dopamine produces the disruptive aldehyde DOPAL, leading to dysregulation and ultimately cell death. Metabolic spikes, though relevant for cellular health, may thus be an integrated neuronal mechanism that operates in synergy with synaptic integration and forms a basic principle of network dynamics and behaviour, as exemplified in PD.

Date

Mar 12, 2023

ePoster

Variable syllable context depth in Bengalese finch songs: A Bayesian sequence model

Noémi Éltető, Lena Veit, Avani Koparkar, Peter Dayan

Birdsong is an important model for vocal learning and sequential motor behavior. Similarly to human language, songs, notably those of Bengalese finches and canaries, exhibit higher-order sequence structure, meaning that the statistics of one syllable may depend on a number of previous syllables. However, this number (the context depth) varies in a manner that has challenged previous formal approaches. Here we used a hierarchical non-parametric Bayesian sequence model (based on Teh, 2006; Elteto et al., 2022) that seamlessly combines predictive information from shorter and longer contexts of previous syllables, weighing them proportionally to their predictive power. We fit our model to songs of 8 Bengalese finches, each with more than 300 song bouts (Veit et al., 2021). The model inferred the context depth, showing that it varied substantially: some syllables depended on just one deterministic predecessor, while others depended on more than 10 previous syllables. Underlying this variability were syllables forming alternating and repeating chunks, i.e., strings of fixed subsequences. When fitted at the chunk level, our model revealed different chunk motifs that characterize how bouts typically start, unfold, and end. The model was also able to predict the flexibility with which birds can learn to switch between syllable transitions based on external cues.
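
A stripped-down version of the idea (an interpolated back-off sketch, rather than the full hierarchical non-parametric model used in this work) makes the notion of variable context depth explicit. The predictive distribution for the next syllable $s$ given a length-$k$ context $u$ mixes the empirical estimate at that depth with the prediction from the shorter context $u'$ obtained by dropping the oldest syllable:

\[
P_k(s \mid u) \;=\; \lambda_k(u)\,\hat{P}_k(s \mid u) \;+\; \bigl(1 - \lambda_k(u)\bigr)\,P_{k-1}(s \mid u'),
\]

with the recursion bottoming out at the marginal syllable distribution $P_0(s)$. When a context is highly predictive, its weight $\lambda_k(u)$ is large and the effective context depth is long; when it adds little, the model falls back on shorter histories, which is how some syllables can depend on a single predecessor while others depend on more than 10.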

Date

Mar 12, 2023

Network models are often designed to capture selective aspects of cortical circuits. On one end, mechanistic models such as balanced spiking networks resemble activity regimes observed in data, but are often limited to simple computations. On the other end, functional models like trained deep networks can show comparable performance and dynamical motifs, but are far removed from experimental physiology. Here, we put forth a new framework for excitatory-inhibitory spiking networks which retains key properties of both mechanistic and functional models. Based on previous studies of the geometry of spike-coding networks, we consider a population of spiking neurons with low-rank connectivity, allowing each neuron's threshold to be cast as a boundary in a space of population modes, or latent variables. Each neuron's boundary divides this latent space into subthreshold and suprathreshold areas, which determines its contribution to the input-output function of the network. Then, incorporating Dale's law as a connectivity constraint, we demonstrate how a network of inhibitory (I) neurons forms a convex, stable boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, unstable boundary. Combining the two, we show that stable dynamics arise at the crossing of the E and I boundaries. The resultant E/I networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical dynamics. Moreover, the latent variables can be mapped onto a constrained optimization problem, and are capable of universal function approximation. The combination of these dynamical and functional properties leads to unique insights, including specified computational roles for E/I balance and Dale's law. Finally, the intuitive geometry of the representations, plus the link to constrained optimization, makes our framework a promising candidate for scalable and interpretable computation in biologically plausible spiking networks.
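
The boundary picture follows from the standard spike-coding-network construction (sketched here in its simplest form; the presented framework may differ in detail): with decoding weights $D_i$ and latent readout $y(t) = \sum_i D_i r_i(t)$ tracking a signal $x(t)$, each neuron's voltage is a projection of the coding error and its threshold defines a hyperplane in the latent space,

\[
V_i(t) = D_i^{\top}\bigl(x(t) - y(t)\bigr),
\qquad
\text{spike when } V_i(t) > T_i = \tfrac{1}{2}\lVert D_i \rVert^{2},
\]

so the subthreshold region is the intersection of the corresponding half-spaces. For inhibitory readout weights this region is convex and each spike pushes the latent state back inside (stable); flipping the sign for excitatory neurons yields a concave, unstable boundary, and stable dynamics arise where the E and I boundaries cross.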

Date

Mar 12, 2023

ePoster

Traveling UP states in the post-subiculum reveal an anatomical gradient of intrinsic properties

Dhruv Mehrotra, Daniel Levenstein, Adrian Duszkiewicz, Sam Booker, Angelika Kwiatkowska, Adrien Peyrache

Cortical activity is characterized by state-specific dynamics arising from the interplay between connectivity, cellular diversity, and intrinsic properties. During non-rapid eye movement (NREM) sleep, cortical population activity alternates between periods of neuronal firing ("UP" states) and neuronal silence ("DOWN" states). Patterns of neuronal activity at DOWN-to-UP (DU) transitions have functional relevance beyond sleep: they are related to sensory coding during wakefulness and support homeostatic processes and memory consolidation. Despite this functional importance, the factors that organize these spiking patterns remain unknown, although mechanisms relying on network connectivity or intrinsic excitability have been proposed. To elucidate the mechanisms that organize spontaneous activity, we recorded populations of neurons in the head-direction cortex (HDC, i.e., the post-subiculum), where the behavioral correlates of most neurons are well accounted for. Neuronal tuning to HD was independent of anatomical position. However, while UP-DOWN (UD) transitions were synchronous along the dorsoventral (DV) axis, we observed sequential activation of neurons at DU transitions. To understand the mechanisms underlying these traveling waves at UP-state onset, we built a computational model with a linear array of recurrently connected adapting units and compared the effects of different biophysical gradients. We found that, unlike gradients in local connectivity, excitability/input, and adaptive current, a gradient in the rectifying current Ih uniquely reproduced the experimental observations and predicted a yet-unobserved relationship between UP onset and post-DOWN rebound activity. Subsequent ex vivo intracellular recordings confirmed the predicted DV gradient of Ih in HDC. In conclusion, precisely organized spontaneous population activity patterns may be independent of circuit features and sensory coding, and may instead reflect only intrinsic neuronal properties. Yet the resulting traveling waves have the potential to anatomically segment computation in output structures such as the medial entorhinal cortex (MEC) and, indirectly, the hippocampus.

Date

Mar 12, 2023

ePoster

New tools for recording and interpreting brain-wide activity in C. elegans

Jungsoo Kim, Adam Atanas, Ziyu Wang, Eric Bueno, McCoy Becker, Di Kang, Jungyeon Park, Cassi Estrem, Talya Kramer, Saba Baskoylu, Vikash Mansingkha, Steven Flavell

Recent studies have shown that behavioral information is richly distributed across the brain, but how individual neurons across the brain encode behavior is largely unknown, in part because it is challenging to record and interpret high-quality single-neuron traces brain-wide. Here, we describe new hardware, software, and analytical tools to address this. We engineered a microscope to record brain-wide activity and behavior of freely moving C. elegans, wrote new image analysis software, and developed a probabilistic encoder model to delineate how each neuron encodes specific behavioral features. Extracting high-quality neural traces from images is difficult due to the large deformations that occur as animals freely move. In our automatic extraction software, we construct registration graphs based on posture similarity, connecting timepoints that are similar enough to permit volumetric registration. Then, using a custom clustering method, we link the ROIs over time to construct a neural time series for each neuron. The behavioral features are automatically extracted using custom neural networks and other computer vision methods. Overall, the new system gives a ~10x SNR improvement compared to the previous best system. We next constructed a single-neuron encoder model, based on our observations, which demonstrates that C. elegans neurons can encode behavior at multiple timescales (e.g. current vs. recent), encode multiple behaviors (i.e. "mixed selectivity"), and display rectified encodings based on locomotion state. To interpret the fitted parameters, we used probabilistic inference (a resample-move sequential Monte Carlo algorithm validated with simulation-based calibration), which yields a posterior distribution for each parameter instead of a single estimate. Overall, this allows us to determine the precise tuning of how each neuron encodes different behavioral features. These new tools enable high-SNR brain-wide recordings from moving animals and interpretable modeling of how neurons across the brain encode behavior.

Date

Mar 12, 2023

Conference

COSYNE 2023

The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.

Date

Mar 9, 2023

Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.

Date

Sep 27, 2022

Conference

COSYNE 2022

The annual Cosyne meeting provides an inclusive forum for the exchange of empirical and theoretical approaches to problems in systems neuroscience, in order to understand how neural systems function. The main meeting is single-track, with invited talks selected by the Executive Committee and additional talks and posters selected by the Program Committee based on submitted abstracts. The workshops feature in-depth discussion of current topics of interest in a small group setting.

Date

Mar 17, 2022