Neural Circuits
Pedro Gonçalves
The Gonçalves lab is a recently founded research group at the Neuro-Electronics Flanders (NERF), Belgium, co-affiliated with the VIB Center for AI & Computational Biology. We are currently exploring a range of exciting topics at the intersection between computational neuroscience and probabilistic machine learning. In particular, we develop machine learning methods to derive mechanistic insights from neuroscience data and apply them to challenging neuroscience problems: from the retrieval of complex input-output functions of biophysically-detailed single neurons to the full characterisation of mechanisms of compensation for perturbations in neural circuits. We work in an interdisciplinary, collaborative, and supportive work environment, which emphasizes diversity and inclusion. NERF is a joint research initiative by imec, VIB and KU Leuven. We are looking for PhD and postdoc candidates interested in developing machine learning methods and applying them to neuroscience problems. There will be flexibility to customise the project and ample opportunities to collaborate with top experimental and theoretical partners locally and internationally. More details about the positions and the lab can be found at https://jobso.id/hz2b and https://jobso.id/hz2e
University of Chicago - Grossman Center for Quantitative Biology and Human Behavior
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience.
Dr. Tom Franken
A postdoctoral position is available in Dr. Tom Franken’s laboratory in the Department of Neuroscience at the Washington University School of Medicine in St. Louis. The project will study the neural circuits that parse visual scenes into organized collections of objects. We use a variety of techniques including high-density electrophysiology, behavior, optogenetics, and viral targeting in non-human primates. For more information on the lab, please visit sites.wustl.edu/frankenlab/. The PI is committed to mentoring and to nurturing a creative, thoughtful and collaborative lab culture. The laboratory is in an academic setting in the Department of Neuroscience at the Washington University School of Medicine in St. Louis, a large and collaborative scientific community. This provides an ideal environment to train, conduct research, and launch a career in science. Postdoctoral appointees at Washington University receive a competitive salary and a generous benefits package (hr.wustl.edu/benefits/). WashU Neuroscience is consistently ranked as one of the top 10 places worldwide for neuroscience research and offers an outstanding interdisciplinary training environment for early career researchers. In addition to high-quality research facilities, career and professional development training for postdoctoral researchers is provided through the Career Center, Teaching Center, Office of Postdoctoral Affairs, and campus groups. St. Louis is a city rich in culture, green spaces, free museums, world-class restaurants, and thriving music and arts scenes. On top of it all, St. Louis is affordable and commuting to campus is stress-free, whether you go by foot, bike, public transit, or car. The area combines the attractions of a major city with affordable lifestyle opportunities (postdoc.wustl.edu/prospective-postdocs/why-st-louis/). 
Washington University is dedicated to building a diverse community of individuals who are committed to contributing to an inclusive environment – fostering respect for all and welcoming individuals from diverse backgrounds, experiences and perspectives. Individuals with a commitment to these values are encouraged to apply. Additional information on being a postdoc at Washington University in St. Louis can be found at neuroscience.wustl.edu/education/postdoctoral-research/ and postdoc.wustl.edu/prospective-postdocs.
Required Qualifications: Ph.D. (or equivalent doctoral) degree in neuroscience (broadly defined); strong background in electrophysiology, behavioral techniques, or scientific programming/machine learning.
Preferred Qualifications: Experience with training larger animals; experience with electrophysiology; experience with studies of the visual system; ability to think creatively to solve problems; well organized with attention to detail; excellent oral and written communication skills; a team player with a high level of initiative and motivation.
Working Conditions: This position works in a laboratory environment with potential exposure to biological and chemical hazards. The individual must be physically able to wear protective equipment and to provide standard care to research animals.
Salary Range: Base pay is commensurate with experience.
Applicant Special Instructions: Applicants should submit the following materials to Dr. Tom Franken at ftom@wustl.edu: 1) a cover letter explaining how their interest in the position matches their background and career goals; 2) a CV or biosketch; 3) contact information for at least three professional references.
Accommodation: If you are unable to use our online application system and would like an accommodation, please email CandidateQuestions@wustl.edu or call the dedicated accommodation inquiry number at 314-935-1149 and leave a voicemail with the nature of your request.
Pre-Employment Screening: All external candidates receiving an offer of employment will be required to submit to pre-employment screening for this position. The screenings will include a criminal background check and, as applicable for the position, other background checks, a drug screen, employment and education or licensure/certification verification, a physical examination, certain vaccinations, and/or governmental registry checks. All offers are contingent upon successful completion of the required screening.
Benefits Statement: Washington University in St. Louis is committed to providing a comprehensive and competitive benefits package to our employees. Benefits eligibility is subject to employment status, full-time equivalent (FTE) workload, and weekly standard hours. Please visit our website at https://hr.wustl.edu/benefits/ to view a summary of benefits.
EEO/AA Statement: Washington University in St. Louis is committed to the principles and practices of equal employment opportunity and especially encourages applications by those from underrepresented groups. It is the University’s policy to provide equal opportunity and access to persons in all job titles without regard to race, ethnicity, color, national origin, age, religion, sex, sexual orientation, gender identity or expression, disability, protected veteran status, or genetic information.
Dr Marc Aurel Busche & Prof David Sharp
This is a joint postdoctoral position between Prof David Sharp’s laboratory (based at the UK DRI CR&T Centre), focused on the long-term neurodegenerative effects of traumatic brain injury, and Dr Marc Aurel Busche’s laboratory (based at the UK DRI at UCL), which has been at the forefront of developing tools permitting multi-scale and multi-modal monitoring of large-scale neural circuits in models of dementia. The main goal of the project will be to examine the effects of traumatic brain injury on neuronal circuit and neurovascular function in vivo, determine how this may accelerate molecular and cellular processes linked to Alzheimer’s disease (the most common cause of dementia), and establish whether the pathophysiology is reversible. The project will involve recording neuronal activity and vascular dynamics using state-of-the-art two-photon and electrophysiological (Neuropixels) methods, and also linking this to available human datasets (e.g., fMRI). The successful candidate will be self-directed with excellent research skills, and capable of working collaboratively within a team of international multidisciplinary researchers, while displaying independent thinking and initiative. This is an outstanding opportunity to work independently on a high-impact, state-of-the-art collaborative and cross-species project in a stimulating and vibrant research environment. The post is available immediately and is funded by a UK DRI Cross-Centre Postdoctoral award for two years in the first instance. For more information, and to apply, please see: https://bit.ly/3qOulVp
Emre Yaksi
Interested in applying machine learning, applied mathematics, and data science tools to analyse neural connectivity and the sequential activation of neural ensembles associated with sensory computations and learning? We are hiring a PhD student! We offer an excellent and collegial research environment at the Kavli Institute for Systems Neuroscience and the Norwegian University of Science and Technology (NTNU), in addition to the high living standards of Norway, surrounded by spectacular nature. The deadline for applications is the end of May 2022. Please spread the word. Apply using this link: https://www.jobbnorge.no/en/available-jobs/job/224137/phd-candidate
Arcadia Science
A Bit About Us: We are Arcadia Science. Arcadia is a well-funded for-profit biology research and development company founded and led by scientists. Our mission is to give a community of researchers the freedom and tools to be adventurous, to discover, and to make scientific exploration financially self-sustaining in the life sciences. We are inspired by the spirit of exploration and aspire to evolve how science is done, who it attracts and rewards, and what it can achieve. Research @ Arcadia: At Arcadia, we are building an intramural research and development program that will encompass three areas: (1) emerging organismal biology, (2) enabling research technologies, and (3) translational development. Research areas will be carried out by independent scientists and those working together towards shared goals. Projects will be collaborative in nature and pursue science that is more high-risk and exploratory than typical life science research programs. We will invest heavily in creative technologies that can invent new research tools or optimize workflows for emerging organismal systems. In addition to conducting research, Arcadia scientists will drive engagement with the broader scientific community in order to maximize the impact of our work and identify research areas and needs that Arcadia may be uniquely positioned to address.
Dr. Scott Rich
The Neuron to Brain Lab is recruiting a Master’s student to contribute to our computational investigation of the role of heterogeneity in seizure resilience. This project will be directly mentored by Dr. Scott Rich, a senior postdoc under the supervision of Dr. Taufik Valiante and leader of the lab’s Computational Pillar. The project will focus on constructing a cortical neural network containing multiple populations of inhibitory interneurons, and using this network to assess how heterogeneity amongst inhibitory cells might uniquely contribute to seizure resilience. This project will utilize the lab’s unique access to electrophysiological data from live human cortical tissue to constrain neuron models, as well as a wealth of collaborations between the lab and other computational neuroscientists at the Krembil Brain Institute and the Krembil Centre for Neuroinformatics.
Hayder Amin
This position is focused on developing a real-time bidirectional brain-machine interfacing framework, enabling active decoding and communication between a CMOS chip and a rodent cortico-hippocampal circuit. The successful applicant will develop and implement biomimetic electronics to mimic and integrate the spatiotemporal information transmission within a large-scale hippocampal circuitry empowered by the enhanced computational function of the newly generated neurons. The outcome will profoundly impact science and society: it would offer a better tool for understanding information coding in neural regenerative circuitry and could potentially provide novel restorative treatments for neurodegenerative diseases and brain injuries. Apply here: https://jobs.dzne.de/de/jobs/60681/postdoctoral-researcher-fmd-in-biomimetic-hippocampal-prosthesis-802920211
Dr Shuzo Sakata
A full-time laboratory technician position is available to work with Dr Shuzo Sakata at the University of Strathclyde in Glasgow, UK. This position is funded by the Medical Research Council (MRC). Our group has been investigating state-dependent and cell type-specific information processing in the brain by combining a range of techniques, including in vivo high-density electrophysiological recording, calcium imaging, optogenetics, behavioural analysis and computational approaches. In this project, we will investigate how functional interactions between neurons and astrocytes regulate the sleep-wake cycle in mice by utilising state-of-the-art genetic and neurophotonic technologies. The project will also involve close collaboration within a recently established international consortium, DEEPER, funded by the EU’s Horizon 2020 programme (https://www.deeperproject.eu/). The post-holder will be expected to assist with a wide range of laboratory experiments as part of a team. In the first instance, candidates may send their application to Dr Shuzo Sakata (shuzo.sakata@strath.ac.uk), including a CV and a cover letter detailing their educational background, lab experience, motivation for this position, and career goals.
Mario Dipoppa
The selected candidates will be working on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, which include but are not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code.
Christian Leibold
The lab of Christian Leibold invites applications of postdoc candidates on topics related to the geometry of neural manifolds. We will use spiking neural network simulations, analysis of massively parallel recordings, as well as techniques from differential geometry to understand the dynamics of the neural population code in the hippocampal formation in relation to complex cognitive behaviors. Our research group combines modelling of neural circuits with the development of machine learning techniques for data analysis. We strive for a diverse, interdisciplinary, and collaborative work environment.
I-Chun Lin
The Gatsby Unit seeks to appoint a new principal investigator with an outstanding record of research achievement and an innovative research programme in theoretical neuroscience or machine learning at any academic rank. In theoretical neuroscience, we are particularly interested in candidates who focus on the mathematical underpinnings of adaptive intelligent behaviour in animals, or develop mathematical tools and models to understand how neural circuits and systems function. In machine learning, we seek candidates who focus on the mathematical foundations of learning from data and experience, addressing fundamental questions in probabilistic or statistical machine learning; areas of particular interest include generative or probabilistic modelling, causal discovery, reinforcement learning, theory of deep learning, and links between these areas and neuroscience or cognitive science.
Geoffrey J Goodhill
An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting huge datasets using a variety of sophisticated computational approaches, and using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.
Netta Cohen
Research Fellow position: This project explores the individuality of neural circuits and neural activity in the C. elegans brain, based on whole-brain activity data and information about the C. elegans connectome (neural circuit wiring data). The project combines data-driven approaches from AI on the one hand and whole-brain computational modelling on the other. PhD opening: How do worms move in 3D? To address this question, we have built a 3D imaging system and have collected hours of footage. Prior work has focused on developing machine vision methods to reconstruct postures and trajectories; characterising postures and locomotion behaviours; and characterising and modelling locomotion strategies and foraging behaviours. This PhD can build on these foundations to perform exciting, innovative experiments and/or to build computational models of worm locomotion.
Carsten Mehring
The interdisciplinary MSc program in Neuroscience at the University of Freiburg, Germany, provides theoretical and practical training in neuroscience, covering both the foundations and latest research in the field. It is taught by lecturers from an international scientific community from multiple faculties and neuroscience research centres. The modular course structure caters to the specific backgrounds and research interests of each individual student with specialisations in neural circuits and behavior, computational neuroscience and neurotechnology. All courses are taught in English.
I-Chun Lin
The Gatsby Computational Neuroscience Unit (GCNU) and Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC) are launching a 4-year joint PhD programme that aims to bridge the gap between theory and experiments. The two centres provide a unique opportunity for a critical mass of theoreticians and experimentalists to interact closely with one another and with researchers at the Centre for Computational Statistics and Machine Learning (CSML) and related UCL departments/domains such as UCL Neuroscience Domain; Artificial Intelligence; the ELLIS Unit at UCL; Computer Science; Statistical Science; and the nearby Francis Crick and Alan Turing Institutes. The joint programme will provide a rigorous preparation for an interdisciplinary research career. The programme blends aspects of the two existing PhD programmes and is designed to immerse students in both experimental and theoretical thinking. Courses in the first year provide a comprehensive introduction to systems neuroscience, theoretical/computational neuroscience, and machine learning. Students will be part of the broader trainee cohort across both centres, interacting and engaging in scientific discussion with both SWC and GCNU researchers with equal emphasis.
Hidetoshi Urakubo
We invite applications from enthusiastic postdoctoral researchers in the area of computational neuroscience or systems biology. A new collaborative project with Kyushu University has been launched to elucidate the biochemical signaling involved in the development of the olfactory system. We are working on a project to simulate how neural circuits in the brain acquire function through development. As an example, we are focusing on the process of mitral cell dendritic pruning that leads to the acquisition of odor selectivity (Fujimoto 2023, Dev Cell 58, 1221–1236). This process is governed by the coupling of the biochemical signaling of small G proteins with neuronal electrical activity. In addition, neural circuit simulations will be performed to elucidate the emergent process of odor information processing. The NEURON simulator or other platform simulators will be useful for this project.
Katharina Wilmes
We are looking for highly motivated postdocs or PhD students interested in computational neuroscience, specifically addressing questions concerning the neural circuits underlying perception and learning. The ideal candidate has a strong background in math, physics, or computer science (or equivalent), programming skills (Python), and a strong interest in biological and neural systems. A background in computational neuroscience is ideal, but not mandatory. Our brain maintains an internal model of the world, based on which it can make predictions about sensory information. These predictions are useful for perception and learning in the uncertain and changing environments in which we evolved. The link between high-level normative theories and cellular-level observations of prediction errors and representations under uncertainty is still missing. The lab uses computational and mathematical tools to model cortical circuits and neural networks on different scales.
Low intensity rTMS: age dependent effects, and mechanisms underlying neural plasticity
Neuroplasticity is essential for the establishment and strengthening of neural circuits. Repetitive transcranial magnetic stimulation (rTMS) is commonly used to modulate cortical excitability and shows promise in the treatment of some neurological disorders. Low-intensity repetitive transcranial magnetic stimulation (LI-rTMS), which does not directly elicit action potentials in the stimulated neurons, has also shown some therapeutic effects, and it is important to determine the biological mechanisms underlying the effects of these low-intensity magnetic fields, such as would occur in the regions surrounding the central high-intensity focus of rTMS. Our team has used a focal low-intensity (10 mT) magnetic stimulation approach to address some of these questions and to identify cellular mechanisms. I will present several studies from our laboratory addressing (1) the effects of LI-rTMS on neuronal activity and excitability; and (2) neuronal morphology and post-lesion repair. Together, our results indicate that the effects of LI-rTMS depend on the stimulation pattern, the age of the animal, and the presence of cellular magnetoreceptors.
Neural circuits underlying sleep structure and functions
Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the underlying mechanisms of the relationship between (REM) sleep and emotion encoding to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing whether wiring motifs from fly brain connectomes can improve the performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
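For readers unfamiliar with the model class named in this abstract: a reservoir computer (here, a minimal echo state network) keeps its recurrent weights fixed and random — exactly the component that connectome-derived wiring motifs could replace — and trains only a linear readout. The sketch below is illustrative only; the network size, spectral radius, and sine-prediction task are assumptions for the example, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: n_res recurrent units, 1 input channel.
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout (ridge regression) to predict u[t+1].
u = np.sin(0.2 * np.arange(400))
X, y = run_reservoir(u[:-1]), u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
mse = np.mean((pred[200:] - y[200:]) ** 2)  # error after the transient
print(f"test MSE: {mse:.2e}")
```

In this setup, swapping the random matrix `W` for one derived from connectome wiring motifs — while still training only `W_out` — is one way the question posed in the talk could be probed.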
Mouse Motor Cortex Circuits and Roles in Oromanual Behavior
I’m interested in structure-function relationships in neural circuits and behavior, with a focus on motor and somatosensory areas of the mouse’s cortex involved in controlling forelimb movements. In one line of investigation, we take a bottom-up, cellularly oriented approach and use optogenetics, electrophysiology, and related slice-based methods to dissect cell-type-specific circuits of corticospinal and other neurons in forelimb motor cortex. In another, we take a top-down ethologically oriented approach and analyze the kinematics and cortical correlates of “oromanual” dexterity as mice handle food. I'll discuss recent progress on both fronts.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
Blood-brain barrier dysfunction in epilepsy: Time for translation
The neurovascular unit (NVU) consists of cerebral blood vessels, neurons, astrocytes, microglia, and pericytes. It plays a vital role in regulating blood flow and ensuring the proper functioning of neural circuits. Among other mechanisms, this is made possible by the blood-brain barrier (BBB), which acts as both a physical and functional barrier. Previous studies have shown that dysfunction of the BBB is common in most neurological disorders and is associated with neural dysfunction. Our studies have demonstrated that BBB dysfunction results in the transformation of astrocytes through transforming growth factor beta (TGFβ) signaling. This leads to activation of the innate neuroinflammatory system, changes in the extracellular matrix, and pathological plasticity. These changes ultimately result in dysfunction of the cortical circuit, a lower seizure threshold, and spontaneous seizures. Blocking TGFβ signaling and its associated pro-inflammatory pathway can prevent this cascade of events, reduce neuroinflammation, repair BBB dysfunction, and prevent post-injury epilepsy, as shown in experimental rodents. To further understand and assess BBB integrity in human epilepsy, we developed a novel imaging technique that quantitatively measures BBB permeability. Our findings have confirmed that BBB dysfunction is common in patients with drug-resistant epilepsy and can assist in identifying the ictal-onset zone prior to surgery. Current clinical studies are ongoing to explore the potential of targeting BBB dysfunction as a novel treatment approach and to investigate its role in drug resistance, the spread of seizures, and comorbidities associated with epilepsy.
Neural Circuits that connect Body and Mind
From primate anatomy to human neuroimaging: insights into the circuits underlying psychiatric disease and neuromodulation; Large-scale imaging of neural circuits: towards a microscopic human connectome
On Thursday, October 26th, we will host Anastasia Yendiki and Suzanne Haber. Anastasia Yendiki, PhD, is an Associate Professor in Radiology at Harvard Medical School and an Associate Investigator at the Massachusetts General Hospital and Athinoula A. Martinos Center. Suzanne Haber, PhD, is a Professor at the University of Rochester and runs a lab at McLean Hospital at Harvard Medical School in Boston. She has received numerous awards for her work on neuroanatomy. Besides her scientific presentation, she will give us a glimpse of the “Person behind the science”. The talks will be followed by a shared discussion. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!
A neuroendocrine circuit that regulates sugar feeding in mated Drosophila melanogaster females
Generating parallel representations of position and identity in the olfactory system
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
Human and Zebrafish retinal circuits: similarities in day and night
Neural circuits for vision in the natural world
The neural circuits underlying planning and movement
The smart image compression algorithm in the retina: a theoretical study of recoding inputs in neural circuits
Computation in neural circuits relies on a common set of motifs, including divergence of common inputs to parallel pathways, convergence of multiple inputs to a single neuron, and nonlinearities that select some signals over others. Convergence and circuit nonlinearities, considered individually, can lead to a loss of information about the inputs. Past work has detailed how to optimize nonlinearities and circuit weights to maximize information, but we show that selective nonlinearities, acting together with divergent and convergent circuit structure, can improve information transmission over a purely linear circuit despite the suboptimality of these components individually. These nonlinearities recode the inputs in a manner that preserves the variance among converged inputs. Our results suggest that neural circuits may be doing better than expected without finely tuned weights.
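One way to make the recoding intuition concrete is a toy numpy sketch (our illustration, not the talk's model): converging two inputs linearly discards their difference, whereas diverging into two rectifying pathways first preserves that difference for a downstream linear readout.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(size=1000)

# Purely linear convergence: the sum discards the difference a - b.
s = a + b

# Divergence into selective (rectifying) pathways before convergence:
# relu(a - b) and relu(b - a) jointly preserve the difference.
relu = lambda z: np.maximum(z, 0.0)
p1, p2 = relu(a - b), relu(b - a)

# A downstream linear readout can now recover both inputs exactly,
# since relu(z) - relu(-z) == z for any z.
d = p1 - p2                      # equals a - b
a_hat = (s + d) / 2
b_hat = (s - d) / 2
print(np.allclose(a_hat, a), np.allclose(b_hat, b))
```

The individual components are suboptimal (each rectifier alone destroys half its input), yet the divergent-then-convergent arrangement transmits more than the purely linear sum, which is the structural point of the abstract.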
Self-perception: mechanosensation and beyond
Brain-organ communications play a crucial role in maintaining the body's physiological and psychological homeostasis, and are controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are deeply buried inside the body and are not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned in conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors - cell biology of neurons - functionality of neural circuits - animal behaviors" to explore the molecular and neural mechanisms of self-perception. In the lab, we mainly focus on three questions: (1) the molecular and cellular basis of proprioception and interoception; (2) the circuit mechanisms of sensory coding and integration of internal and external information; (3) the function of interoception in regulating behavioral homeostasis.
The strongly recurrent regime of cortical networks
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
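The link between linearized dynamics and covariance statistics can be checked numerically (a toy sketch using the standard Lyapunov equation for dx/dt = -(I - W)x + noise; the parameters and scipy-based implementation are ours, not the authors' field-theoretic calculation):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 200

def cov_width(g):
    """Dispersion of pairwise covariances in a linearized network
    dx/dt = -(I - W) x + noise, with random W whose bulk eigenvalue
    radius is approximately g (instability at g = 1)."""
    W = g * rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    A = W - np.eye(n)
    # Stationary covariance C solves A C + C A^T = -Q with unit noise Q = I.
    C = solve_continuous_lyapunov(A, -np.eye(n))
    off = C[~np.eye(n, dtype=bool)]          # off-diagonal (pairwise) entries
    return off.std()

w_low, w_high = cov_width(0.3), cov_width(0.9)
print(w_low, w_high)   # covariances disperse far more near instability
```

This reproduces the qualitative claim of the abstract: as the radius of the bulk connectivity eigenvalues approaches the point of linear instability, the statistics of pairwise covariances broaden, which is what makes them informative about the underlying connectivity.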
Neuron-glial interactions in health and disease: from cognition to cancer
In the central nervous system, neuronal activity is a critical regulator of development and plasticity. Activity-dependent proliferation of healthy glial progenitors, oligodendrocyte precursor cells (OPCs), and the consequent generation of new oligodendrocytes contributes to adaptive myelination. This plasticity of myelin tunes neural circuit function and contributes to healthy cognition. The robust mitogenic effect of neuronal activity on normal oligodendroglial precursor cells, a putative cellular origin for many forms of glioma, suggests that dysregulated or “hijacked” mechanisms of myelin plasticity might similarly promote malignant cell proliferation in this devastating group of brain cancers. Indeed, neuronal activity promotes progression of both high-grade and low-grade glioma subtypes in preclinical models. Crucial mechanisms mediating activity-regulated glioma growth include paracrine secretion of BDNF and the synaptic protein neuroligin-3 (NLGN3). NLGN3 induces multiple oncogenic signaling pathways in the cancer cell, and also promotes glutamatergic synapse formation between neurons and glioma cells. Glioma cells integrate into neural circuits synaptically through neuron-to-glioma synapses, and electrically through potassium-evoked currents that are amplified through gap-junctional coupling between tumor cells. This synaptic and electrical integration of glioma into neural circuits is central to tumor progression in preclinical models. Thus, neuron-glial interactions not only modulate neural circuit structure and function in the healthy brain, but paracrine and synaptic neuron-glioma interactions also play important roles in the pathogenesis of glial cancers. The mechanistic parallels between normal and malignant neuron-glial interactions underscore the extent to which mechanisms of neurodevelopment and plasticity are subverted by malignant gliomas, and the importance of understanding the neuroscience of cancer.
Neural circuits for body movements
How do Astrocytes Sculpt Synaptic Circuits?
From symptoms to circuits in Fragile X syndrome
Neural circuits for vector processing in the insect brain
Several species of insects have been observed to perform accurate path integration, constantly updating a vector memory of their location relative to a starting position, which they can use to take a direct return path. Foraging insects such as bees and ants are also able to store and recall the vectors to return to food locations, and to take novel shortcuts between these locations. Other insects, such as dung beetles, are observed to integrate multimodal directional cues in a manner well described by vector addition. All these processes appear to be functions of the Central Complex, a highly conserved and strongly structured circuit in the insect brain. Modelling this circuit, at the single neuron level, suggests it has general capabilities for vector encoding, vector memory, vector addition and vector rotation that can support a wide range of directed and navigational behaviours.
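The vector bookkeeping described above can be sketched in a few lines (illustrative numpy code with made-up leg headings and speeds; not a model of Central Complex neurons):

```python
import numpy as np

def integrate_path(headings, speeds):
    """Path integration: accumulate a home vector from samples of the
    animal's heading (radians) and speed along an outbound trip."""
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return (steps * speeds[:, None]).sum(axis=0)

# Outbound foraging trip made of three legs: east, north, then west.
headings = np.array([0.0, np.pi / 2, np.pi])
speeds = np.array([3.0, 4.0, 1.0])
home_vector = integrate_path(headings, speeds)   # -> [2., 4.]

# The direct return heading is the opposite of the accumulated vector;
# stored food vectors could be combined by the same vector addition.
return_heading = np.arctan2(-home_vector[1], -home_vector[0])
print(home_vector, return_heading)
```

The same additive machinery covers the behaviors the abstract lists: subtracting a stored food vector from the current home vector yields a novel shortcut, and multimodal cue combination is again a vector sum.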
Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity
A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
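A caricature of plateau-gated one-shot plasticity (our simplified rate-model sketch; the gating variable and weight update are illustrative assumptions, and real BTSP involves asymmetric, seconds-long eligibility kernels not modeled here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 50, 10
W = rng.normal(0.0, 0.01, size=(n_out, n_in))   # weak initial weights

def btsp_update(W, x, plateau, eta=1.0):
    """One-shot, plateau-gated weight change: only output neurons that
    receive a top-down dendritic plateau (plateau[j] = 1) update, and
    their weights jump toward the current input pattern rather than
    accumulating small Hebbian increments."""
    gate = plateau[:, None]
    return W + eta * gate * (x[None, :] - W)

x = (rng.random(n_in) < 0.2).astype(float)      # a single stimulus
plateau = np.zeros(n_out)
plateau[3] = 1.0                                # feedback selects neuron 3

W2 = btsp_update(W, x, plateau)
resp = W2 @ x
print(resp.argmax())    # the gated neuron is now the most responsive
```

The point of the sketch is the credit-assignment structure: which neurons learn, and when, is decided entirely by the top-down plateau signal, not by local pre/post correlations, so a feedback pathway can coordinate plasticity across layers after a single presentation.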
Hypothalamic episode generators underlying the neural control of fertility
The hypothalamus controls diverse homeostatic functions including fertility. Neural episode generators are required to drive the intermittent pulsatile and surge profiles of reproductive hormone secretion that control gonadal function. Studies in genetic mouse models have been fundamental in defining the neural circuits forming these central pattern generators and the full range of in vitro and in vivo optogenetic and chemogenetic methodologies have enabled investigation into their mechanism of action. The seminar will outline studies defining the hypothalamic “GnRH pulse generator network” and current understanding of its operation to drive pulsatile hormone secretion.
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: rather, it must be computed by subsequent neural circuits. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Our results obtained in the fruit fly Drosophila demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as tools but also interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than by fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.
Zero to Birth: How the Human Brain is Built
By the time a baby is born, its brain is equipped with tens of billions of intricately crafted neurons wired together to form a compact and breathtakingly efficient supercomputer. The book is meant to give a broad audience (i.e. non-neuroscientists) a sense of the step-by-step construction of a human brain as well as our current conceptual understanding of the various processes involved. The book also hopes to highlight the relevance of brain development to our growing understanding of cognitive and psychological variations and syndromes. The author will talk about the book, including the many challenges and rewards involved in writing it.
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets.
Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
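The point-separability baseline behind these capacity measures can be verified directly with Cover's counting function (a standard textbook result, included here as an illustration; manifold capacity theory generalizes this from points to object manifolds):

```python
from math import comb

def frac_separable(P, N):
    """Cover's counting function: the probability that a random dichotomy
    of P points in general position in N dimensions is linearly separable,
    i.e. realizable by a perceptron."""
    return 2 * sum(comb(P - 1, k) for k in range(N)) / 2 ** P

N = 20
for P in (10, 20, 40, 80):
    print(P, frac_separable(P, N))
# The separable fraction is 1 for P <= N, passes through exactly 1/2 at
# P = 2N (the classical perceptron capacity alpha = P/N = 2), and
# vanishes beyond it.
```

Manifold-capacity analyses replace the points in this formula with structured response manifolds, so the measured capacity reflects the geometry (radius, dimension, correlations) of the population code rather than just the number of stimuli.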
The role of astroglia-neuron interactions in generation and spread of seizures
Astroglia-neuron interactions are involved in multiple processes, regulating the development, excitability and connectivity of neural circuits. Accumulating evidence highlights a direct connection between aberrant astroglial genetics and physiology in various forms of epilepsy. Using zebrafish seizure models, we showed that neurons and astroglia follow different spatiotemporal dynamics during transitions from pre-ictal to ictal activity. We observed that during the pre-ictal period neurons exhibit local synchrony and low levels of activity, whereas astroglia exhibit global synchrony and high levels of calcium signaling that are anticorrelated with neural activity. In contrast, generalized seizures are marked by a massive release of astroglial glutamate as well as a drastic increase in astroglial and neuronal activity and synchrony across the entire brain. Knocking out astroglial glutamate transporters leads to recurrent spontaneous generalized seizures accompanied by massive astroglial glutamate release. We are currently using a combination of genetic and pharmacological approaches to perturb astroglial glutamate signalling and astroglial gap junctions to further investigate their role in the generation and spread of epileptic seizures across the brain.
Imperial Neurotechnology 2022 - Annual Research Symposium
A diverse mix of neurotechnology talks and posters from researchers at Imperial and beyond. Visit our event page to find out more. The event is in-person but talk sessions will be broadcast via Teams.
How neural circuits organize and learn during development
To generate brain circuits that are both flexible and stable requires the coordination of powerful developmental mechanisms acting at different scales, including activity-dependent synaptic plasticity and changes in single neuron properties. During early development, before any sensory experience, the brain prepares to compute information efficiently and to generate behavior reliably through patterned spontaneous activity. After the onset of sensory experience, ongoing activity continues to modify sensory circuits, and plays an important functional role in the mature brain. Using quantitative data analysis, experiment-driven theory and computational modeling, I will present a framework for how neural circuits are built and organized during early postnatal development into functional units, and how they are modified by intact and perturbed sensory-evoked activity. Inspired by experimental data from sensory cortex, I will then show how neural circuits use the resulting non-random connectivity to flexibly gate a network’s response, providing a mechanism for routing information.
Using eye tracking to investigate neural circuits in health and disease
What the fly’s eye tells the fly’s brain…and beyond
Fly Escape Behaviors: Flexible and Modular
We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.

Neural Circuits for Escape
Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, which the fly may decide if and when to employ. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.

Automated Single-Fly Screening
We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.

Measuring Neural Activity During Behavior
We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.
Trading Off Performance and Energy in Spiking Networks
Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this tradeoff in auto-encoding spiking neural networks (SNNs), in which there is a clear definition of performance (the coding loss). We first show how SNNs follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different tradeoff points. We next formalize this tradeoff with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks that coordinates its spiking to minimize both the activity and coding losses, and as a result can dynamically adjust its coding precision and energy use. The network utilizes several known activity control mechanisms for this, namely threshold adaptation and feedback inhibition, and elucidates their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off which is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
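The joint loss and a spike rule that greedily minimizes it can be sketched as follows (an illustrative reconstruction under our own assumptions of unit-norm decoding vectors and discrete greedy spiking, not the talk's derived network):

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_neurons = 3, 40
D = rng.normal(size=(n_x, n_neurons))
D /= np.linalg.norm(D, axis=0)           # unit-norm decoding vectors

def encode(x, D, mu, n_steps=200):
    """Greedy spiking: a neuron fires only if its spike lowers the joint
    loss  L = ||x - D r||^2 + mu * sum(r)  (coding loss + activity loss).
    A spike by neuron i changes L by  -2 D_i . err + ||D_i||^2 + mu."""
    r = np.zeros(D.shape[1])
    for _ in range(n_steps):
        err = x - D @ r
        gain = D.T @ err - (np.sum(D**2, axis=0) + mu) / 2.0
        i = np.argmax(gain)
        if gain[i] <= 0:                 # no spike lowers the loss: stop
            break
        r[i] += 1.0
    return r

x = np.array([2.0, -1.0, 0.5])
results = {}
for mu in (0.01, 1.0):                   # raising mu trades precision for spikes
    r = encode(x, D, mu)
    results[mu] = (r.sum(), np.linalg.norm(x - D @ r))
    print(mu, results[mu])
```

Raising the activity cost mu raises every neuron's effective threshold, so the same network moves along the precision-energy trade-off curve without retraining, which is the behavior the abstract attributes to threshold adaptation.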
Neural Circuit Mechanisms of Pattern Separation in the Dentate Gyrus
The ability to discriminate different sensory patterns by disentangling their neural representations is an important property of neural networks. While a variety of learning rules are known to be highly effective at fine-tuning synapses to achieve this, less is known about how different cell types in the brain can facilitate this process by providing architectural priors that bias the network towards sparse, selective, and discriminable representations. We studied this by simulating a neuronal network modelled on the dentate gyrus—an area characterised by sparse activity associated with pattern separation in spatial memory tasks. To test the contribution of different cell types to these functions, we presented the model with a wide dynamic range of input patterns and systematically added or removed different circuit elements. We found that recruiting feedback inhibition indirectly via recurrent excitatory neurons proved particularly helpful in disentangling patterns, and show that simple alignment principles for excitatory and inhibitory connections are a highly effective strategy.
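A hedged toy version of the sparsification effect, with winner-take-all standing in for recurrent feedback inhibition (our illustration of pattern separation, not the dentate gyrus model itself):

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_gc, k = 100, 500, 10      # inputs, granule cells, winners kept

W = rng.normal(size=(n_gc, n_in))  # random divergent projection

def sparsify(x, W, k):
    """Feedback-inhibition caricature: only the k most strongly driven
    granule cells stay active, yielding a sparse binary code."""
    h = W @ x
    out = np.zeros(W.shape[0])
    out[np.argsort(h)[-k:]] = 1.0
    return out

def overlap(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Two highly overlapping input patterns (they differ on 20 of 100 units).
base = (rng.random(n_in) < 0.5).astype(float)
x1, x2 = base.copy(), base.copy()
flip = rng.choice(n_in, size=20, replace=False)
x2[flip] = 1.0 - x2[flip]

y1, y2 = sparsify(x1, W, k), sparsify(x2, W, k)
print(overlap(x1, x2), overlap(y1, y2))   # output overlap is much lower
```

Divergent expansion plus competitive inhibition pulls similar inputs onto largely disjoint sets of winners, so the representations are discriminable before any synaptic fine-tuning, which is the "architectural prior" role the abstract assigns to these circuit elements.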
Unchanging and changing: hardwired taste circuits and their top-down control
The taste system detects 5 major categories of ethologically relevant stimuli (sweet, bitter, umami, sour and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains a remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry to decipher how taste signals are encoded to drive innate responses and modulated to facilitate adaptive responses. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled lines?). Leveraging these genetic handles, I demonstrated a one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one ‘line’ did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But are these labeled lines aptly adapted to the internal state and external environment? I studied the modulation of taste signals by conflicting taste qualities, using concurrent sweet and bitter stimuli, to understand how adaptive taste responses emerge from hardwired taste circuits. Using functional imaging, anatomical tracing and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation of NST taste signals by the taste cortex and amygdala.
While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback via amygdala onto the incoming sweet signal in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses and may have implications for the context-dependent modulation of hardwired circuits in general.
Apathy and impulsivity in neurological disease – cause, effect and treatment
Modularity and Robustness of Frontal Cortical Networks
Nuo Li (Baylor College of Medicine, USA) shares novel insights into coordinated interhemispheric large-scale neural network activity underpinning short-term memory in mice. Relevant techniques covered include: simultaneous multi-regional recordings using multiple 64-channel H probes during head-fixed behavior in mice; simultaneous optogenetics and population recording; and analysis of population recordings to infer interactions between brain regions. Reference: Chen G, Kang B, Lindsey J, Druckmann S, Li N (2021). Modularity and robustness of frontal cortex networks. Cell, 184(14):3717-3730.
Neural Representations of Social Homeostasis
How does our brain rapidly determine if something is good or bad? How do we know our place within a social group? How do we know how to behave appropriately in dynamic environments with ever-changing conditions? The Tye Lab is interested in understanding how neural circuits important for driving positive and negative motivational valence (seeking pleasure or avoiding punishment) are anatomically, genetically and functionally arranged. We study the neural mechanisms that underlie a wide range of behaviors ranging from learned to innate, including social, feeding, reward-seeking and anxiety-related behaviors. We have also become interested in “social homeostasis” -- how our brains establish a preferred set-point for social contact, and how this maintains stability within a social group. How are these circuits interconnected with one another, and how are competing mechanisms orchestrated on a neural population level? We employ optogenetic, electrophysiological, electrochemical, pharmacological and imaging approaches to probe these circuits during behavior.
Neural circuits of visuospatial working memory
One elementary brain function that underlies many of our cognitive behaviors is the ability to maintain parametric information briefly in mind, in the time scale of seconds, to span delays between sensory information and actions. This component of working memory is fragile and quickly degrades with delay length. Under the assumption that behavioral delay-dependencies mark core functions of the working memory system, our goal is to find a neural circuit model that represents their neural mechanisms and apply it to research on working memory deficits in neuropsychiatric disorders. We have constrained computational models of spatial working memory with delay-dependent behavioral effects and with neural recordings in the prefrontal cortex during visuospatial working memory. I will show that a simple bump attractor model with weak inhomogeneities and short-term plasticity mechanisms can link neural data with fine-grained behavioral output on a trial-by-trial basis and account for the main delay-dependent limitations of working memory: precision, cardinal repulsion biases and serial dependence. I will finally present data from participants with neuropsychiatric disorders that suggest that serial dependence in working memory is specifically altered, and I will use the model to infer the possible neural mechanisms affected.
Extrinsic control and autonomous computation in the hippocampal CA1 circuit
In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs: entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to prove this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on the firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflicts is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I used optogenetic silencing of the unilateral and bilateral medial entorhinal cortex (mEC) and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high spatial resolution recording of local field potentials (LFP) in the CA1-dentate axis and simultaneously collected firing pattern data from thousands of single neurons. Each experimental animal had up to two of these manipulations performed simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map.
Yet, the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields, and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and assist in maintaining stability of the CA1 map.
Cortex-dependent corrections as the mouse tongue reaches for and misses targets
Brendan Ito (Cornell University, USA) and Teja Bollu (Salk Institute, USA) share unique insights into rapid online motor corrections during mouse licking, analogous to goal-oriented reaching in primates. Techniques covered include large-scale single-unit recording during behaviour combined with optogenetics, and a deep-learning-based neural network to resolve 3D tongue kinematics during licking.
Sensing in Insect Wings
Ali Weber (University of Washington, USA) uses the hawkmoth as a model system to investigate how information from a small number of mechanoreceptors on the wings is used in flight control. She employs a combination of experimental and computational techniques to study how these sensors respond during flight and how one might optimally array a set of these sensors to best provide feedback during flight.
Network science and network medicine: New strategies for understanding and treating the biological basis of mental ill-health
The last twenty years have witnessed extraordinarily rapid progress in basic neuroscience, including breakthrough technologies such as optogenetics, and the collection of unprecedented amounts of neuroimaging, genetic, and other data relevant to neuroscience and mental health. However, the translation of this progress into improved understanding of brain function and dysfunction has been comparatively slow. As a result, the development of therapeutics for mental health has stagnated too. One central challenge has been to extract meaning from these large, complex, multivariate datasets, which requires a shift towards systems-level mathematical and computational approaches. A second challenge has been reconciling different scales of investigation, from genes and molecules to cells, circuits, tissue, whole brain, and ultimately behaviour. In this talk I will describe several strands of work using mathematical, statistical, and bioinformatic methods to bridge these gaps. Topics will include: using artificial neural networks to link the organization of large-scale brain connectivity to cognitive function; using multivariate statistical methods to link disease-related changes in brain networks to the underlying biological processes; and using network-based approaches to move from genetic insights towards drug discovery. Finally, I will discuss how simple organisms such as C. elegans can serve to inspire, test, and validate new methods and insights in network neuroscience.
Experience-Dependent Transcription: From Genomic Mechanisms to Neural Circuit Function
Experience-dependent transcription is a key molecular mechanism for regulating the development and plasticity of synapses and neural circuits and is thought to underlie cognitive functions such as perception, learning, and memory. After two years of the COVID pandemic, the goal of this online conference is to allow investigators in the field to reconnect and discuss their recent scientific findings.
Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics
Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on the slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, to static synapses, and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike-time jitter.
Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
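For readers unfamiliar with the baseline this abstract generalizes, here is a minimal sketch of a static-synapse tempotron: a current-based leaky integrate-and-fire readout that classifies spike patterns by whether its peak voltage crosses threshold, with the weight update applied at the time of the voltage maximum. The toy task, time constants, and learning rate below are illustrative assumptions, not taken from this work:

```python
import numpy as np

def kernel(t, tau_m=15.0, tau_s=3.75):
    """Double-exponential PSP kernel of a current-based LIF neuron; zero for t < 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, np.exp(-t / tau_m) - np.exp(-t / tau_s), 0.0)

def peak_voltage(w, spikes, t_grid):
    """Peak membrane potential and its time for one spike pattern.
    spikes: one list of spike times per afferent."""
    v = np.zeros_like(t_grid)
    for wi, afferent in zip(w, spikes):
        for ts in afferent:
            v += wi * kernel(t_grid - ts)
    i = v.argmax()
    return v[i], t_grid[i]

def train_tempotron(patterns, labels, n_in, lr=0.05, theta=1.0, epochs=200):
    w = np.full(n_in, 0.01)  # small positive initial weights
    t_grid = np.linspace(0.0, 100.0, 201)
    for _ in range(epochs):
        for spikes, target in zip(patterns, labels):
            v, t_max = peak_voltage(w, spikes, t_grid)
            if (v >= theta) != target:
                # shift each synapse by its PSP contribution at the voltage peak
                grad = np.array([sum(kernel(t_max - ts) for ts in aff)
                                 for aff in spikes])
                w += lr * grad if target else -lr * grad
    return w, t_grid

# toy task: "+" patterns drive afferents 0-1, "-" patterns drive afferents 2-3
patterns = [[[10.0], [20.0], [], []], [[15.0], [25.0], [], []],
            [[], [], [10.0], [20.0]], [[], [], [15.0], [25.0]]]
labels = [True, True, False, False]
w, t_grid = train_tempotron(patterns, labels, n_in=4)
preds = [peak_voltage(w, p, t_grid)[0] >= 1.0 for p in patterns]
```

In this static baseline each synapse is one scalar; the abstract's point is that letting each synapse additionally learn its own short-term dynamics expands the decision space with the number of input spikes rather than the number of afferents.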
How does the brain analyse sensory information and learn from it?
Introducing exciting methods that enable neuroscientists to look deep into the living brain, allowing us to study how the brain's neural networks learn and process sensory information.
A biological model system for studying predictive processing
Despite the increasing recognition of predictive processing in circuit neuroscience, little is known about how it may be implemented in cortical circuits. We set out to develop and characterise a biological model system with layer 5 pyramidal cells at the centre. We aim to gain access to prediction and internal-model-generating processes by controlling, understanding, or monitoring everything else: the sensory environment, feed-forward and feed-back inputs, integrative properties, their spiking activity, and output. I'll show recent work from the lab establishing such a model system, in terms of both the biology and tool development.
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
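A minimal sketch of the kind of Lyapunov analysis this abstract refers to: estimating the largest Lyapunov exponent of the classic random rate network dx/dt = -x + g J tanh(x) by evolving a tangent vector under the linearized dynamics and renormalizing it each step (a Benettin-style method). Network size, integration step, and gain values are illustrative assumptions:

```python
import numpy as np

def largest_lyapunov(g, n=200, dt=0.05, steps=4000, seed=1):
    """Estimate the largest Lyapunov exponent of dx/dt = -x + g*J@tanh(x)
    for a random coupling matrix J with variance 1/n entries."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
    x = rng.normal(0.0, 1.0, n)
    d = rng.normal(0.0, 1.0, n)
    d *= 1e-6 / np.linalg.norm(d)        # small tangent perturbation
    log_growth = 0.0
    for _ in range(steps):
        phi = np.tanh(x)
        # Euler step of the tangent (linearized) flow, then of the state
        d = d + dt * (-d + g * J @ ((1.0 - phi**2) * d))
        x = x + dt * (-x + g * J @ phi)
        norm = np.linalg.norm(d)
        log_growth += np.log(norm / 1e-6)
        d *= 1e-6 / norm                 # renormalize to avoid overflow
    return log_growth / (steps * dt)

le_chaotic = largest_lyapunov(g=2.0)   # above the chaotic transition (g > 1)
le_stable = largest_lyapunov(g=0.5)    # below it: the fixed point is stable
```

The sign of this exponent separates the chaotic and stable regimes; the full Lyapunov spectrum mentioned in the abstract is obtained by evolving an orthonormalized set of such tangent vectors instead of one.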
How does a neuron decide when and where to make a synapse?
Precise synaptic connectivity is a prerequisite for the function of neural circuits, yet individual neurons, taken out of their developmental context, readily form unspecific synapses. How does genetically encoded brain wiring deal with this apparent contradiction? Brain wiring is a developmental growth process that is characterized not only by precision, but also by flexibility and robustness. As in any other growth process, cellular interactions are restricted in space and time. Correspondingly, molecular and cellular interactions are restricted to those that 'get to see' each other during development. This seminar will explore the question of how neurons decide when and where to make synapses, using the Drosophila visual system as a model. New findings reveal that pattern formation during growth and the kinetics of live neuronal interactions restrict synapse formation and partner choice for neurons that are not otherwise prevented from making incorrect synapses in this system. For example, cell biological mechanisms like autophagy as well as developmental temperature restrict inappropriate partner choice through a process of kinetic exclusion that critically contributes to wiring specificity. The seminar will explore these and other neuronal strategies for when and where to make synapses during developmental growth that contribute to precise, flexible, and robust outcomes in brain wiring.
Dissecting the neural circuits underlying prefrontal regulation of reward and threat responsivity in a primate
Gaining insight into the overlapping neural circuits that regulate positive and negative emotion is an important step towards understanding the heterogeneity in the aetiology of anxiety and depression and developing new treatment targets. Determining the core contributions of the functionally heterogeneous prefrontal cortex to these circuits is especially illuminating given its marked dysregulation in affective disorders. This presentation will review a series of studies in a New World monkey, the common marmoset, employing pathway-specific chemogenetics, neuroimaging, neuropharmacology, and behavioural and cardiovascular analysis to dissect out prefrontal involvement in the regulation of both positive and negative emotion. Highlights will include the profound shift of sensitivity away from reward and towards threat induced by localised activations within distinct regions of vmPFC, namely areas 25 and 14, as well as the opposing contributions of this region, compared to orbitofrontal and dorsolateral prefrontal cortex, to the overall responsivity to threat. Ongoing follow-up studies are identifying the distinct downstream pathways that mediate some of these effects as well as their differential sensitivity to rapidly acting antidepressants.
New tools for monitoring and manipulating neural circuits
Dr. Looger will present updates on a variety of molecular tools for studying & manipulating neural circuits & other preparations. Topics include genetically encoded calcium indicators (including the new ultra-fast jGCaMP8 variants), neurotransmitter sensors (improved versions for following glutamate, GABA, acetylcholine, serotonin), optogenetic effectors including the new “enhanced Magnets” dimerizers, AAV serotypes for retrograde labeling & altered tropism, probes for correlative light-electron microscopy, chemical gene switches, etc. He will make all his slides freely available - so don’t worry about hurriedly taking notes; instead focus on questions and ideas for collaboration. Please bring your suggestions for molecular tools that would be transformative for the field.
Effects of pathological Tau on hippocampal neuronal activity and spatial memory in ageing mice
The gradual accumulation of hyperphosphorylated forms of the Tau protein (pTau) in the human brain correlates with cognitive dysfunction and neurodegeneration. I will present our recent findings on the consequences of human pTau aggregation in the hippocampal formation of a mouse tauopathy model. We show that pTau preferentially accumulates in deep-layer pyramidal neurons, leading to their neurodegeneration. In aged but not younger mice, pTau spreads to oligodendrocytes. During ‘goal-directed’ navigation, we detect fewer high-firing pyramidal cells, but coupling to network oscillations is maintained in the remaining cells. The firing patterns of individually recorded and labelled pyramidal and GABAergic neurons are similar in transgenic and non-transgenic mice, as are network oscillations, suggesting intact neuronal coordination. This is consistent with a lack of pTau in subcortical brain areas that provide rhythmic input to the cortex. Spatial memory tests reveal a reduction in short-term familiarity of spatial cues but unimpaired spatial working and reference memory. These results suggest that preserved subcortical network mechanisms compensate for the widespread pTau aggregation in the hippocampal formation. I will also briefly discuss ideas on the subcortical origins of spatial memory and the concept of the cortex as a monitoring device.
From natural scene statistics to multisensory integration: experiments, models and applications
To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
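The canonical Bayesian computation behind such multisensory results is reliability-weighted fusion of Gaussian cues: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch with illustrative numbers (not from the experiments described):

```python
import math

def fuse(mu_a, sigma_a, mu_v, sigma_v):
    """Maximum-likelihood fusion of two independent Gaussian cues:
    a reliability-weighted average whose variance is below either cue's."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # weight of cue A
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = (sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2)
    return mu, math.sqrt(var)

# auditory cue: noisy (sigma 2); visual cue: reliable (sigma 1)
mu, sd = fuse(mu_a=10.0, sigma_a=2.0, mu_v=14.0, sigma_v=1.0)
# fused estimate is pulled toward the reliable visual cue: mu == 13.2
```

When the two cues disagree, this weighting is what produces ventriloquist-style illusions: the percept is captured by whichever modality is more reliable.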
Neural circuits for novel choices and for choice speed and accuracy changes in macaques
While most experimental tasks aim at isolating simple cognitive processes to study their neural bases, naturalistic behaviour is often complex and multidimensional. I will present two studies revealing previously uncharacterised neural circuits for decision-making in macaques. This was possible thanks to innovative experimental tasks eliciting sophisticated behaviour, bridging the human and non-human primate research traditions. Firstly, I will describe a specialised medial frontal circuit for novel choice in macaques. Traditionally, monkeys receive extensive training before neural data can be acquired, while a hallmark of human cognition is the ability to act in novel situations. I will show how this medial frontal circuit can combine the values of multiple attributes for each available novel item on-the-fly to enable efficient novel choices. This integration process is associated with a hexagonal symmetry pattern in the BOLD response, consistent with a grid-like representation of the space of all available options. We prove the causal role played by this circuit by showing that focussed transcranial ultrasound neuromodulation impairs optimal choice based on attribute integration and forces the subjects to default to a simpler heuristic decision strategy. Secondly, I will present an ongoing project addressing the neural mechanisms driving behaviour shifts during an evidence accumulation task that requires subjects to trade speed for accuracy. While perceptual decision-making in general has been thoroughly studied, both cognitively and neurally, the reasons why speed and/or accuracy are adjusted, and the associated neural mechanisms, have received little attention. We describe two orthogonal dimensions in which behaviour can vary (traditional speed-accuracy trade-off and efficiency) and we uncover independent neural circuits concerned with changes in strategy and fluctuations in the engagement level. 
The former involves the frontopolar cortex, while the latter is associated with the insula and a network of subcortical structures including the habenula.
Input and target-selective plasticity in sensory neocortex during learning
Behavioral experience shapes neural circuits, adding and subtracting connections between neurons that will ultimately control sensation and perception. We are using natural sensory experience to uncover basic principles of information processing in the cerebral cortex, with a focus on how sensory learning can selectively alter synaptic strength. I will discuss recent findings that differentiate reinforcement learning from sensory experience, showing rapid and selective plasticity of thalamic and inhibitory synapses within primary sensory cortex.
Synergy of color and motion vision for detecting approaching objects in Drosophila
I am working on color vision in Drosophila, identifying behaviors that involve color vision and understanding the neural circuits supporting them (Longden 2016). I have a long-term interest in understanding how neural computations operate reliably under changing circumstances, be they external changes in the sensory context, or internal changes of state such as hunger and locomotion. On internal state-modulation of sensory processing, I have shown how hunger alters visual motion processing in blowflies (Longden et al. 2014), and identified a role for octopamine in modulating motion vision during locomotion (Longden and Krapp 2009, 2010). On responses to external cues, I have shown how one kind of uncertainty in the motion of the visual scene is resolved by the fly (Saleem, Longden et al. 2012), and I have identified novel cells for processing translation-induced optic flow (Longden et al. 2017). I like working with colleagues who use different model systems, to get at principles of neural operation that might apply in many species (Ding et al. 2016, Dyakova et al. 2015). I like work motivated by computational principles - my background is computational neuroscience, with a PhD on models of memory formation in the hippocampus (Longden and Willshaw, 2007).
Choice History Bias As A Window Into Cognition And Neural Circuits
Deep inverse modeling reveals dynamic-dependent invariances in neural circuits mechanisms
Bernstein Conference 2024
Emergence of convolutional structure in neural circuits
COSYNE 2022
The smart image compression algorithm in the retina: recoding inputs in neural circuits
COSYNE 2022
Controlled generation of functional human neural circuits
COSYNE 2023
Encoding priors in recurrent neural circuits with dendritic nonlinearities
COSYNE 2023
Controlling Gradient Dynamics for Improved Temporal Learning in Neural Circuits
COSYNE 2025
Discovering plasticity rules that organize and maintain neural circuits
COSYNE 2025
Symmetries and continuous attractors in disordered neural circuits
COSYNE 2025
A bistable inhibitory optoGPCR for multiplexed optogenetic control of neural circuits
FENS Forum 2024
Can dynamic causal modelling (DCM) identify multistable neural circuits for decision-making?
FENS Forum 2024
Modelling Dravet syndrome using human induced pluripotent stem cell (hiPSC)-derived neural circuits
FENS Forum 2024
Newly synthetic synaptic connector repairs neural circuits damaged by spinal cord injury: recovery from chronic spinal cord injury.
FENS Forum 2024
Structural remodeling of neural circuits via engineered neuro-glial interactions
FENS Forum 2024
Ultrafast two-photon all-optical interrogation of neural circuits with acousto-optic deflectors
FENS Forum 2024
Meta-Learning the Inductive Biases of Simple Neural Circuits
Neuromatch 5