SISSA Neuroscience department
The Neuroscience Department of the International School for Advanced Studies (SISSA; https://www.sissa.it/research/neuroscience) invites expressions of interest from scientists from various fields of Neuroscience for multiple tenure-track positions with an anticipated start in 2025. Ongoing neuroscience research at SISSA includes cognitive neuroscience, computational and theoretical neuroscience, systems neuroscience, molecular and cellular research, as well as genomics and genetics. The Department intends to expand its activities in these fields and to strengthen cross-field interactions. Expressions of interest from scientists in any of these fields are welcome. The working and teaching language of SISSA is English. This is an equal opportunity career initiative, and we encourage applications from qualified women, racial and ethnic minorities, and persons with disabilities. Candidates should have a PhD in a relevant field and a proven record of research achievements. A clear potential to promote and lead research activities, and a specific interest in training and supervising PhD students, is essential. Interested colleagues should present an original and innovative plan for their independent future research. We encourage both proposals within existing fields at SISSA and novel ideas outside of those or spanning various topics and methodologies of Neuroscience. SISSA is an international school promoting basic and applied research in Neuroscience, Mathematics and Physics and dedicated to the training of PhD students. Lab space and other resources will be commensurate with the appointment. Shared facilities include cell culture rooms, viral vector facilities, confocal microscopes, animal facilities, molecular and biochemical facilities, human cognition labs with EEG, TMS, and eye tracking systems, a mechatronics workshop, and computing facilities. Agreements with national and international MRI scanning facilities are also in place.
SISSA encourages fruitful exchanges between neuroscientists and other researchers, including data scientists, physicists and mathematicians. Interested colleagues are invited to send a single pdf file including a full CV, a brief description of past and future research interests (up to 1,000 words), and the names of three referees to neuro.search@sissa.it. Selected candidates will be invited for an online or in-person seminar and 1-on-1 meetings in summer/autumn 2024. Deadline: A first evaluation round will consider all applications submitted before 15 May 2024. Later applications may be considered if suitable candidates have not yet been identified.
Prof. Amir Raz
Seeking a sleep expert with a Ph.D. in Neuroscience, Psychology, Biomedical Engineering, or a similar field.
Dr. Demian Battaglia/Dr. Romain Goutagny
The postdoc position is under the joint co-mentoring of Dr. Demian Battaglia and Dr. Romain Goutagny at the University of Strasbourg, France, in the Functional Systems Dynamics team (FunSy). The position starts as soon as possible and can last up to two years. The position is funded by the French ANR 'HippoComp' project, which focuses on the complexity of hippocampal oscillations and the hypothesis that such complexity can serve as a computational resource. The team performs electrophysiological recordings in the hippocampus and cortex during spatial navigation and memory tasks in mice (wild type and mutants developing various neuropathologies) and has access to vast datasets through local and international collaborations. The team uses a large spectrum of computational tools, ranging from time-series and network analyses, information theory, and machine learning to multi-scale computational modeling.
Bruno A. Olshausen
The Helen Wills Neuroscience Institute together with the Department of Statistics at UC Berkeley is conducting a faculty search in the area of computational or theoretical neuroscience. This is an ideal opportunity for computational/theoretical neuroscientists who are engaged in both model and theory development and collaborative work with experimentalists.
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is currently recruiting investigators with the passion to create knowledge, pursue bold visions, and challenge canonical thinking as we expand into our new 600,000 sq ft purpose-built neurosciences research building. We are now seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department also has world-class research strengths in systems, circuits and behavior, and cellular and molecular neuroscience, using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. We are particularly interested in outstanding researchers who are both creative and collaborative.
Burcu Ayşen Ürgen
Bilkent University invites applications for multiple open-rank faculty positions in the Department of Neuroscience. The department plans to expand research activities in certain focus areas and accordingly seeks applications from promising or established scholars who have worked in the following or related fields: cellular/molecular/developmental neuroscience, with a strong emphasis on research involving animal models; and systems/cognitive/computational neuroscience, with a strong emphasis on research involving emerging data-driven approaches, including artificial intelligence, robotics, brain-machine interfaces, virtual reality, computational imaging, and theoretical modeling. Candidates with a research focus in those areas whose research has a neuroimaging component are particularly encouraged to apply. The Department’s interdisciplinary Graduate Program in Neuroscience, which offers Master's and PhD degrees, was established in 2014. The department is affiliated with Bilkent’s Aysel Sabuncu Brain Research Center (ASBAM) and the National Magnetic Resonance Research Center (UMRAM). Faculty affiliated with the department have access to state-of-the-art research facilities in these centers, including animal facilities, cellular/molecular laboratory infrastructure, psychophysics laboratories, eye-tracking laboratories, EEG laboratories, a human-robot interaction laboratory, and two MRI scanners (3T and 1.5T).
Jorge Jaramillo
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. Appointees will join as Grossman Center Postdoctoral Fellows, with the freedom to work with any of its faculty members. We especially welcome applicants who develop computational models and machine learning analysis methods to study the brain at the circuit, systems, or cognitive levels. The current Grossman Center faculty members are Brent Doiron, Jorge Jaramillo, and Ramon Nogueira. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Department of Neurobiology, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and the Data Science Institute. The Grossman Center is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.
Haim Sompolinsky, Kenneth Blum
The Swartz Program at Harvard University seeks applicants for a postdoctoral fellowship in theoretical and computational neuroscience. Funded by a grant from the Swartz Foundation, the fellowship is available at Harvard University with a start date in the summer or fall of 2024. Postdocs join a vibrant group of theoretical and experimental neuroscientists, plus theorists in allied fields, at Harvard’s Center for Brain Science. The Center for Brain Science includes faculty doing research on a wide variety of topics, including neural mechanisms of rodent learning, decision-making, and sex-specific and social behaviors; reinforcement learning in rodents and humans; human motor control; behavioral and fMRI studies of human cognition; circuit mechanisms of learning and behavior in worms, larval flies, and larval zebrafish; circuit mechanisms of individual differences in flies and humans; rodent and fly olfaction; inhibitory circuit development; retinal circuits; and large-scale reconstruction of detailed brain circuitry.
Lyle Muller
This position will involve collaboration between our laboratory and researchers with expertise in advanced methods of brain imaging (Mark Schnitzer, Stanford), neuroengineering (Duygu Kuzum, UCSD), theoretical neuroscience (Todd Coleman, Stanford), and neurophysiology of visual perception (John Reynolds, Salk Institute for Biological Studies). In collaboration with this multi-disciplinary team, this researcher will apply new signal processing techniques for multisite spatiotemporal data to understand cortical dynamics during visual perception. This project will also involve development of spiking network models to understand the mechanisms underlying observed activity patterns. The project may include intermittent travel between labs to present results and facilitate collaborative work.
Carsten Mehring
The interdisciplinary MSc program in Neuroscience at the University of Freiburg, Germany, provides theoretical and practical training in neuroscience, covering both the foundations and latest research in the field. It is taught by lecturers from an international scientific community from multiple faculties and neuroscience research centres. The modular course structure caters to the specific backgrounds and research interests of each individual student with specialisations in neural circuits and behavior, computational neuroscience and neurotechnology. All courses are taught in English.
N/A
The Neuroscience Department of the International School for Advanced Studies (SISSA) invites expressions of interest from scientists for multiple tenure-track positions in various fields of Neuroscience with anticipated start in 2025. The Department aims to enhance its activities in cognitive neuroscience, computational and theoretical neuroscience, systems neuroscience, molecular and cellular research, genomics, and genetics, and to strengthen cross-field interactions. The working and teaching language at SISSA is English. This is an equal opportunity career initiative.
N/A
The postdoctoral researcher will conduct research activities in modelling and simulation of reward-modulated prosocial behavior and decision-making. The position is part of a larger effort to uncover the computational and mechanistic bases of prosociality and empathy at the behavioral and circuit levels. The role involves working at the interface between experimental data (animal behavior and electrophysiology) and theoretical modelling, with an emphasis on Multi-Agent Reinforcement Learning and neural population dynamics.
Paul Cisek
Doctoral studies in computational neuroscience, focusing on the neural mechanisms of embodied decision-making and action planning in humans and non-human primates. The research involves computational models of the nervous system integrated with behavioral experiments, transcranial magnetic stimulation, and multi-electrode recording in multiple regions of the cerebral cortex and basal ganglia. New projects will use virtual reality to study naturalistic behavior and develop theoretical models of distributed cortical and subcortical circuits.
Jorge Jaramillo
We are looking for an outstanding applicant to develop large-scale circuit models for decision making within a collaborative consortium that includes the Allen Institute for Neural Dynamics, New York University, and the University of Chicago. This ambitious NIH-funded project requires the creativity and expertise to integrate multimodal data sets (e.g., connectivity, large-scale neural recordings, behavior) into a comprehensive modeling framework. The successful applicant will join Jorge Jaramillo’s Distributed Neural Dynamics and Control Lab at the Grossman Center at the University of Chicago. Throughout the course of the postdoctoral training, there will be opportunities to visit the other sites in Seattle (Karel Svoboda) and New York (Adam Carter, Xiao-Jing Wang) for additional training and collaboration. Appointees will join as Grossman Center Postdoctoral Fellows at the University of Chicago and will have access to state-of-the-art facilities and additional opportunities for collaboration with exceptional experimental labs within the Department of Neurobiology, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and the Data Science Institute. Postdoctoral fellows will also have the opportunity to work on additional projects with other Grossman Center faculty members.
Alberto Bacci
The successful candidate will work on inhibitory circuits of the prefrontal cortex of mice. In particular, they will study the properties and plasticity of synapses connecting a rich diversity of prefrontal cortical neuron subtypes. The candidate will also perform and analyze electrophysiological recordings in vivo, using high-density Neuropixels probes. This project is part of an ERA-Net NEURON international consortium and focuses on the rich diversity of GABAergic interneurons and their impact on the functional states of prefrontal cortical networks in healthy and diseased states.
Maximilian Riesenhuber, PhD
We have an opening for a postdoc position investigating the neural bases of deep multimodal learning in the brain. The project involves EEG and laminar 7T imaging (in collaboration with Dr. Peter Bandettini’s lab at NIMH) to test computational hypotheses for how the brain learns multimodal concept representations. Responsibilities of the postdoc include running EEG and fMRI experiments, data analysis and manuscript preparation. Georgetown University has a vibrant neuroscience community with over fifty labs participating in the Interdisciplinary Program in Neuroscience and a number of relevant research centers, including the new Center for Neuroengineering (cne.georgetown.edu). Interested candidates should submit a CV, a brief (1 page) statement of research interests, representative reprints, and the names and contact information of three references to Interfolio via https://apply.interfolio.com/148520. Faxed, emailed, or mailed applications will not be accepted. Questions about the position can be directed to Maximilian Riesenhuber (mr287@georgetown.edu).
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims to answer a nearly century-old question in Neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with those of analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.
N/A
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. We especially welcome applicants who develop mathematical approaches, computational models, and machine learning methods to study the brain at the circuit, systems, or cognitive levels. The current Grossman Center faculty are: Brent Doiron, whose lab investigates how the cellular and synaptic circuitry of neuronal circuits supports the complex dynamics and computations routinely observed in the brain; Jorge Jaramillo, whose lab investigates how subcortical structures interact with cortical circuits to subserve cognitive processes such as memory, attention, and decision making; Ramon Nogueira, whose lab investigates the geometry of representations as the computational support of cognitive processes such as abstraction in noisy artificial and biological neural networks; Marcella Noorman, whose lab investigates how properties of synapses, neurons, and circuits shape the neural dynamics that enable flexible and efficient computation; and Samuel Muscinelli, whose lab studies how the anatomy of brain circuits both governs learning and adapts to it, combining analytical theory, machine learning, and data analysis in close collaboration with experimentalists. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Neuroscience Institute, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and UChicago’s Data Science Institute.
The Neuroscience Institute is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.
Taro Toyoizumi, PhD
The RIKEN Center for Brain Science (CBS) was launched in April 2018, building on the strong 20-year foundation of its predecessor, the Brain Science Institute (BSI). CBS aims to meet society’s ever-growing expectations for brain research. We are currently seeking outstanding neuroscientists for Team Leader positions (junior principal investigators), though applications from internationally established neuroscientists may also be considered. To promote diversity, a strength of CBS, we proactively recruit women when candidates' research skills are deemed equal. At RIKEN CBS, Team Leaders have full intellectual independence, generous internal funds including a highly competitive start-up package, and access to ample communal facilities in a collaborative environment. Successful candidates for the Team Leader position must have demonstrated the ability to develop an original, independent and internationally competitive research program. We encourage applications from all disciplines of neuroscience, particularly in (1) research areas of neurological/psychiatric disorders and (2) theoretical and computational neuroscience. Successful candidates will hold a research management position and, as the head of a laboratory, will provide leadership and guidance to laboratory members to conduct research.
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department also has world-class research strengths in systems, circuits and behavior, cellular and molecular neuroscience using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. The Department’s focus on fundamental neuroscience, outstanding research support facilities, and the depth, breadth and collegiality of our culture provide an exceptional environment to launch your independent research program.
Ann Kennedy
The Kennedy lab is recruiting for multiple funded postdoctoral positions in theoretical and computational neuroscience, following our recent lab move to Scripps Research in San Diego, CA! Ongoing projects in the lab span reservoir computing with heterogeneous cell types; reinforcement learning and control-theoretic analysis of complex behavior; neuromechanical whole-organism modeling; diffusion models for imitation learning and forecasting of mouse social interactions; and joint analysis and modeling of the effects of internal states on neural, vocalization, and behavioral data. Additional NIH and foundation funding supports projects characterizing the progression of behavioral phenotypes in Parkinson’s disease, modeling cellular and circuit mechanisms underlying internal-state-dependent changes in neural population dynamics, and characterizing neural correlates of social relationships across species. Projects are flexible and can be tailored to applicants’ research and training goals, and there are abundant opportunities for new collaboration with local experimental groups. San Diego has a fantastic research community and very high quality of life. Our campus is located at the Pacific coast, at the northern edge of UCSD and not far from the Salk Institute. Postdoctoral stipends are well above NIH guidelines and include a relocation bonus, with research professorship positions available for qualified applicants.
Convergent large-scale network and local vulnerabilities underlie brain atrophy across Parkinson’s disease stages
AutoMIND: Deep inverse models for revealing neural circuit invariances
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will discuss recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
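The evolve-programs-to-predict-choices idea can be sketched in miniature. Everything below is an illustrative assumption, not the actual CogFunSearch implementation: the real method uses an LLM to propose edits to candidate Python programs, whereas this toy simply mutates a single learning-rate parameter of a Q-learning-style candidate and selects by predictive accuracy.

```python
import random

def make_program(alpha):
    """A candidate 'program': predict the next choice of a 2-armed
    bandit agent by replaying its history through Q-learning."""
    def program(choices, rewards):
        q = [0.0, 0.0]
        for c, r in zip(choices, rewards):
            q[c] += alpha * (r - q[c])   # delta-rule value update
        return 0 if q[0] >= q[1] else 1  # greedy prediction
    program.alpha = alpha
    return program

def score(program, dataset):
    """Fraction of held-out next-choices the program predicts correctly."""
    correct = sum(program(ch, rw) == nxt for ch, rw, nxt in dataset)
    return correct / len(dataset)

def evolve(dataset, generations=20, pop_size=10, seed=0):
    """Evolutionary loop: keep the best half, mutate it to refill the pool."""
    rng = random.Random(seed)
    pop = [make_program(rng.uniform(0.01, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: score(p, dataset), reverse=True)
        elite = pop[: pop_size // 2]
        # Mutation step: in CogFunSearch an LLM rewrites the program text;
        # here we only jitter the learning rate.
        pop = elite + [
            make_program(min(1.0, max(0.01, p.alpha + rng.gauss(0, 0.1))))
            for p in elite
        ]
    return max(pop, key=lambda p: score(p, dataset))
```

The payoff of searching over programs rather than weights is that the winning candidate is directly readable: its source (here, just `alpha` and the delta rule) is the behavioral model.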
Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia
Brain Emulation Challenge Workshop
The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, as applied to the forthcoming Brain Emulation Challenge.
Predicting traveling waves: a new mathematical technique to link the structure of a network to the specific patterns of neural activity
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
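To make concrete why the comparison measure is itself a modeling decision, here is a sketch of two widely used measures computed on (samples x features) activity matrices. Both measures are standard in the literature, but their use here is purely illustrative of the abstract's point, not the paper's actual framework: they have different invariances, so they can rank candidate models differently.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activity matrices.
    Symmetric; invariant to orthogonal transforms and isotropic scaling."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Xc.T @ Yc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den

def regression_score(X, Y):
    """Fraction of variance in Y explained by a least-squares linear map
    from X. Asymmetric; invariant to any invertible transform of X,
    but not of Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
    resid = Yc - Xc @ beta
    return 1.0 - (resid ** 2).sum() / (Yc ** 2).sum()
```

Because the two measures forgive different transformations between systems, model A can beat model B under one measure and lose under the other, which is exactly the kind of methodology-dependent inversion the abstract describes.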
Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data
There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activities remain hard to visually discriminate across recording channels, which limits the success of intracranial visual investigations. In this presentation, I present methods which quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods for SOZ localisation in epilepsy, providing insights for future research in this area.
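The quantify-then-rank pipeline behind such methods can be sketched as follows. The specific features (line length, variance, high-frequency power fraction) and the z-scored composite ranking are illustrative assumptions of the kind commonly used for channel-level abnormality detection, not the talk's actual feature set or classifier.

```python
import numpy as np

def channel_features(x, fs=256.0):
    """Simple interpretable features of one intracranial channel."""
    x = np.asarray(x, dtype=float)
    line_length = np.abs(np.diff(x)).sum()      # tracks amplitude/frequency content
    variance = x.var()
    # fraction of spectral power above 30 Hz (illustrative cutoff)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    high_frac = power[freqs > 30.0].sum() / power.sum()
    return np.array([line_length, variance, high_frac])

def rank_channels(recordings, fs=256.0):
    """Rank channels by a z-scored composite of the features; the most
    'abnormal' channels are candidate SOZ contacts."""
    F = np.array([channel_features(x, fs) for x in recordings])
    Z = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)
    return np.argsort(-Z.sum(axis=1))
```

In a full explainable-ML pipeline the features would feed a classifier trained against clinically labelled SOZ contacts, with feature attributions reported per channel; the interpretability comes from the features being physiologically meaningful quantities rather than raw waveforms.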
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions
Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.
Tracking subjects' strategies in behavioural choice experiments at trial resolution
Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects' use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
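As a rough illustration of this kind of trial-resolution Bayesian tracking, one can maintain a Beta-Bernoulli estimate of P(strategy) with exponential decay of old evidence. This is a sketch only; the decay parameterization and values are assumptions, not the talk's exact model:

```python
def track_strategy(choice_matches, gamma=0.9, prior=(1.0, 1.0)):
    """Posterior mean P(strategy) after each trial.

    choice_matches: 1 if the subject's choice matched the strategy's
    prediction on that trial, else 0.  gamma < 1 decays old evidence so
    the estimate can follow changes in behaviour (illustrative choice).
    """
    a, b = prior
    probs = []
    for m in choice_matches:
        a = gamma * a + m          # decayed successes + new evidence
        b = gamma * b + (1 - m)    # decayed failures + new evidence
        probs.append(a / (a + b))
    return probs

# exploration early on, then consistent use of the target rule
p = track_strategy([0, 1, 0, 1, 1, 1, 1, 1, 1, 1])
```

The per-trial update is O(1), which is what makes this family of estimators suitable for the real-time, closed-loop use mentioned above.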
Bio-realistic multiscale modeling of cortical circuits
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of different neuron types in V1.
Diffuse coupling in the brain - A temperature dial for computation
The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.
Brain Connectivity Workshop
Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.
Cognitive Computational Neuroscience 2023
CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.
Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing
The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.
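The phase singularity centres mentioned here are commonly located by computing the winding number of the phase field around each grid plaquette; a minimal sketch on a synthetic spiral (not the authors' pipeline):

```python
import numpy as np

def winding_number(phase, i, j):
    """Net phase change around the 2x2 plaquette at (i, j), in turns.

    A value of +/-1 marks a phase singularity (a candidate spiral centre).
    """
    loop = [phase[i, j], phase[i, j + 1], phase[i + 1, j + 1],
            phase[i + 1, j], phase[i, j]]
    # wrap each successive phase difference into (-pi, pi] before summing
    total = sum(np.angle(np.exp(1j * (b - a))) for a, b in zip(loop, loop[1:]))
    return int(round(total / (2 * np.pi)))

# synthetic spiral wave: phase rotates once around a centre at (4.5, 4.5)
y, x = np.mgrid[0:10, 0:10]
phase = np.arctan2(y - 4.5, x - 4.5)
print(winding_number(phase, 4, 4))   # plaquette enclosing the centre -> 1
print(winding_number(phase, 0, 0))   # plaquette far from the centre -> 0
```

The sign of the winding number distinguishes clockwise from anticlockwise spirals, which is one of the task-relevant properties (rotational direction) referred to above.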
Bernstein Student Workshop Series
The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike at conventional workshops, in each session a talented junior scientist first gives a tutorial about a specific theoretical or experimental technique, and then gives a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.
Mapping learning and decision-making algorithms onto brain circuitry
In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, are more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation and computation by avalanche-supporting networks are of interest to the fields of physics, computer science (computation theory as well as statistical or machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), computed by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their function probabilities, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience relating to optimal information processing.
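The role of antagonism can be illustrated with the textbook observation that threshold units need inhibition to compute linearly non-separable Boolean functions such as XOR; a minimal two-layer cascade motif (all weights and thresholds are illustrative, not taken from this work):

```python
def threshold_unit(inputs, weights, theta):
    """Fire (1) if the weighted input sum meets the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def xor_cascade(x1, x2):
    # layer 1: an OR-like unit and an AND-like unit over the same inputs
    h_or  = threshold_unit((x1, x2), (1, 1), theta=1)
    h_and = threshold_unit((x1, x2), (1, 1), theta=2)
    # layer 2: the AND unit projects antagonistically (weight -1),
    # vetoing the OR unit exactly when both inputs are active
    return threshold_unit((h_or, h_and), (1, -1), theta=1)
```

Without the inhibitory weight in the second layer, no single threshold unit over (x1, x2) can realize XOR, which is why the fraction of inhibition matters for the repertoire of functions a cascade can compute.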
ISAM-NIG Webinars
Optimized Non-Invasive Brain Stimulation for Addiction Treatment
Invariant neural subspaces maintained by feedback modulation
This session is a double feature of the Cologne Theoretical Neuroscience Forum and the Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience of the Jülich Research Center.
The Learning Salon
In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, including individuals with diverse perspectives and backgrounds, so we can all learn from one another.
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
The evolution of computation in the brain: Insights from studying the retina
The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of lights as the natural input, to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural makeup in extant species. Animals as diverse as eagles and lampreys share the same retinal makeup of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retinal neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?
Synergy of color and motion vision for detecting approaching objects in Drosophila
I am working on color vision in Drosophila, identifying behaviors that involve color vision and understanding the neural circuits supporting them (Longden 2016). I have a long-term interest in understanding how neural computations operate reliably under changing circumstances, be they external changes in the sensory context, or internal changes of state such as hunger and locomotion. On internal state-modulation of sensory processing, I have shown how hunger alters visual motion processing in blowflies (Longden et al. 2014), and identified a role for octopamine in modulating motion vision during locomotion (Longden and Krapp 2009, 2010). On responses to external cues, I have shown how one kind of uncertainty in the motion of the visual scene is resolved by the fly (Saleem, Longden et al. 2012), and I have identified novel cells for processing translation-induced optic flow (Longden et al. 2017). I like working with colleagues who use different model systems, to get at principles of neural operation that might apply in many species (Ding et al. 2016, Dyakova et al. 2015). I like work motivated by computational principles - my background is computational neuroscience, with a PhD on models of memory formation in the hippocampus (Longden and Willshaw, 2007).
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli
Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target to study the assumption of linear spatial integration, yet their location, buried in the middle of the retina, makes it challenging to measure their neural activity. Here, I present bipolar cell data where I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells.
Overall, I demonstrate that simple linear assumptions do not always apply and more complex models are needed to describe what the retina “tells” the brain.
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates, and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes to synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions.
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
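A toy sketch of a calcium-threshold plasticity rule of this general family (in the spirit of Graupner-Brunel-type models; every parameter and the supralinear coincidence term below are illustrative assumptions, not the fitted rule from this work):

```python
import random

def simulate_synapse(rate_pre, rate_post, T=100.0, dt=0.001, seed=0,
                     c_pre=0.4, c_post=0.4, c_nl=1.0, tau_ca=0.02,
                     theta_d=0.7, theta_p=1.3, eta=0.01):
    """Evolve one synaptic weight under a calcium-threshold rule.

    Calcium jumps on pre and post spikes, with a supralinear bonus (c_nl)
    when they coincide within a time step; the weight potentiates while
    calcium exceeds theta_p and depresses between theta_d and theta_p.
    """
    rng = random.Random(seed)
    ca, w = 0.0, 0.5
    for _ in range(int(T / dt)):
        pre = rng.random() < rate_pre * dt    # Poisson presynaptic spike
        post = rng.random() < rate_post * dt  # Poisson postsynaptic spike
        ca += c_pre * pre + c_post * post + c_nl * (pre and post)
        ca -= dt / tau_ca * ca                # calcium decays between events
        if ca > theta_p:
            w += eta * dt * (1 - w)           # LTP, bounded above by 1
        elif ca > theta_d:
            w -= eta * dt * w                 # LTD, bounded below by 0
    return w
```

The key nonlinearity is that uncorrelated pre- or postsynaptic spiking alone produces calcium transients that mostly stay below threshold, while coincident spikes cross it, which is what ties efficacy changes to coactivation statistics.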
NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics
Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties, which have been shown to accurately model the response of biological neurons, while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo.
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
NMC4 Panel: The Contribution of Models vs Data
NMC4 Panel: NMC Around the Globe
For the first time, we are holding an NMC Around the Globe session, a panel of computational neuroscientists working on different continents who will discuss their challenges and milestones in doing science and training researchers in their home countries. We hope that our panelists can share their barriers, what they define as accomplishments, and how they would like the future of computational neuroscience to evolve locally and internationally with our diverse NMC audience.
Spontaneous activity competes with externally evoked responses in sensory cortex
The interaction between spontaneously and externally evoked neuronal activity is fundamental for a functional brain. Increasing evidence suggests that bursts of high-power oscillations in the 15-30 Hz beta-band represent activation of resting-state networks and can mask perception of external cues. Yet a real-time demonstration of the effect of beta-power modulation on perception is missing, and little is known about the underlying mechanism. In this talk I will present the methods we developed to fill this gap, together with our recent results. We used a closed-loop stimulus-intensity adjustment system based on online burst-occupancy analyses in rats involved in a forepaw vibrotactile detection task. We found that the masking influence of burst occupancy on perception can be counterbalanced in real time by adjusting the vibration amplitude. Offline analysis of firing rates and local field potentials across cortical layers and frequency bands confirmed that beta-power in the somatosensory cortex anticorrelated with sensory evoked responses. Mechanistically, bursts in all bands were accompanied by transient synchronization of cell assemblies, but only beta-bursts were followed by a reduction of firing rate. Our closed-loop approach reveals that spontaneous beta-bursts reflect a dynamic state that competes with external stimuli.