Topic: World Wide

computational neuroscience

60 Seminars · 25 Positions · 3 Conferences · 2 ePosters


Position

IMPRS for Brain & Behavior

research center caesar
Bonn, Germany
Jan 14, 2026

Apply to our fully funded, international PhD program in the Max Planck Society! IMPRS for Brain & Behavior is a PhD program in Bonn, Germany that offers a competitive, world-class PhD training and research program in the field of neuroethology. IMPRS for Brain & Behavior is a collaboration between research center caesar (a neuroethology institute of the Max Planck Society), the University of Bonn, and the German Center for Neurodegenerative Diseases (DZNE) in Bonn. The Projects: 20 labs with an enormous variety of research projects are seeking outstanding PhD candidates to join their research. See our website (https://imprs-brain-behavior.mpg.de/faculty_members) for further information on our faculty and possible doctoral projects. Successful candidates will work in a young, dynamic, interdisciplinary, and international environment, embedded in the local scientific communities in Bonn, Germany.

Position: Computational Neuroscience

Prof. John Murray

Yale University
New Haven, CT, USA
Jan 14, 2026

The Swartz Program for Theoretical Neuroscience at Yale University invites applications for up to two postdoctoral positions in Theoretical and Computational Neuroscience, with a flexible start date in 2022. Competitive candidates include those with a strong quantitative background who wish to gain neuroscience research experience. We especially encourage candidates with an interest in collaborating directly with experimental neuroscientists. The candidates will be expected to perform theoretical/computational studies relevant to one or more laboratories of the Swartz Program at Yale and will be encouraged to participate in an expanding quantitative biology environment at Yale. More details here: https://neurojobs.sfn.org/job/31363/postdoctoral-swartz-fellowship-positions-in-theoretical-and-computational-neuroscience-at-yale/

Position: Computational Neuroscience

Faculty of Engineering, University of Bristol

University of Bristol
Bristol, United Kingdom
Jan 14, 2026

The Department of Computer Science is seeking to appoint a Lecturer/Senior Lecturer in the area of computational neuroscience, to join the Neural Computation Research Group. The appointee will be expected to take an active role in providing high-quality, innovative teaching and to perform internationally leading research. Interest in developing innovative ways of integrating teaching, research and technology development will be especially welcomed.

Position

Eugenio Piasini

International School for Advanced Studies (SISSA)
Trieste
Jan 14, 2026

Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2025. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community are diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, neuronal foundations of perceptual experience and learning in various sensory modalities, motivation and intelligence, language, and neural computation. Our research is highly interdisciplinary; our approaches include behavioral, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline for the spring admission round is 20 March 2025 at 1pm CET. Please apply here, and see the admission procedure page for more information. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.

Position: Neuroscience

Prof. Amir Raz

Chapman University Brain Institute
Irvine, CA, USA
Jan 14, 2026

We seek a sleep expert with a Ph.D. in Neuroscience, Psychology, Biomedical Engineering, or a similar field.

Position: Computational Neuroscience

Jim Magnuson

Basque Center on Cognition, Brain and Language
San Sebastián, Spain
Jan 14, 2026

3-year Ph.D. project, funded by a la Caixa Foundation fellowship. Theme: Computational and neural bases of bilingualism. Goal: develop a model of bilingual development in the complementary learning systems framework. Direct link to position: https://finder.lacaixafellowships.org/finder?position=4739 Detailed Description: We seek a Ph.D. student with a strong background (and a master's degree) in a relevant domain (a cognitive, biological, or engineering field) and some experience with programming, data science, or computational modeling. The successful candidate will be involved in developing computational models and/or running behavioral and neuroimaging studies, collecting and analyzing data, and disseminating the results at scientific conferences (presentations/posters) and in peer-reviewed journals. The selected candidate will develop advanced technical and analytical skills and will have the opportunity to develop original experiments under the supervisors’ guidance. Applicants should demonstrate a keen interest in the key areas of cognitive neuroscience that are relevant for the research, coupled with strong computational skills (e.g., Python, Matlab, R). Experience with neuroscience techniques (e.g., MEG, EEG, MRI) and with analysis of neuroimaging data is desirable but not essential. A committed motivation to learning computational modelling and advanced analysis tools is a must, as well as the ability to acquire new skills and knowledge, and to work both independently and as part of a multidisciplinary team. A good command of English (the working language of the BCBL) is required; knowledge of Spanish and/or Basque is an advantage but not required. The candidate will enrol as a PhD student at the University of the Basque Country (UPV/EHU) and is expected to complete the PhD programme within 36 months.
Training in complementary skills will be provided during the fellowship, including communication and research dissemination, IT and programming skills, ethics and professional conduct. The BCBL also provides support with living and welfare issues.

Position: Computational Neuroscience

Dr. Jorge Mejias

University of Amsterdam
Amsterdam, the Netherlands
Jan 14, 2026

The Computational Neuroscience Lab, recently established within the Cognitive and Systems Neuroscience Group at the University of Amsterdam (UvA), is seeking a highly qualified and motivated candidate for a postdoctoral position in computational neuroscience, under the project 'Translational biomarkers for compulsivity across large-scale brain networks'. The aim of this project is to understand the neurobiological roots of compulsivity, by identifying the neural signatures of compulsive behavior in cortical and subcortical brain regions. A combination of experimental and computational work will be used, with the presently advertised position being associated with the computational modeling part. You will develop and analyze computational models of large-scale brain networks of rodents and humans, following previous work in macaques (Mejias et al., Science Advances 2016). These new models will explicitly replicate neural dynamics underlying compulsive behavior, and will be constrained by existing anatomical, electrophysiological and clinical data from the experimental partners of the project. You will be supervised by Dr. Jorge Mejias, head of the Computational Neuroscience Lab, and the work will be carried out in close collaboration with Drs. Ingo Willuhn and Tara Arbab, from the Netherlands Institute for Neuroscience. You will also closely collaborate with other computational neuroscientists, experimental neuroscientists, clinicians, theoreticians, and machine learning experts at the UvA. 
You are expected:
- to perform research on computational neuroscience;
- to review relevant literature and acquire knowledge on neurobiology, compulsivity and computational neuroscience;
- to build biologically realistic multi-area computational models of cortical circuits, and compare their predictions with experimental findings;
- to collaborate and discuss regularly with other researchers in the project;
- to take part in teaching efforts of the Computational Neuroscience Lab, including supervision of bachelor's and master's students;
- to write scientific manuscripts and present your results at meetings and conferences.
Our offer: a temporary contract for 38 hours a week, preferably starting on 1 November 2021. The duration of the contract is 18 months (with a two-month probation period). An extension of the contract is possible, contingent on the candidate's performance and further availability of funds. The salary, depending on relevant work experience before the beginning of the employment contract, will be €2,836 to €4,474 (scale 10) gross per month, based on a full-time contract (38 hours a week). This is exclusive of the 8% holiday allowance and 8.3% end-of-year bonus. A favorable tax agreement, the ‘30% ruling’, may apply to non-Dutch applicants. The Collective Labor Agreement of Dutch Universities is applicable.

Position

Dr. Scott Rich

Krembil Brain Institute
Toronto, Ontario
Jan 14, 2026

The Neuron to Brain Lab is recruiting a Master’s student to contribute to our computational investigation of the role of heterogeneity in seizure resilience. This project will be directly mentored by Dr. Scott Rich, a senior postdoc under the supervision of Dr. Taufik Valiante and leader of the lab’s Computational Pillar. The project will focus on constructing a cortical neural network containing multiple populations of inhibitory interneurons, and using this network to assess how heterogeneity amongst inhibitory cells might uniquely contribute to seizure resilience. This project will utilize the lab’s unique access to electrophysiological data from live human cortical tissue to constrain neuron models, as well as a wealth of collaborations between the lab and other computational neuroscientists at the Krembil Brain Institute and the Krembil Centre for Neuroinformatics.

Position

Rava Azeredo da Silveira

ENS, Paris and IOB, University of Basel
Paris (France) and/or Basel (Switzerland)
Jan 14, 2026

Several postdoctoral openings in the lab of Rava Azeredo da Silveira (Paris & Basel). The lab of Rava Azeredo da Silveira invites applications for Postdoctoral Researcher positions at ENS, Paris, and IOB, an associated institute of the University of Basel. Research questions will be chosen from a broad range of topics in theoretical/computational neuroscience and cognitive science (see the description of the lab’s activity, below). One of the postdoc positions to be filled in Basel will be part of a collaborative framework with Michael Woodford (Columbia University) and will involve projects relating the study of decision making to models of perception and memory. Candidates with backgrounds in mathematics, statistics, artificial intelligence, physics, computer science, engineering, biology, and psychology are welcome. Experience with data analysis and proficiency with numerical methods, in addition to familiarity with neuroscience topics and mathematical and statistical methods, are desirable. Equally desirable are a spirit of intellectual adventure, eagerness, and drive. The positions will come with highly competitive work conditions and salaries. Application deadline: Applications will be reviewed starting on 1 November 2020. How to apply: Please send the following information in a single PDF to silveira@iob.ch: 1. letter of motivation; 2. statement of research interests, limited to two pages; 3. curriculum vitæ including a list of publications; 4. any relevant publications that you wish to showcase. In addition, please arrange for three letters of recommendation to be sent to the same email address. In all email correspondence, please include the mention “APPLICATION-POSTDOC” in the subject header; otherwise the application will not be considered.
* ENS, together with a number of neighboring institutions (College de France, Institut Curie, ESPCI, Sorbonne Université, and Institut Pasteur), offers a rich scientific and intellectual environment, with a strong representation in computational neuroscience and related fields. * IOB is a research institute combining basic and clinical research. Its mission is to drive innovations in understanding vision and its diseases and develop new therapies for vision loss. IOB is an equal-opportunity employer with family-friendly work policies. * The Silveira Lab focuses on a range of topics, which, however, are tied together through a central question: How does the brain represent and manipulate information? Among the more concrete approaches to this question, the lab analyses and models neural activity in circuits that can be identified, recorded from, and perturbed experimentally, such as visual neural circuits in the retina and the cortex. Establishing links between physiological specificity and the structure of neural activity yields an understanding of circuits as building blocks of cerebral information processing. On a more abstract level, the lab investigates the representation of information in populations of neurons, from a statistical and algorithmic—rather than mechanistic—point of view, through theories of coding and data analyses. These studies aim at understanding the statistical nature of high-dimensional neural activity in different conditions, and how this serves to encode and process information from the sensory world. In the context of cognitive studies, the lab investigates mental processes such as inference, learning, and decision-making, through both theoretical developments and behavioral experiments. A particular focus is the study of neural constraints and limitations and, further, their impact on mental processes. 
Neural limitations impinge on the structure and variability of mental representations, which in turn inform the cognitive algorithms that produce behavior. The lab explores the nature of neural limitations, mental representations, and cognitive algorithms, and their interrelations.

Position

SueYeon Chung, Center for Computational Neuroscience, Flatiron Institute

Center for Computational Neuroscience, Flatiron Institute, Simons Foundation
New York, New York
Jan 14, 2026

Flatiron Research Fellow (Postdoctoral Fellow), NeuroAI and Geometric Data Analysis. Description: Applications are invited for Flatiron Research Fellowships (FRF) in the NeuroAI and Geometric Data Analysis Group (SueYeon Chung, PI) at the Center for Computational Neuroscience at the Flatiron Institute of the Simons Foundation, whose focus is on understanding computation in the brain and artificial neural networks by: (1) analyzing the geometries underlying neural or feature representations and how they embed and transfer information, and (2) developing neural network models and learning rules guided by neuroscience. To do this, the group utilizes analytical methods from statistical physics, machine learning theory, and high-dimensional statistics and geometry. The CCN FRF program offers the opportunity for postdoctoral research in areas that have strong synergy with one or more of the existing research groups at CCN or other centers at the Flatiron Institute. In addition to carrying out an independent research program, Flatiron Research Fellows are expected to disseminate their results through scientific presentations, publications, and software releases; collaborate with other members of the CCN or Flatiron Institute; and participate in the scientific life of the CCN and Flatiron Institute by attending seminars, colloquia, and group meetings. Flatiron Research Fellows may have the opportunity to organize workshops and to mentor graduate and undergraduate students. The mission of CCN is to develop theories, models, and computational methods that deepen our knowledge of brain function — both in health and in disease. CCN takes a “systems” neuroscience approach, building models that are motivated by fundamental principles, that are constrained by properties of neural circuits and responses, and that provide insights into perception, cognition and behavior.
This cross-disciplinary approach not only leads to the design of new model-driven scientific experiments, but also encapsulates current functional descriptions of the brain that can spur the development of new engineered computational systems, especially in the realm of machine learning. CCN’s current research groups include computational vision (Eero Simoncelli, PI), neural circuits and algorithms (Dmitri ‘Mitya’ Chklovskii, PI), neuroAI and geometric data analysis (SueYeon Chung, PI), and statistical analysis of neural data (Alex Williams, PI); CCN is planning to expand the number of research groups in the near term. Interested candidates should review the CCN public website for specific information on CCN’s research areas. Applicants who are interested in a joint appointment between two CCN research groups should submit the same application to both groups, noting the dual application in their research statement. Please note that Alex Williams’s statistical analysis of neural data group is not recruiting at CCN in 2023. FRF positions are two-year appointments and are generally renewed for a third year, contingent on performance. FRFs receive a research budget and have access to the Flatiron Institute’s powerful scientific computing resources. FRFs may be eligible for subsidized housing within walking distance of the CCN. Review of applications for positions starting between July and October 2024 will begin in November 2023. For more information about life at the Flatiron Institute, visit https://www.simonsfoundation.org/flatiron/careers.

Position: Computational Neuroscience

Center for Computational Neuroscience

Flatiron Institute, Simons Foundation
New York, New York
Jan 14, 2026

POSITION SUMMARY: Applications are invited for Flatiron Research Fellowships (FRF) at the Center for Computational Neuroscience. The CCN FRF program offers the opportunity for postdoctoral research in areas that have strong synergy with one or more of the existing research groups at CCN or other centers at the Flatiron Institute. CCN FRFs will be assigned a primary mentor from a CCN research group or project, though affiliations and collaborations with other research groups within CCN and throughout the Flatiron Institute are encouraged. In addition to carrying out an independent research program, Flatiron Research Fellows are expected to disseminate their results through scientific presentations, publications, and software releases; collaborate with other members of the CCN or Flatiron Institute; and participate in the scientific life of the CCN and Flatiron Institute by attending seminars, colloquia, and group meetings. Flatiron Research Fellows may have the opportunity to organize workshops and to mentor graduate and undergraduate students. The mission of CCN is to develop theories, models, and computational methods that deepen our knowledge of brain function — both in health and in disease. CCN takes a “systems” neuroscience approach, building models that are motivated by fundamental principles, that are constrained by properties of neural circuits and responses, and that provide insights into perception, cognition and behavior. This cross-disciplinary approach not only leads to the design of new model-driven scientific experiments, but also encapsulates current functional descriptions of the brain that can spur the development of new engineered computational systems, especially in the realm of machine learning.
CCN currently has research groups in computational vision, neural circuits and algorithms, neuroAI and geometry, and statistical analysis of neural data; interested candidates should review the CCN public website for specific information on CCN’s research areas. Review of applications for positions starting between July and October 2022 will begin in mid-January 2022.
Application Materials: cover letter (optional); curriculum vitae with bibliography; a research statement of no more than three pages describing past work and the proposed research program. Applicants are encouraged to discuss the broad impact of their past and proposed research on computational neuroscience. Applicants should also indicate the primary CCN group(s) with which they’d seek to conduct research, and any desired affiliation with other Flatiron centers. Three (3) letters of recommendation should be submitted confidentially by direct email to ccnjobs@simonsfoundation.org.
Selection Criteria: Applicants must have a PhD in a related field or expect to receive their PhD before the start of the appointment. Applications will be evaluated based on 1) past research accomplishments, 2) the proposed research program, and 3) synergy of the applicant’s expertise and research proposal topic with existing CCN staff and research programs.
Education: PhD in computational neuroscience or a relevant technical field such as electrical engineering, machine learning, statistics, physics, or applied math.
Related Skills: flexible multi-disciplinary mindset; strong interest and experience in the scientific study of the brain; demonstrated abilities in analysis, software and algorithm development, modeling and/or scientific simulation; ability to do original and outstanding research in neuroscience; ability to work well independently as well as in a collaborative team environment.
FRF positions are two-year appointments and are generally renewed for a third year, contingent on performance.
FRFs receive a research budget and have access to the Flatiron Institute’s powerful scientific computing resources. FRFs may be eligible for subsidized housing within walking distance of the CCN. THE SIMONS FOUNDATION'S DIVERSITY COMMITMENT: Many of the greatest ideas and discoveries come from a diverse mix of minds, backgrounds and experiences, and we are committed to cultivating an inclusive work environment. The Simons Foundation actively seeks a diverse applicant pool and encourages candidates of all backgrounds to apply. We provide equal opportunities to all employees and applicants for employment without regard to race, religion, color, age, sex, national origin, sexual orientation, gender identity, genetic disposition, neurodiversity, disability, veteran status or any other protected category under federal, state and local law.

Position: Computational Neuroscience

Dr. Jorge Mejias

University of Amsterdam
Amsterdam
Jan 14, 2026

The Cognitive and Systems Neuroscience Group is seeking a highly qualified and motivated candidate for a doctoral position in computational neuroscience, under the recently acquired NWA-ORC Consortium grant. The aim of this Consortium is to understand the fundamental principles used by our brains to integrate information in noisy environments and uncertain conditions, and then implement those principles in next-generation algorithms for safe autonomous mobility. Within the Consortium, the main objective of the present PhD project is to develop a biologically realistic computational model of multi-area brain circuits involved in multisensory perception under uncertainty. The model will be constrained by state-of-the-art neuroanatomical data (such as realistic brain connectivity and multiple cell types), and we will identify and study biological aspects of the model which contribute to an optimal integration of sensory information (following Bayesian and other principles). Model predictions will then be compared to experimental data from collaborators. The project will be supervised by Dr. Jorge Mejias, head of the Computational Neuroscience Lab, and Prof. Dr. Cyriel Pennartz, head of the Cognitive & Systems Neuroscience group. The candidate will also closely collaborate with other computational neuroscientists, experimental neuroscientists, theoreticians and machine learning experts. 
You are expected:
- to perform research on multisensory integration and perception using computational neuroscience methods;
- to review relevant literature and acquire knowledge on neurobiology, perception and computational neuroscience;
- to build biologically realistic multi-area computational models of cortical circuits for multisensory perception, and compare their predictions with experimental findings;
- to collaborate with other groups in the Consortium;
- to take part in the teaching effort of the group, including supervision of bachelor's and master's students;
- to write scientific manuscripts and a PhD thesis.
Our offer: a temporary contract for 38 hours per week for the duration of four years (the initial contract will be for a period of 18 months and, after a satisfactory evaluation, it will be extended to a total duration of four years). This should lead to a dissertation (PhD thesis). We will draft an educational plan that includes attendance of courses and (international) meetings. We also expect you to assist in teaching undergraduate and master's students. Based on a full-time appointment (38 hours per week), the gross monthly salary will range from €2,434 in the first year to €3,111 (scale P) in the last year. This is exclusive of the 8% holiday allowance and 8.3% end-of-year bonus. A favourable tax agreement, the ‘30% ruling’, may apply to non-Dutch applicants. The Collective Labour Agreement of Dutch Universities is applicable.

Position

Max Garagnani

Department of Computing, Goldsmiths, University of London
Goldsmiths, University of London, Lewisham Way, New Cross, London SE14 6NW, UK
Jan 14, 2026

The project involves implementing a brain-realistic neurocomputational model able to exhibit the spontaneous emergence of cognitive function from a uniform neural substrate, as a result of unsupervised, biologically realistic learning. Specifically, it will focus on modelling the emergence of unexpected (i.e., non-stimulus-driven) action decisions using neo-Hebbian reinforcement learning. The final deliverable will be an artificial brain-like cognitive architecture able to learn to act as humans do when driven by intrinsic motivation and spontaneous, exploratory behaviour.

Position

N/A

INCF
Stockholm, SE
Jan 14, 2026

The role includes managing INCF's scientific committees & councils, developing communications materials, maintaining training & education content, maintaining updates on working group activities, managing mentorship programs, and assisting with INCF events. Candidates should be highly organized and service-minded with excellent written and spoken English. We are looking for a self-motivated and independent neuroscientist, computer scientist, or data scientist, preferably with experience in community engagement, open science practices, and scientific communications. The candidate should have strong time management skills and be able to multitask. Interpersonal skills are essential and we emphasize the person’s ability to contribute to a friendly work environment.

Position

N/A

Saarland University, the Max Planck Institute for Informatics, the Max Planck Institute for Software Systems, the CISPA Helmholtz Center for Information Security, and the German Research Center for Artificial Intelligence (DFKI)
Saarbrücken, Germany
Jan 14, 2026

The Research Training Group 2853 “Neuroexplicit Models of Language, Vision, and Action” is looking for 3 PhD students and 1 postdoc. Neuroexplicit models combine neural and human-interpretable (“explicit”) models in order to overcome the limitations that each model class has separately. They include neurosymbolic models, which combine neural and symbolic models, but also e.g. combinations of neural and physics-based models. In the RTG, we will improve the state of the art in natural language processing (“Language”), computer vision (“Vision”), and planning and reinforcement learning (“Action”) through the use of neuroexplicit models and investigate the cross-cutting design principles of effective neuroexplicit models (“Foundations”).

Position: Computational Neuroscience

Friedemann Zenke

Friedrich Miescher Institute
Basel, Switzerland
Jan 14, 2026

The position involves conducting research in computational neuroscience and bio-inspired machine intelligence, writing research articles and presenting them at international conferences, publishing in neuroscience journals and machine learning venues such as ICML, NeurIPS, ICLR, etc., and interacting and collaborating with experimental neuroscience groups or neuromorphic hardware developers nationally and internationally.

Position

Xavier Hinaut

Inria Bordeaux & Institute for Neurodegenerative diseases
Inria Bordeaux & Institute for Neurodegenerative diseases (Pellegrin Hospital Campus, Bordeaux)
Jan 14, 2026

This PhD thesis is part of the BrainGPT 'Inria Exploratory Action' project. The main ambition of the BrainGPT project is to combine the explainability of mechanistic models with the predictive power of Transformers to analyze brain imaging data. The thesis will mainly consist of developing new bio-inspired models inspired by the mechanisms, learning methods, and emerging behaviors of Large Language Models (LLMs) and Transformers. These models will be tested to assess their ability to predict brain activity from imaging data.

Position: Neuroscience

Dr. Demian Battaglia/Dr. Romain Goutagny

University of Strasbourg, Functional System's Dynamics team – FunSy
University of Strasbourg, France
Jan 14, 2026

The postdoc position is under the joint co-mentoring of Dr. Demian Battaglia and Dr. Romain Goutagny at the University of Strasbourg, France, in the Functional System's Dynamics team – FunSy. The position starts as soon as possible and can last up to two years. The job offer is funded by the French ANR 'HippoComp' project, which focuses on the complexity of hippocampal oscillations and the hypothesis that such complexity can serve as a computational resource. The team performs electrophysiological recordings in the hippocampus and cortex during spatial navigation and memory tasks in mice (wild-type and mutants developing various neuropathologies) and has access to vast datasets through local and international cooperation. The team uses a large spectrum of computational tools ranging from time-series and network analyses, information theory, and machine learning to multi-scale computational modeling.

Position: Computational Neuroscience

Sam Neymotin

Nathan Kline Institute (NKI) for Psychiatric Research
N/A
Jan 14, 2026

Postdoctoral scientist positions are available at the Nathan Kline Institute (NKI) for Psychiatric Research to work on computational neuroscience research funded by NIH and DoD grants. Applicants should have a PhD in computational neuroscience (or a related field), strong background in multiscale modeling using NEURON/NetPyNE, Python software development, neural/electrophysiology data analysis, machine learning, and writing/presenting research.

Position

Joseph Lizier

The University of Sydney, Brain and Mind Centre, School of Computer Science, School of Physics, Centre for Complex Systems
The University of Sydney
Jan 14, 2026

The successful candidates will join a dynamic interdisciplinary collaboration between A/Prof Mac Shine (Brain and Mind Centre), A/Prof Joseph Lizier (School of Computer Science) and Dr Ben Fulcher (School of Physics), within the University's Centre for Complex Systems, focused on advancing our understanding of brain function and cognition using cutting-edge computational and neuroimaging techniques at the intersection of network neuroscience, dynamical systems and information theory. The positions are funded by a grant from the Australian Research Council 'Evaluating the Network Neuroscience of Human Cognition to Improve AI'.

PositionNeuroscience

Bruno A. Olshausen

Helen Wills Neuroscience Institute, Department of Statistics at UC Berkeley
Berkeley, CA
Jan 14, 2026

The Helen Wills Neuroscience Institute together with the Department of Statistics at UC Berkeley is conducting a faculty search in the area of computational or theoretical neuroscience. This is an ideal opportunity for computational/theoretical neuroscientists who are engaged in both model and theory development and collaborative work with experimentalists.

Position

Prof Mario Dipoppa

UCLA
Los Angeles, USA
Jan 14, 2026

We are looking for candidates with a keen interest in gaining research experience in Computational Neuroscience, pursuing their own projects, and supporting those of other team members. Candidates should have a bachelor's or master's degree in a quantitative discipline and strong programming skills, ideally in Python. Candidates interested in joining the laboratory as research associates should send a CV, a research statement describing past research and career goals (max. one page), and contact information for two academic referees. The selected candidates will be working on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, which include but are not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.

Position

Jian Liu

University of Leicester
Leicester, UK
Jan 14, 2026

Three PhD studentships funded by BBSRC MIBTP. Please find more information at https://sites.google.com/site/jiankliu/join-us
1. Towards a functional model for associative learning and memory formation (Drs Jian Liu and Rodrigo Quian Quiroga, CSN/NPB, University of Leicester)
2. Neuronal coupling across spatiotemporal scales and dimensions of cortical population activity (Drs Michael Okun and Jian Liu, CSN/NPB, University of Leicester)
3. Decoding movement from single neurons in motor cortex and their subcortical targets (Drs Todor Gerdjikov and Jian Liu, CSN/NPB, University of Leicester)

SeminarNeuroscience

Convergent large-scale network and local vulnerabilities underlie brain atrophy across Parkinson’s disease stages

Andrew Vo
Montreal Neurological Institute, McGill University
Nov 6, 2025
SeminarNeuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 2, 2025
SeminarNeuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
Jul 9, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.

SeminarNeuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
May 14, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.

SeminarNeuroscience

Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia

Alban Elias Voppel
McGill University
May 1, 2025
SeminarNeuroscienceRecording

Brain Emulation Challenge Workshop

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Feb 21, 2025

The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, including their application to the forthcoming Brain Emulation Challenge.

SeminarNeuroscience

Predicting traveling waves: a new mathematical technique to link the structure of a network to the specific patterns of neural activity

Roberto Budzinski
Western University
Feb 6, 2025
SeminarOpen SourceRecording

Towards open meta-research in neuroimaging

Kendra Oudyk
ORIGAMI - Neural data science - https://neurodatascience.github.io/
Dec 9, 2024

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/

SeminarNeuroscience

The Brain Prize winners' webinar

Larry Abbott, Haim Sompolinsky, Terry Sejnowski
Columbia University; Harvard University / Hebrew University; Salk Institute
Nov 30, 2024

This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
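The attractor-network motif discussed here can be illustrated with a minimal Hopfield network, in which Hebbian weights store patterns as fixed points and the recurrent dynamics fall back to a stored pattern from a corrupted input. The sketch below is a generic textbook illustration with synthetic random patterns, not material from the webinar; the sizes and the 20% corruption level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 100, 3

# Store random binary (+1/-1) patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)  # no self-coupling

# Start from a corrupted copy of pattern 0 (20% of units flipped) ...
state = patterns[0].astype(float).copy()
flip = rng.choice(n, size=20, replace=False)
state[flip] *= -1

# ... and let the recurrent dynamics settle back onto the stored attractor.
for _ in range(10):
    state = np.sign(W @ state)

overlap = float(state @ patterns[0]) / n  # 1.0 means perfect recall
print(overlap)
```

Pattern completion of this kind is the discrete-attractor analogue of the working-memory stabilization described in the abstract: the network holds a pattern after the input is removed.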

SeminarNeuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 16, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.

SeminarNeuroscience

Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data

Hamid Karimi-Rouzbahani
The University of Queensland
Oct 11, 2024

There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activities remain hard to discriminate visually across recording channels, which limits the success of visual investigation of intracranial data. In this presentation, I describe methods that quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods for SOZ localisation, providing insights for future research in this area.
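To give a flavour of what "quantifying intracranial time series" can mean in practice, the toy sketch below ranks channels of a synthetic recording by line length, a classic feature that is elevated during seizure-like activity. The data, channel indices, sampling rate, and choice of feature are illustrative assumptions, not the methods presented in the talk.

```python
import numpy as np

def line_length(x):
    """Sum of absolute sample-to-sample differences: a classic feature
    that is elevated during high-amplitude, fast, seizure-like activity."""
    return float(np.abs(np.diff(x)).sum())

rng = np.random.default_rng(1)
fs, dur, n_ch = 256, 4, 8            # Hz, seconds, channels
t = np.arange(fs * dur) / fs

# Synthetic intracranial recording: background noise on every channel,
# plus seizure-like 20 Hz oscillations on channels 2 and 5 (the toy SOZ).
data = rng.standard_normal((n_ch, t.size))
for ch in (2, 5):
    data[ch] += 3.0 * np.sin(2 * np.pi * 20 * t)

scores = np.array([line_length(row) for row in data])
ranked = np.argsort(scores)[::-1]    # channels ranked by feature magnitude
print(ranked[:2])                    # candidate SOZ channels
```

In an explainable-ML pipeline, many such features would feed a classifier whose per-feature contributions can then be inspected; this sketch shows only the feature-ranking step.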

Conference

Bernstein Conference 2024

Goethe University, Frankfurt, Germany
Sep 29, 2024

Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions, and the conference has established itself as one of the most renowned meetings in this field worldwide.

SeminarNeuroscience

Modelling the fruit fly brain and body

Srinivas Turaga
HHMI | Janelia
May 15, 2024

Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.

SeminarNeuroscienceRecording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 5, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.

SeminarNeuroscience

Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions

Marija Markicevic
Yale
Jan 19, 2024

Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.

SeminarNeuroscienceRecording

Tracking subjects' strategies in behavioural choice experiments at trial resolution

Mark Humphries
University of Nottingham
Dec 7, 2023

Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
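A minimal version of such trial-resolution Bayesian inference can be sketched with a Beta-Bernoulli model whose evidence decays over trials. The interface and the decay value below are illustrative assumptions, not the exact algorithm from the talk.

```python
def strategy_probabilities(choices, strategy_predictions, decay=0.9):
    """Trial-by-trial Beta-Bernoulli estimate of P(strategy) per strategy.

    choices: the subject's choice on each trial.
    strategy_predictions: dict mapping a strategy name to the choice that
        strategy would have made on each trial.
    decay: forgetting factor (< 1) so old evidence fades, keeping the
        estimate responsive when the subject switches strategy.
    """
    probs = {name: [] for name in strategy_predictions}
    for name, preds in strategy_predictions.items():
        alpha, beta = 1.0, 1.0  # uniform Beta prior
        for choice, pred in zip(choices, preds):
            alpha = decay * alpha + (choice == pred)  # evidence for
            beta = decay * beta + (choice != pred)    # evidence against
            probs[name].append(alpha / (alpha + beta))  # posterior mean
    return probs

# A subject who always follows strategy "A": P(A) rises, P(B) falls.
choices = [1] * 30
p = strategy_probabilities(choices, {"A": [1] * 30, "B": [0] * 30})
print(round(p["A"][-1], 2), round(p["B"][-1], 2))
```

Tracking when P(strategy = target rule) crosses a criterion gives a trial-resolved estimate of the learning point, while the decay lets the estimate follow switches between exploratory strategies such as win-stay and lose-shift.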

SeminarNeuroscience

Bio-realistic multiscale modeling of cortical circuits

Anton Arkhipov
Allen Institute
Nov 24, 2023

A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of the different neuron types in V1.

SeminarNeuroscienceRecording

Diffuse coupling in the brain - A temperature dial for computation

Eli Müller
The University of Sydney
Oct 6, 2023

The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.

SeminarNeuroscience

Brain Connectivity Workshop

Ed Bullmore, Jianfeng Feng, Viktor Jirsa, Helen Mayberg, Pedro Valdes-Sosa
Sep 20, 2023

Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.

SeminarNeuroscience

Cognitive Computational Neuroscience 2023

Cate Hartley, Helen Barron, James McClelland, Tim Kietzmann, Leslie Kaelbling, Stanislas Dehaene
Aug 24, 2023

CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.

SeminarNeuroscienceRecording

Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing

Pulin Gong
The University of Sydney
Aug 11, 2023

The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.
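The phase singularities at spiral centres that this work tracks can be detected with a standard winding-number test: summing wrapped phase differences around a small closed loop yields ±2π at a singularity and 0 elsewhere. The sketch below applies this test to an idealised synthetic spiral phase map, not to fMRI data.

```python
import numpy as np

# Idealised spiral phase map: phase equals the polar angle around the
# grid centre, as for a wave rotating about a phase singularity there.
n = 64
y, x = np.mgrid[-1:1:complex(0, n), -1:1:complex(0, n)]
phase = np.angle(x + 1j * y)

def winding_number(phase, i, j):
    """Sum wrapped phase differences around the 2x2 plaquette at (i, j):
    +/-1 marks a phase singularity (a spiral centre), 0 otherwise."""
    loop = [phase[i, j], phase[i, j + 1], phase[i + 1, j + 1], phase[i + 1, j]]
    total = 0.0
    for a, b in zip(loop, loop[1:] + loop[:1]):
        d = b - a
        total += (d + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return int(round(total / (2 * np.pi)))

c = n // 2
print(winding_number(phase, c - 1, c - 1))  # at the spiral centre
print(winding_number(phase, 5, 5))          # far from the centre
```

Scanning this test over every plaquette of a cortical phase map locates all singularities at once, and the sign of each winding number gives the rotational direction of the corresponding spiral.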

SeminarNeuroscience

Bernstein Student Workshop Series

Cátia Fortunato
Imperial College London
Jun 15, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

SeminarNeuroscience

Bernstein Student Workshop Series

Lílian de Sardenberg Schmid
Max Planck Institute for Biological Cybernetics
May 4, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

SeminarNeuroscience

Bernstein Student Workshop Series

James Malkin
Apr 13, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

Conference

COSYNE 2023

Montreal, Canada
Mar 9, 2023

The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal, followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.

SeminarNeuroscience

Mapping learning and decision-making algorithms onto brain circuitry

Ilana Witten
Princeton
Nov 18, 2022

In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.

SeminarNeuroscienceRecording

Building System Models of Brain-Like Visual Intelligence with Brain-Score

Martin Schrimpf
MIT
Oct 5, 2022

Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We show-case this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing to predict primate temporal processing and become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
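The neural-predictivity scoring that such integrative benchmarking relies on can be sketched as a regularised linear readout from model features to recorded responses, scored by held-out explained variance. The snippet below uses synthetic data and plain ridge regression; it illustrates the idea rather than Brain-Score's own cross-validated pipeline, and all sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: model-layer features and neural responses ("spike
# rates") to 200 stimuli, with the responses linear in the features.
n_stim, n_feat, n_neurons = 200, 50, 10
features = rng.standard_normal((n_stim, n_feat))
true_w = rng.standard_normal((n_feat, n_neurons))
spikes = features @ true_w + 0.5 * rng.standard_normal((n_stim, n_neurons))

# Fit a ridge regression from features to spike rates on a training split.
train, test = slice(0, 150), slice(150, 200)
lam = 1.0
X, Y = features[train], spikes[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Score on held-out stimuli: explained variance (R^2) per neuron.
pred = features[test] @ W
ss_res = ((spikes[test] - pred) ** 2).sum(axis=0)
ss_tot = ((spikes[test] - spikes[test].mean(axis=0)) ** 2).sum(axis=0)
r2 = 1.0 - ss_res / ss_tot
print(np.median(r2))
```

A benchmark suite aggregates such per-neuron scores across recordings and laboratories, which is what lets a single model candidate be evaluated against many datasets at once.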

Conference

Neuromatch 5

Virtual (online)
Sep 27, 2022

Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.

SeminarNeuroscienceRecording

ISAM-NIG Webinars

Hamed Ekhtiari, Colleen A Hanlon, Michael Fox, Victor M. Tang, Tonisha E Kearney-Ramos, Vaughn R Steele, Ghazaleh Soleimani, Deborah C.W. Klooster, Cristian Morales Carrasco, Lysianne Beynel, Jonathan Young, Kevin Walton
ISAM Neuroscience Interest Group, in collaboration with INTAM
Jul 27, 2022

Optimized Non-Invasive Brain Stimulation for Addiction Treatment

SeminarNeuroscience

Invariant neural subspaces maintained by feedback modulation

Laura Naumann
Bernstein Center for Computational Neuroscience, Berlin
Jul 14, 2022

This session is a double feature of the Cologne Theoretical Neuroscience Forum and the Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience of the Jülich Research Center.

SeminarNeuroscienceRecording

The Learning Salon

Anna Schapiro
UPenn
Jun 24, 2022

In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, including individuals with diverse perspectives and backgrounds, so we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Boris Gutkin
Jun 10, 2022

In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, including individuals with diverse perspectives and backgrounds, so we can all learn from one another.

SeminarNeuroscience

The evolution of computation in the brain: Insights from studying the retina

Tom Baden
University of Sussex (UK)
Jun 2, 2022

The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of lights as the natural input, to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make up in extant species. Animals as diverse as eagles and lampreys share the same retinal make up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retina neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?

SeminarNeuroscienceRecording

The Learning Salon

David Badre
Brown
May 27, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Chris Summerfield
Oxford
May 13, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Gul Deniz Salali
UCL
Apr 29, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Sara Mednick
UC Irvine
Apr 15, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Nathaniel Daw
Princeton University
Mar 18, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Jessica Flack
Santa Fe Institute
Mar 11, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Evelina Fedorenko
MIT
Feb 25, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarCognitionRecording

Modeling Visual Attention in Neuroscience, Psychology, and Machine Learning

Grace Lindsay
University College London
Feb 15, 2022
SeminarNeuroscienceRecording

The Learning Salon

Fiery Cushman
Harvard University
Feb 11, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

György Buzsáki
NYU
Jan 28, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Marina Bedny
Johns Hopkins University
Jan 14, 2022

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Gina Poe
UCLA
Dec 17, 2021

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscienceRecording

The Learning Salon

Steven Piantadosi
University of California, Berkeley
Dec 10, 2021

In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so that we can all learn from one another.

SeminarNeuroscience

Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli

Helene Schreyer
Gollisch lab, University Medical Center Göttingen, Germany
Dec 9, 2021

Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for studying the assumption of linear spatial integration, yet because they are buried in the middle of the retina, their neural activity is challenging to measure. Here, I present bipolar cell data where I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells. Overall, I demonstrate that simple linear assumptions do not always apply and that more complex models are needed to describe what the retina “tells” the brain.
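The linear-versus-nonlinear distinction at the heart of this abstract can be illustrated with the classic contrast-reversing-grating test: a cell that sums light linearly over its receptive field gives no response when the bright and dark bars cancel, whereas a cell pooling rectified subunits responds at twice the reversal frequency. A minimal sketch (all parameters illustrative, not from the talk):

```python
import numpy as np

def grating_responses(n_subunits=8, n_t=200):
    """Responses of a linear vs. a rectified-subunit model to a
    contrast-reversing grating spanning the receptive field."""
    x = np.arange(n_subunits)
    profile = np.sin(2 * np.pi * x / n_subunits)         # one full spatial cycle
    t = np.linspace(0.0, 1.0, n_t)
    stim = np.outer(np.sin(2 * np.pi * 2 * t), profile)  # contrast reversal at 2 Hz
    linear = stim.sum(axis=1)                      # linear spatial summation
    nonlinear = np.maximum(stim, 0.0).sum(axis=1)  # sum of rectified subunits
    return linear, nonlinear
```

The linear response is identically zero (bars cancel within the receptive field), while the rectified-subunit response follows |sin|, i.e. it is frequency-doubled — the experimental signature of nonlinear spatial integration.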

SeminarNeuroscience

A nonlinear shot noise model for calcium-based synaptic plasticity

Bin Wang
Aljadeff lab, University of California San Diego, USA
Dec 9, 2021

Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP, measured in vitro, apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful for studying other learning processes beyond the calcium-based plasticity rule.
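The coincidence-driven mechanism described above can be caricatured in a few lines: isolated pre- or postsynaptic spikes produce small calcium transients, coincident spikes a large supralinear one, and the weight changes only when calcium crosses depression/potentiation thresholds. All constants below are illustrative assumptions, not the fitted parameters from the talk:

```python
import numpy as np

def calcium_plasticity(pre, post, dt=1.0, c_spike=0.2, c_coinc=1.5,
                       tau=20.0, theta_d=0.8, theta_p=1.2,
                       eta_p=0.005, eta_d=0.0025):
    """Evolve one synaptic weight from binary pre/post spike trains
    (1 ms bins) under a toy calcium-threshold rule."""
    ca, w = 0.0, 0.5
    for s_pre, s_post in zip(pre, post):
        # supralinear calcium jump when pre and post spike together
        ca += c_spike * (s_pre + s_post) + c_coinc * (s_pre and s_post)
        if ca > theta_p:
            w = min(w + eta_p * dt, 1.0)   # potentiation
        elif ca > theta_d:
            w = max(w - eta_d * dt, 0.0)   # depression
        ca *= np.exp(-dt / tau)            # calcium decay
    return w

# 20 Hz trains: coincident vs. shifted by 25 ms, at identical rates
pre = [t % 50 == 0 for t in range(500)]
post_offset = [t % 50 == 25 for t in range(500)]
```

Coincident trains drive net potentiation, while the offset trains at the same rates leave the weight untouched — illustrating how temporal correlations, at fixed firing rates, can dominate the plasticity outcome.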

SeminarNeuroscienceRecording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 2, 2021

Recently, the field of computational neuroscience has seen an explosion in the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events – spikes – that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain in accounting for complex temporal dependencies in the biological single-neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties – as such models have been shown to accurately capture the response of biological neurons – while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics driven by the mean signal received by the neural population and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
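The log-normal claim has a compact intuition: near threshold, the effective transfer function is expansive, so it roughly exponentiates Gaussian input differences across neurons. A generic sketch of that argument (the exponential transfer is an illustrative stand-in, not the derived population equations from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian variability of the input across a population of neurons...
inputs = rng.normal(loc=0.0, scale=0.5, size=100_000)

# ...passed through an expansive (here: exponential) transfer function
# yields a log-normal distribution of instantaneous rates.
rates = np.exp(inputs)
```

The resulting rate distribution is heavily right-skewed (mean above median), while the log-rates remain Gaussian — the log-normal shape reported in vivo.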

SeminarNeuroscienceRecording

NMC4 Panel: The Contribution of Models vs Data

Grace Lindsay
Columbia University
Dec 2, 2021
SeminarNeuroscience

Spontaneous activity competes with externally evoked responses in sensory cortex

Golan Karvat
Diester lab, University of Freiburg, Germany
Nov 25, 2021

The interaction between spontaneously and externally evoked neuronal activity is fundamental for a functional brain. Increasing evidence suggests that bursts of high-power oscillations in the 15-30 Hz beta-band represent activation of resting-state networks and can mask perception of external cues. Yet a real-time demonstration of the effect of beta-power modulation on perception has been missing, and little is known about the underlying mechanism. In this talk I will present the methods we developed to fill this gap, together with our recent results. We used a closed-loop stimulus-intensity adjustment system based on online burst-occupancy analyses in rats involved in a forepaw vibrotactile detection task. We found that the masking influence of burst occupancy on perception can be counterbalanced in real time by adjusting the vibration amplitude. Offline analysis of firing rates and local field potentials across cortical layers and frequency bands confirmed that beta-power in the somatosensory cortex anticorrelated with sensory evoked responses. Mechanistically, bursts in all bands were accompanied by transient synchronization of cell assemblies, but only beta-bursts were followed by a reduction of firing rate. Our closed-loop approach reveals that spontaneous beta-bursts reflect a dynamic state that competes with external stimuli.

SeminarNeuroscience

Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation

Han Lu
Vlachos lab, University of Freiburg, Germany
Nov 25, 2021

Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in rabbit dentate gyrus in the 1960s, Hebb’s rule—neurons that fire together wire together—gained popularity to explain learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are mostly interpreted to stabilize network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb’s rule, although entirely based on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that the new theoretical framework combining neural activity homeostasis and structural plasticity provides a consistent explanation of our experimental observations.

SeminarNeuroscience

“Mind reading” with brain scanners: Facts versus science fiction

John-Dylan Haynes
Charité - Universitätsmedizin, Berlin; Center for Advanced Neuroimaging; Bernstein Center for Computational Neuroscience
Nov 22, 2021

Every thought is associated with a unique pattern of brain activity. Thus, in principle, it should be possible to use these activity patterns as "brain fingerprints" for different thoughts and to read out what a person is thinking based on their brain activity alone. Indeed, using machine learning, considerable progress has been made in such "brain reading" in recent years. It is now possible to decode which image a person is viewing, which film sequence they are watching, which emotional state they are in or which intentions they hold in mind. This talk will provide an overview of the current state of the art in brain reading. It will also highlight the main challenges and limitations of this research field. For example, mathematical models are needed to cope with the high dimensionality of potential mental states. Furthermore, the ethical concerns raised by (often premature) commercial applications of brain reading will also be discussed.
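The "brain fingerprint" idea can be made concrete with a toy decoder: each stimulus class evokes a characteristic multivoxel pattern plus noise, and a nearest-centroid classifier trained on labeled trials identifies the stimulus from a new pattern. Entirely synthetic data, purely illustrative of the principle:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_classes = 50, 3
# one characteristic activity pattern ("fingerprint") per stimulus
fingerprints = rng.standard_normal((n_classes, n_voxels))

def simulate_trials(label, n, noise=0.8):
    """Noisy activity patterns evoked by a given stimulus."""
    return fingerprints[label] + noise * rng.standard_normal((n, n_voxels))

# "training": average labeled trials into class centroids
centroids = np.stack([simulate_trials(k, 40).mean(axis=0)
                      for k in range(n_classes)])

def decode(pattern):
    """Nearest-centroid readout of which stimulus evoked the pattern."""
    return int(np.argmin(np.linalg.norm(centroids - pattern, axis=1)))
```

With 50 voxels the class patterns are far apart relative to trial noise, so held-out trials decode almost perfectly; real neuroimaging decoding faces far lower signal-to-noise and the vastly higher-dimensional space of mental states the talk highlights.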

SeminarNeuroscience

When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?

Eric Shea-Brown
Department of Applied Mathematics, University of Washington
Nov 18, 2021

There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom — and this may reflect powerful capacities for general-purpose computation or information storage. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension — and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity that they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
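A standard way to quantify the dimension referred to here is the participation ratio of the activity covariance spectrum, PR = (Σλᵢ)² / Σλᵢ². A generic illustration (not the speaker's analysis): independent neurons give a PR near the number of neurons, while neurons driven by a few shared latent signals give a PR near the number of latents.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """Effective dimension of activity X (neurons x samples)."""
    lam = np.linalg.eigvalsh(np.cov(X))       # covariance eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()  # (sum λ)^2 / sum λ^2

n_neurons, n_samples = 100, 5000
# high-dimensional: every neuron fluctuates independently
independent = rng.standard_normal((n_neurons, n_samples))
# low-dimensional: all neurons driven by 3 shared latent signals
latents = rng.standard_normal((3, n_samples))
low_dim = rng.standard_normal((n_neurons, 3)) @ latents
```

The same population size can thus produce wildly different effective dimensions depending on how activity is coupled across neurons — the range of outcomes the talk sets out to explain.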

SeminarNeuroscience

Neural mechanisms of altered states of consciousness under psychedelics

Adeel Razi and Devon Stoliker
Monash Biomedical Imaging
Nov 11, 2021

Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status to treat various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses of the brain’s functional reorganisation under psychedelics, informed by the accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique which allows one to infer the directed connectivity among brain regions using functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings of how changes to synaptic mechanisms, under the control of serotonin receptors, across the brain hierarchy influence sensory and associative brain connectivity. Understanding these neural mechanisms of subjective and therapeutic effects of psychedelics is critical for rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is a NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute of Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.

SeminarNeuroscienceRecording

Computational Models of Compulsivity

Frederike Petzschner
Brown University
Nov 11, 2021
SeminarNeuroscienceRecording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 5, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision arithmetic), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback pass do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems like low response latency and continual learning.

References:
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
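The event-based unit underlying most SNNs is the leaky integrate-and-fire neuron: the membrane potential integrates input, leaks toward rest, and communication happens only at discrete threshold crossings — which is what makes sparse, asynchronous processing energy-efficient. A minimal sketch with generic textbook parameters (not from the talk):

```python
import numpy as np

def lif(current, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Spike times (ms) of a leaky integrate-and-fire neuron driven by
    an input current sampled every dt ms."""
    v, spikes = v_rest, []
    for step, i_t in enumerate(current):
        v += dt / tau * (v_rest - v + i_t)  # leaky integration (Euler step)
        if v >= v_th:                       # threshold crossing = event
            spikes.append(step * dt)
            v = v_reset
    return spikes

# constant suprathreshold drive produces regular spiking
spike_times = lif([1.5] * 1000)   # 100 ms of constant input (a.u.)
```

Only the spike times need to be communicated downstream; between events the neuron is silent, in contrast to a rate unit that emits a continuous value at every step.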

SeminarNeuroscience

Representation transfer and signal denoising through topographic modularity

Barna Zajzon
Morrison lab, Forschungszentrum Jülich, Germany
Nov 4, 2021

To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. 
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
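The core routing idea can be sketched in a rate-based caricature (not the authors' spiking model): activity in a set of feature maps passes through stages whose projections favor the matching map with precision p, followed by global inhibition (mean subtraction) and rectification. With sharp topography the stimulated map survives the hierarchy; with uniform projections its identity is washed out. All parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(p, n_maps=10, n_stages=3, noise=0.1):
    """Final-stage activity when map 0 is stimulated at the input."""
    r = np.ones(n_maps)
    r[0] += 1.0                                # stimulated map
    W = (1 - p) / n_maps + p * np.eye(n_maps)  # topographic precision p
    for _ in range(n_stages):
        r = W @ r + noise * rng.standard_normal(n_maps)
        r = np.maximum(r - r.mean(), 0.0)      # inhibition + rectification
    return r

def hit_rate(p, trials=200):
    """How often the stimulated map still wins at the last stage."""
    return np.mean([np.argmax(propagate(p)) == 0 for _ in range(trials)])
```

With p = 0.8 the stimulus identity survives essentially every trial, while with p = 0 (uniform projections) the winner is random — a toy version of the transition in feed-forward sharpness described in the abstract.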

SeminarNeuroscience

An optimal population code for global motion estimation in local direction-selective cells

Miriam Henning
Silies lab, University of Mainz, Germany
Nov 4, 2021

Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal’s own behavior. One prominent example is the encoding of optic flow fields that are generated during self-motion of the animal and will therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code that is matched to represent optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cell types encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). However, in flies we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly. The four uniformly tuned T4/T5 subtypes described previously represent a local snapshot (Maisak et al. 2013). The encoding of six types of optic flow in the fly, as compared to four types of optic flow in mice, might be matched to the high degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matching the individual ethological constraints of the animal.

ePoster

Advanced metamodelling on the o2S2PARC computational neurosciences platform facilitates stimulation selectivity and power efficiency optimization and intelligent control

Werner Van Geit, Cédric Bujard, Mads Rystok Bisgaard, Pedro Crespo-Valero, Esra Neufeld, Niels Kuster

FENS Forum 2024

ePoster

Computational Neuroscience in the Arabic region

Alaa Salah

Neuromatch 5