Computational neuroscience
SISSA Neuroscience department
The Neuroscience Department of the International School for Advanced Studies (SISSA; https://www.sissa.it/research/neuroscience) invites expressions of interest from scientists across the fields of Neuroscience for multiple tenure-track positions with an anticipated start in 2025. Ongoing neuroscience research at SISSA includes cognitive neuroscience, computational and theoretical neuroscience, systems neuroscience, molecular and cellular research, as well as genomics and genetics. The Department intends to expand its activities in these fields and to strengthen cross-field interactions; expressions of interest from scientists in any of these fields are welcome. The working and teaching language of SISSA is English. This is an equal opportunity career initiative and we encourage applications from qualified women, racial and ethnic minorities, and persons with disabilities. Candidates should have a PhD in a relevant field and a proven record of research achievements. A clear potential to promote and lead research activities, and a specific interest in training and supervising PhD students, is essential. Interested colleagues should present an original and innovative plan for their independent future research. We encourage both proposals within existing fields at SISSA and novel ideas outside of those or spanning various topics and methodologies of Neuroscience. SISSA is an international school promoting basic and applied research in Neuroscience, Mathematics and Physics and dedicated to the training of PhD students. Lab space and other resources will be commensurate with the appointment. Shared facilities include cell culture rooms, viral vector facilities, confocal microscopes, animal facilities, molecular and biochemical facilities, human cognition labs with EEG, TMS, and eye tracking systems, a mechatronics workshop, and computing facilities. Agreements with national and international MRI scanning facilities are also in place. SISSA encourages fruitful exchanges between neuroscientists and other researchers, including data scientists, physicists and mathematicians. Interested colleagues are invited to send a single PDF file including a full CV, a brief description of past and future research interests (up to 1,000 words), and the names of three referees to neuro.search@sissa.it. Selected candidates will be invited for an online or in-person seminar and 1-on-1 meetings in summer/autumn 2024. Deadline: a first evaluation round will consider all applications submitted before 15 May 2024. Later applications may be considered if no suitable candidates have been identified by then.
Eugenio Piasini
Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. This year, one of the PhD scholarships is set aside for joint PhD projects across PhD programs within the Neuroscience department (https://www.sissa.it/research/neuroscience). The selection procedure is now open. The application deadline is 28 March 2024. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
Dr. Jorge Mejias
The Cognitive and Systems Neuroscience Group at the University of Amsterdam is seeking a highly qualified and motivated candidate for a PhD position in Computational Neuroscience. The position falls under the Horizon Health Europe Consortium grant “Virtual Brain Twins for Personalized Treatment of Psychiatric Disorders”. This Consortium is a large collaboration between different European institutions, aiming to develop personalized brain simulation software (“virtual brain twins”) to improve the diagnosis and treatment of schizophrenia. The main objective of this PhD project is to develop a biologically realistic computational model of the human brain and use it to study alterations in brain activity associated with schizophrenia. This model will use local neural mass models (developed by our Consortium partners) to simulate multiple brain areas, and will couple them using structural connectivity data from human subjects. The model will then be used to explore the effects of schizophrenia-related alterations on brain dynamics and function, and to derive patient-specific virtual brain simulations to improve diagnosis and explore treatments in collaboration with clinical Consortium partners. The project will be supervised by Dr. Jorge Mejias, principal investigator in computational neuroscience and leader of the Dutch component of the Consortium, and by Prof. Dr. Cyriel Pennartz, head of the Cognitive and Systems Neuroscience Group. You will closely collaborate with other Consortium members, particularly with the team of Prof. Viktor Jirsa at Aix-Marseille University, and will also benefit from interactions with local colleagues, including other theoretical, computational and experimental neuroscientists at the Cognitive and Systems Neuroscience Group. For more information and to apply, visit: https://vacatures.uva.nl/UvA/job/PhD-position-in-Computational-Neuroscience/786924102/
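To give a concrete flavour of the modelling approach the ad describes, here is a minimal sketch of coupling local rate nodes through a structural connectivity matrix. It is an illustration only, not the Consortium's model: the sigmoid node dynamics, the coupling gain g, the noise level, and the random toy connectivity W are all assumptions standing in for the Consortium's neural mass models and subject-derived tractography.

```python
# Minimal sketch (not the Consortium's model): generic rate nodes coupled
# through a structural connectivity matrix W.
import numpy as np

def simulate_network(W, T=2000, dt=1e-3, tau=0.02, g=0.5, I_ext=0.3, seed=0):
    """Simulate N coupled nodes: tau * dr/dt = -r + S(g*W@r + I_ext) + noise."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    r = np.zeros(N)
    trace = np.empty((T, N))
    S = lambda x: 1.0 / (1.0 + np.exp(-x))            # sigmoid rate function
    for t in range(T):
        drive = g * (W @ r) + I_ext                   # input via connectivity
        r = r + (dt / tau) * (-r + S(drive)) \
            + np.sqrt(dt) * 0.01 * rng.standard_normal(N)
        trace[t] = r
    return trace

# Toy connectivity standing in for subject-derived tractography data
rng = np.random.default_rng(1)
W = np.abs(rng.standard_normal((10, 10)))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)                     # row-normalise in-strengths
activity = simulate_network(W)
print(activity.shape)                                 # (2000, 10) regional time series
```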
Prof. KongFatt Wong-Lin
Postdoctoral Research Associate Position in Computational Neuroscience (Computational Modelling of Decision Making) Applications are invited for an externally funded Postdoctoral Research Associate position at the Intelligent Systems Research Centre (ISRC) at Ulster University, UK. The successful candidate will develop and apply computational modelling and theoretical and analytical techniques to understand brain and behavioural data across primate species, and will apply biologically based neural network modelling to elucidate mechanisms underlying perceptual decision-making. The duration of the position is 24 months, from January 2024 until the end of 2025. The appointee will be based at the ISRC at Ulster University, working with Prof. KongFatt Wong-Lin and his team, while collaborating closely with international collaborators in the USA and the Republic of Ireland, namely Prof. Michael Shadlen at Columbia University (USA), Prof. Stephan Bickel at Northwell-Hofstra School of Medicine (USA), Prof. Redmond O'Connell at Trinity College Dublin (Ireland), Prof. Simon Kelly at University College Dublin (Ireland), and Prof. S. Shushruth at University of Pittsburgh (USA). The ISRC is dedicated to developing a bio-inspired computational basis for AI to power future cognitive technologies. This is achieved by understanding how the brain works at multiple levels, from cells to cognition, and applying that understanding to create models and technologies that solve complex issues facing people and society. All applicants should hold a degree in Computational Neuroscience, Computational Biology, Neuroscience, Computing, Engineering, Mathematics, Data Science, Physical Sciences, Biology, or a cognate area. Apply online: https://my.corehr.com/pls/coreportal_ulsp/erq_jobspec_version_4.display_form?p_company=1&p_internal_external=E&p_display_in_irish=N&p_applicant_no=&p_recruitment_id=023762&p_process_type=&p_form_profile_detail=&p_display_apply_ind=Y&p_refresh_search=Y Closing date for receipt of completed applications: 8th November 2023. Job Ref: 023762. For any informal enquiries regarding this position, please contact KongFatt Wong-Lin; email: k.wong-lin@ulster.ac.uk ; website: https://www.ulster.ac.uk/staff/k-wong-lin
SueYeon Chung, Center for Computational Neuroscience, Flatiron Institute
Flatiron Research Fellow (Postdoctoral Fellow), NeuroAI and Geometric Data Analysis. Applications are invited for Flatiron Research Fellowships (FRF) in the NeuroAI and Geometric Data Analysis Group (SueYeon Chung, PI) at the Center for Computational Neuroscience (CCN) at the Flatiron Institute of the Simons Foundation. The group focuses on understanding computation in the brain and artificial neural networks by: (1) analyzing the geometries underlying neural or feature representations, embedding and transferring information, and (2) developing neural network models and learning rules guided by neuroscience. To do this, the group utilizes analytical methods from statistical physics, machine learning theory, and high-dimensional statistics and geometry. The CCN FRF program offers the opportunity for postdoctoral research in areas that have strong synergy with one or more of the existing research groups at CCN or other centers at the Flatiron Institute. In addition to carrying out an independent research program, Flatiron Research Fellows are expected to disseminate their results through scientific presentations, publications, and software releases; collaborate with other members of the CCN or Flatiron Institute; and participate in the scientific life of the CCN and Flatiron Institute by attending seminars, colloquia, and group meetings. Flatiron Research Fellows may have the opportunity to organize workshops and to mentor graduate and undergraduate students. The mission of CCN is to develop theories, models, and computational methods that deepen our knowledge of brain function — both in health and in disease. CCN takes a “systems” neuroscience approach, building models that are motivated by fundamental principles, that are constrained by properties of neural circuits and responses, and that provide insights into perception, cognition and behavior. This cross-disciplinary approach not only leads to the design of new model-driven scientific experiments, but also encapsulates current functional descriptions of the brain that can spur the development of new engineered computational systems, especially in the realm of machine learning. CCN’s current research groups include computational vision (Eero Simoncelli, PI), neural circuits and algorithms (Dmitri ‘Mitya’ Chklovskii, PI), NeuroAI and geometric data analysis (SueYeon Chung, PI), and statistical analysis of neural data (Alex Williams, PI), and CCN plans to expand the number of research groups in the near term. Interested candidates should review the CCN public website for specific information on CCN’s research areas. Applicants who are interested in a joint appointment between two CCN research groups should submit the same application to both groups, noting the dual application in their research statement. Please note that Alex Williams’s statistical analysis of neural data group is not recruiting at CCN in 2023. FRF positions are two-year appointments and are generally renewed for a third year, contingent on performance. Fellows receive a research budget, have access to the Flatiron Institute’s powerful scientific computing resources, and may be eligible for subsidized housing within walking distance of the CCN. Review of applications for positions starting between July and October 2024 will begin in November 2023. For more information about life at the Flatiron Institute, visit https://www.simonsfoundation.org/flatiron/careers.
University of Bristol
The role: The School of Engineering Mathematics and Technology at the University of Bristol is seeking to appoint a Senior Lecturer / Associate Professor whose research encompasses neural computation, machine learning and AI. If you are earlier in your career, the post is also available at Lecturer level. The University of Bristol is an exciting centre for research into the nature of computation and inference in humans, animals and machines. Our computational neuroscience group has made important contributions in, for example, Bayesian approaches to data and inference, biomimetic deep learning, anatomically constrained neural networks and the theory of neural networks. The University has a long tradition of cross-disciplinary research; Computational Neuroscience is part of both the Bristol Neuroscience Network and the Intelligent Systems Group, and we are recognised for our central role in the local neuroscience and machine learning/AI communities. You would be joining the University at an exciting time as we embark on a £500M investment in our new campus and create a home for the UK’s AI Research Resource with the UK’s most powerful supercomputer. We are committed to an inclusive and diverse environment where everyone can thrive. We welcome applicants from all backgrounds, especially those from under-represented communities. We offer flexible working arrangements to help balance professional and personal commitments. What will you be doing? You will conduct research at the interface between computational neuroscience and machine learning and contribute to the associated teaching on our degree programmes and to academic administration. You will take part in our lively research community and join our internationally renowned researchers in producing high-quality research with the potential to secure research funding.
University of Chicago - Grossman Center for Quantitative Biology and Human Behavior
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience.
Neuro-Electronics Research Flanders (NERF)
Want to do a PhD in neurosciences? Join us - we're recruiting! Neuro-Electronics Research Flanders (NERF) is an interdisciplinary research center located in Leuven, Belgium. We study neuronal circuits and develop new technologies to link circuit activity to brain function. We offer students an opportunity to carry out cutting-edge systems neuroscience research in an international and collaborative environment, with multi-disciplinary training and access to advanced neurotechnologies. More details about the positions and NERF can be found at https://nerfphdcall.sites.vib.be/en
Prof Mario Dipoppa
We are looking for candidates who are eager to solve fundamental questions with a creative mindset. Candidates should have a strong publication track record in Computational Neuroscience or a related quantitative field, including but not limited to Computer Science, Machine Learning, Engineering, Bioinformatics, Physics, Mathematics, and Statistics. Candidates holding a Ph.D. degree interested in joining the laboratory as postdoctoral researchers should submit a CV including a publication list, a copy of a first-authored publication, a research statement describing past research and career goals (max. two pages), and contact information for two academic referees. The selected candidates will work on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by the computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, including but not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early, as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.
Prof Mario Dipoppa
We are looking for candidates with a keen interest in gaining research experience in Computational Neuroscience, pursuing their own projects, and supporting those of other team members. Candidates should have a bachelor's or master's degree in a quantitative discipline and strong programming skills, ideally in Python. Candidates interested in joining the laboratory as research associates should send a CV, a research statement describing past research and career goals (max. one page), and contact information for two academic referees. The selected candidates will be working on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, which include but are not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.
Prof. Edmund Wascher
The core aspect of the position is the analysis of complex neurocognitive data (mainly EEG). The data come from different settings and will also be analyzed using machine learning/AI. Scientific collaboration in a broad-based longitudinal study investigating the prerequisites for healthy lifelong working is expected. The study is in its second wave of data collection and represents a globally unique dataset. In addition to detailed standard psychological and physiological data, neurocognitive EEG data from more than 10 experimental settings and the entire genome of the subjects are available. For the analysis, machine learning will be used in addition to state-of-the-art EEG analysis methods. The position will be embedded in a group working on computational neuroscience. Please find the full ad here: https://www.ifado.de/ifadoen/careers/current-job-offers/?noredirect=en_US#job1
Cognitive Neuroscience PhD program @ SISSA
Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2023. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience Department (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. The Department is highly interdisciplinary; our approaches include behavioural, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The first application deadline is 31 March 2023. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
University of California Irvine
The Department of Cognitive Sciences at the University of California, Irvine (UCI) invites applications for an assistant professor (tenure-track) position with an anticipated start date of July 1, 2023. We are seeking scientists who study human vision, with a particular interest in those who combine an empirical research program with innovative approaches in neuroscience and/or cutting-edge computational tools such as machine learning. The successful candidate will establish a vital research program and contribute to teaching, mentoring, and inclusive excellence. They will interact with a dynamic and growing community in cognitive, computational, and neural sciences within the department (http://www.cogsci.uci.edu/) and the broader campus. Applicants should submit a cover letter, curriculum vitae, research and teaching statements, a statement describing past or potential contributions to diversity, equity, and inclusion, three recent or relevant publications, and the names and contact information of three references. The application requirements along with the online application can be found at: https://recruit.ap.uci.edu/JPF07912. To ensure full consideration, applications must be completed by December 15, 2022.
Rava Azeredo da Silveira, Botond Roska, Guilherme Testa-Silva (jointly)
The position is focused on understanding neuronal coding and representations using large-scale voltage imaging datasets.
Rava Azeredo da Silveira
Research questions will be chosen from a range of topics in theoretical/computational neuroscience and cognitive science, involving either data analysis or theory, and drawing on recent machine learning approaches.
Tansu Celikel
The School of Psychology (psychology.gatech.edu/) at the GEORGIA INSTITUTE OF TECHNOLOGY (www.gatech.edu/) invites nominations and applications for 5 open-rank tenure-track faculty positions with an anticipated start date of August 2023 or later. The successful applicant will be expected to demonstrate and develop an exceptional research program. The research area is open, but we are particularly interested in candidates whose scholarship complements existing School strengths in Adult Development and Aging, Cognition and Brain Science, Engineering Psychology, Work and Organizational Psychology, and Quantitative Psychology, and takes advantage of quantitative, mathematical, and/or computational methods. The School of Psychology is well-positioned in the College of Sciences at Georgia Tech, a University that promotes translational research from the laboratory and field to real-world applications in a variety of areas. The School offers multidisciplinary educational programs, graduate training, and research opportunities in the study of mind, brain, and behavior and the associated development of technologies that can improve human experience. Excellent research facilities support the School’s research and interdisciplinary graduate programs across the Institute. Georgia Tech’s commitment to interdisciplinary collaboration has fostered fruitful interactions between psychology faculty and faculty in the sciences, computing, business, engineering, design, and liberal arts. Located in the heart of Atlanta, one of the nation's most entrepreneurial, creative and diverse cities with excellent quality of life, the School actively develops and maintains a rich network of academic and applied behavioral science/industrial partnerships in and beyond Atlanta. Candidates whose research programs foster collaborative interactions with other members of the School and further contribute to bridge-building with other academic and research units at Tech and with industry are particularly encouraged to apply. Applications can be submitted online (bit.ly/Join-us-at-GT-Psych) and should include a Cover Letter, Curriculum Vitae (including a list of publications), Research Statement, Teaching Statement, DEI (diversity, equity, and inclusion) statement, and contact information for at least three individuals who have agreed to provide a reference in support of the application if asked. Evaluation of applications will begin October 10th, 2022 and continue until all positions are filled. Questions about this search can be addressed to faculty_search@psych.gatech.edu. Portal questions will be answered by Tikica Platt, the School’s HR director, and questions about positions by the co-chairs of the search committee, Ruth Kanfer and Tansu Celikel.
School of Engineering and Informatics, University of Sussex
Four permanent positions at the level of lecturer (assistant professor) are available at the University of Sussex due to our rapid expansion in Computer Science and AI. The lectureships are open to any topic in Computer Science/Informatics, including bio-inspired AI and computational neuroscience.
Wilkes Honors College of Florida Atlantic University
Tenure-track hire in Computational Neuroscience
Prof. John Murray
The Swartz Program for Theoretical Neuroscience at Yale University invites applications for up to two postdoctoral positions in Theoretical and Computational Neuroscience, with a flexible start date in 2022. Competitive candidates include those with a strong quantitative background who wish to gain neuroscience research experience. We especially encourage candidates with an interest in collaborating directly with experimental neuroscientists. The candidates will be expected to perform theoretical/computational studies relevant to one or more laboratories of the Swartz Program at Yale and will be encouraged to participate in an expanding quantitative biology environment at Yale. More details here: https://neurojobs.sfn.org/job/31363/postdoctoral-swartz-fellowship-positions-in-theoretical-and-computational-neuroscience-at-yale/
Jim Magnuson
3-year Ph.D. project, funded by a la Caixa Foundation fellowship. Theme: computational and neural bases of bilingualism. Goal: develop a model of bilingual development in the complementary learning systems framework. Direct link to position: https://finder.lacaixafellowships.org/finder?position=4739 Detailed description: We seek a Ph.D. student with a strong background (and a master's degree) in a relevant domain (a cognitive, biological, or engineering field) and some experience with programming, data science, or computational modeling. The successful candidate will be involved in developing computational models and/or running behavioral and neuroimaging studies, collecting and analyzing data, and disseminating the results in scientific conferences (presentations/posters) and peer-reviewed journals. The selected candidate will develop advanced technical and analytical skills and will have the opportunity to develop original experiments under the supervisors’ guidance. Applicants should demonstrate a keen interest in the key areas of cognitive neuroscience that are relevant to the research, coupled with strong computational skills (e.g., Python, Matlab, R). Experience with neuroscience techniques (e.g., MEG, EEG, MRI) and with analysis of neuroimaging data is desirable but not essential. A committed motivation to learning computational modelling and advanced analysis tools is a must, as is the ability to acquire new skills and knowledge and to work both independently and as part of a multidisciplinary team. A good command of English (the working language of the BCBL) is required; knowledge of Spanish and/or Basque is an advantage but not required. The candidate will enrol as a PhD student at the University of the Basque Country (UPV/EHU) and is expected to complete the PhD programme within 36 months. Training in complementary skills will be provided during the fellowship, including communication and research dissemination, IT and programming skills, and ethics and professional conduct. The BCBL also provides support with living and welfare issues.
Dr. Scott Rich
The Neuron to Brain Lab is recruiting a Master’s student to contribute to our computational investigation of the role of heterogeneity in seizure resilience. This project will be directly mentored by Dr. Scott Rich, a senior postdoc under the supervision of Dr. Taufik Valiante and leader of the lab’s Computational Pillar. The project will focus on constructing a cortical neural network containing multiple populations of inhibitory interneurons, and using this network to assess how heterogeneity amongst inhibitory cells might uniquely contribute to seizure resilience. This project will utilize the lab’s unique access to electrophysiological data from live human cortical tissue to constrain neuron models, as well as a wealth of collaborations between the lab and other computational neuroscientists at the Krembil Brain Institute and the Krembil Centre for Neuroinformatics.
IMPRS for Brain & Behavior
Apply to our fully funded, international PhD program in the Max Planck Society! IMPRS for Brain & Behavior is a PhD program in Bonn, Germany that offers a competitive, world-class PhD training and research program in the field of neuroethology. IMPRS for Brain & Behavior is a collaboration between the research center caesar (a neuroethology institute of the Max Planck Society), the University of Bonn, and the German Center for Neurodegenerative Diseases (DZNE) in Bonn. The projects: 20 labs with an enormous variety of research projects are seeking outstanding PhD candidates to join their research. See our website (https://imprs-brain-behavior.mpg.de/faculty_members) for further information on our faculty and possible doctoral projects. Successful candidates will work in a young, dynamic, interdisciplinary and international environment, embedded in the local scientific communities in Bonn, Germany.
Prof Marcel van Gerven
Title: Bio-inspired learning algorithms for efficient and robust neural network models with a focus on neuromorphic computing and control of complex environments This project is embedded within the European Laboratory for Learning and Intelligent Systems (ELLIS) Unit in Radboud AI. The aim is to develop highly effective algorithms for training artificial neural network models which make use of (biologically plausible) information local to the individual nodes of the network. This locality will allow efficient implementation and testing of these algorithms on neuromorphic computing systems. The efficacy of the algorithms will be tested in simulations. This will be done in the context of relevant reinforcement learning tasks using a control system theoretic approach to interaction with modelled environments. This work entails a theoretical (mathematical) development of state-of-the-art biologically plausible learning rules and reinforcement learning strategies alongside implementation/testing of these algorithms in the form of neuromorphic computing algorithms and agent-based artificial neural network modelling. For more information and to apply see: https://www.ru.nl/english/working-at/vacature/details-vacature/?recid=1147553&pad=%2fenglish&doel=embed&taal=uk
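As a concrete illustration of what "information local to the individual nodes" can mean, here is a minimal sketch of one biologically plausible rule of this family, node perturbation, where each weight update uses only the presynaptic input, the node's own exploration noise, and a globally broadcast scalar reward. The task, network size, and constants are invented for the example; the project itself targets far richer algorithms and neuromorphic implementations.

```python
# Toy node-perturbation rule: every quantity a synapse needs is local to its
# node, except a single scalar reward broadcast to the whole network.
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 8))     # weights into 4 units from 8 inputs
eta, sigma = 0.1, 0.1                     # learning rate, exploration noise
r_bar = 0.0                               # running reward baseline

x = rng.standard_normal(8)                # fixed input pattern for the toy task
target = np.tanh(0.5 * np.ones(4))        # desired output

for step in range(500):
    xi = sigma * rng.standard_normal(4)   # each node's private exploration noise
    y = np.tanh(W @ x + xi)               # perturbed local activation
    reward = -np.mean((y - target) ** 2)  # scalar performance signal
    # Local three-factor update: (reward - baseline) x own noise x presynaptic input
    W += eta * (reward - r_bar) * np.outer(xi, x)
    r_bar += 0.1 * (reward - r_bar)       # track the baseline online

print(np.round(np.tanh(W @ x) - target, 3))   # residual error after learning
```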
SISSA cognitive neuroscience PhD
Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the Fall admission round are those funded by the "Fondo Sociale Europeo Plus", accessible through the first link above. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
Convergent large-scale network and local vulnerabilities underlie brain atrophy across Parkinson’s disease stages
AutoMIND: Deep inverse models for revealing neural circuit invariances
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
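The talk names CogFunSearch but not its internals, so the following is only a schematic of the general loop it describes: candidate models are Python programs scored by how well they predict recorded choices, and the best survivors are varied. To keep the sketch self-contained, candidates are parameterized Q-learning functions and "mutation" is parameter jitter; the real system asks an LLM to rewrite program source code, and all names and numbers here are hypothetical.

```python
# Schematic evolutionary search over behavioral models, scored by the
# log-likelihood they assign to a choice sequence (toy stand-in for an
# LLM-driven program search such as the one described in the talk).
import numpy as np

rng = np.random.default_rng(0)

def make_q_learner(alpha, beta):
    """A candidate 'program': Q-learning that scores recorded choices."""
    def log_likelihood(choices, rewards):
        q = np.zeros(2)
        total = 0.0
        for c, r in zip(choices, rewards):
            p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax policy
            total += np.log(p[c] + 1e-12)                   # predictive score
            q[c] += alpha * (r - q[c])                      # value update
        return total
    return log_likelihood, (alpha, beta)

# Synthetic stand-in for animal choice data (option 0 rewarded 70% of the time)
choices = rng.integers(0, 2, size=200)
rewards = ((choices == 0) & (rng.random(200) < 0.7)).astype(float)

population = [make_q_learner(rng.random(), 1.0) for _ in range(10)]
for generation in range(20):
    ranked = sorted(population, key=lambda m: m[0](choices, rewards), reverse=True)
    survivors = ranked[:3]
    # "Mutate" survivors; the real system would ask an LLM for program edits
    population = survivors + [
        make_q_learner(float(np.clip(a + 0.1 * rng.standard_normal(), 0.01, 1.0)),
                       float(max(0.1, b + 0.1 * rng.standard_normal())))
        for _, (a, b) in survivors for _ in range(3)
    ]

print(max(m[0](choices, rewards) for m in population))   # best log-likelihood
```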
Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia
Brain Emulation Challenge Workshop
The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms as applied to the forthcoming Brain Emulation Challenge.
Predicting traveling waves: a new mathematical technique to link the structure of a network to the specific patterns of neural activity
Towards open meta-research in neuroimaging
When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help automate meta-research and in turn make it easier to reproduce and update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools; for example, we will show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing it, and some examples. In particular, we want you to take away the fact that there are many interesting open questions in meta-research, and that you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
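To make the "automatic extraction" step tangible, here is a minimal illustration of the kind of pattern-based mining such tools perform; this is not the pubextract API, and the example text is invented.

```python
# Toy version of automated information extraction from article text:
# pull candidate participant counts with a regular expression.
import re

text = ("We recruited 34 healthy participants (18 female, mean age 24.1). "
        "Twelve subjects were excluded due to excessive head motion.")

pattern = re.compile(r"\b(\d+)\s+(?:healthy\s+)?(?:participants|subjects)\b",
                     re.IGNORECASE)
counts = [int(m.group(1)) for m in pattern.finditer(text)]
print(counts)   # [34] -- spelled-out numbers like "Twelve" need extra handling
```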
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data
There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activities remain hard to discriminate visually across recording channels, which limits the success of intracranial visual investigations. In this presentation, I present methods which quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods for SOZ localisation in epilepsy, providing insights for future research in this area.
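As a generic sketch of the quantification step (not the presenter's specific pipeline), one can summarise each intracranial channel with simple time-series features and rank channels by them; an explainable classifier would then operate on such a feature table. The features and data below are illustrative placeholders.

```python
# Per-channel time-series features of the kind used to characterise iEEG
# channels before classification (toy data; real pipelines differ).
import numpy as np

def channel_features(x):
    """x: (n_channels, n_samples) iEEG segment."""
    variance = x.var(axis=1)
    line_length = np.abs(np.diff(x, axis=1)).sum(axis=1)  # classic iEEG feature
    return np.column_stack([variance, line_length])

rng = np.random.default_rng(0)
ieeg = rng.standard_normal((16, 5000))
ieeg[3] *= 4.0                         # toy "abnormal" channel
feats = channel_features(ieeg)
ranking = np.argsort(-feats[:, 0])     # channels ordered by variance
print(ranking[:3])                     # channel 3 should surface first
```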
Bernstein Conference 2024
Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions, and the conference has established itself as one of the most renowned meetings worldwide in this field.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster, including connectivity at single-neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole-body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
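The core idea of a connectome-constrained model can be sketched in a few lines: fix the network's wiring from measured synapse counts and cell-type signs, and simulate simple rate dynamics on top. Everything below (threshold-linear units, random toy "connectome", scaling constants) is an assumption for illustration; the published model is far more detailed.

```python
# Sketch of a connectome-constrained rate model: wiring comes from (toy)
# synapse counts and neuron-type signs; only the dynamics are simulated.
import numpy as np

rng = np.random.default_rng(0)
N = 50
signs = np.where(rng.random(N) < 0.7, 1.0, -1.0)      # excitatory or inhibitory
counts = rng.poisson(2.0, size=(N, N)) * (rng.random((N, N)) < 0.1)
W = counts * signs[None, :] * 0.1                     # synapse counts -> weights
np.fill_diagonal(W, 0.0)

def simulate(W, stim, T=500, dt=1e-3, tau=0.01):
    """Threshold-linear rate dynamics: tau * dr/dt = -r + [W r + stim]_+."""
    r = np.zeros(W.shape[0])
    for t in range(T):
        r += (dt / tau) * (-r + np.maximum(0.0, W @ r + stim))
    return r

response = simulate(W, stim=(rng.random(N) < 0.1) * 1.0)
print(np.round(response[:5], 3))       # steady-state rates of a few neurons
```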
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
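A toy illustration of the framing (a bare proportional controller, not the data-driven control framework the talk develops): the neuron's output acts on the world so that its own future input is held near a set point despite unknown disturbances. The gain, set point, and noise model here are all invented for the example.

```python
# Neuron-as-controller toy: output y is chosen to drive the neuron's own
# input x toward a set point, closing the loop through the environment.
import numpy as np

rng = np.random.default_rng(0)
x, setpoint, gain = 0.0, 1.0, 0.5
history = []
for t in range(200):
    drift = 0.05 * rng.standard_normal()   # unknown environmental disturbance
    error = setpoint - x                   # what the neuron "senses"
    y = gain * error                       # the neuron's output acts on the world
    x += y + drift                         # future input reflects the output's effect
    history.append(x)
print(np.mean(history[-50:]))              # input hovers near the set point
```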
Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions
Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.
Tracking subjects' strategies in behavioural choice experiments at trial resolution
Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects' use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
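A minimal sketch of the approach as described in the abstract: keep a Beta posterior over the probability that choices are consistent with a given strategy, with exponential decay so old trials count less. The specific prior, decay constant, and detection threshold below are assumptions for illustration, not the paper's values.

```python
# Trial-resolution Bayesian strategy inference: Beta posterior over
# P(choice consistent with strategy), with evidence decay gamma.
import numpy as np

def strategy_probability(matches, gamma=0.9, a0=1.0, b0=1.0):
    """matches: 1 if a trial's choice is consistent with the strategy, else 0.
    Returns the posterior mean P(strategy) after each trial."""
    a, b = a0, b0
    means = []
    for m in matches:
        a = gamma * a + m            # decay old evidence, add new
        b = gamma * b + (1 - m)
        means.append(a / (a + b))
    return np.array(means)

# Toy session: the subject switches to the correct rule around trial 30
rng = np.random.default_rng(0)
matches = np.r_[rng.integers(0, 2, 30), (rng.random(40) < 0.9).astype(int)]
p = strategy_probability(matches)
print(np.argmax(p > 0.75))           # rough trial at which "learning" is detected
```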
Bio-realistic multiscale modeling of cortical circuits
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of the different neuron types in V1.
Diffuse coupling in the brain - A temperature dial for computation
The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.
Brain Connectivity Workshop
Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.
Cognitive Computational Neuroscience 2023
CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.
Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing
The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here, by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task-relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.
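One standard way to find phase singularity centres of this kind is to compute an instantaneous phase map and look for grid plaquettes around which the phase winds by a full turn. The sketch below uses a synthetic spiral; the paper's fMRI pipeline differs in its details, and the grid, phase construction, and thresholds here are assumptions.

```python
# Locate spiral (phase-singularity) centres as nonzero winding numbers
# of the phase field around elementary 2x2 grid loops.
import numpy as np

def phase_singularities(phase):
    """phase: (ny, nx) instantaneous phase map (in practice often obtained
    from the Hilbert transform of band-passed signals). Returns the winding
    number of each 2x2 plaquette; +1/-1 marks a spiral centre."""
    wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi))

# Synthetic spiral wave with its centre between grid points near the middle
y, x = np.mgrid[-16:16, -16:16]
phase = np.arctan2(y + 0.5, x + 0.5)
charges = phase_singularities(phase)
print(np.argwhere(charges != 0))                  # plaquette containing the centre
```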
Bernstein Student Workshop Series
The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.
COSYNE 2023
The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal, followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.
Mapping learning and decision-making algorithms onto brain circuitry
In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.
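For readers less familiar with the theory the talk builds on, the classic computation behind the dopamine hypothesis fits in a few lines: a temporal-difference error drives value updates. This is textbook TD(0) on a toy state chain, not the lab's heterogeneous-signal models.

```python
# TD(0) on a 5-state chain: delta is the reward prediction error that
# midbrain dopamine neurons are hypothesized to broadcast.
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.95
V = np.zeros(n_states)                               # value of each state
for episode in range(200):
    for s in range(n_states - 1):                    # deterministic walk s -> s+1
        reward = 1.0 if s == n_states - 2 else 0.0   # reward at the chain's end
        delta = reward + gamma * V[s + 1] - V[s]     # reward prediction error
        V[s] += alpha * delta
print(np.round(V, 2))                                # values rise toward the reward
```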
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models' brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
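To unpack what "explained variance" means in this setting, here is a sketch of a typical neural-predictivity metric of the kind such benchmarks aggregate: regress model features onto recorded responses and score held-out stimuli. The ridge regression, split, and synthetic data are assumptions for illustration, not the Brain-Score implementation.

```python
# Toy neural-predictivity score: linear map from model features to
# (synthetic) spike rates, evaluated on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
model_feats = rng.standard_normal((200, 64))       # model responses to 200 stimuli
neural = (model_feats @ rng.standard_normal((64, 10))) * 0.1 \
         + 0.5 * rng.standard_normal((200, 10))    # synthetic rates of 10 neurons

X_tr, X_te, y_tr, y_te = train_test_split(model_feats, neural, random_state=0)
reg = Ridge(alpha=1.0).fit(X_tr, y_tr)
score = r2_score(y_te, reg.predict(X_te), multioutput="uniform_average")
print(f"explained variance on held-out stimuli: {score:.2f}")
```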
Neuromatch 5
Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation, and computation by avalanche-supporting networks, are of interest to physics, computer science (computation theory as well as statistical and machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), carried out by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank-ordering by function probability, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience on optimal information processing.
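As a rough illustration of the setting, one can build a random threshold network with a tunable fraction of inhibitory links, clamp two Boolean inputs, and read off which Boolean function each downstream node computes once the cascade settles. The sketch below is a deliberately simplified, hypothetical version of such an experiment:

import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, p_connect, f_inhib = 12, 0.3, 0.25            # size, connectivity, antagonism

W = (rng.random((n, n)) < p_connect).astype(float)
W *= np.where(rng.random((n, n)) < f_inhib, -1.0, 1.0)   # inhibitory links
np.fill_diagonal(W, 0.0)

def settle(x0, steps=20):
    x = x0.copy()
    for _ in range(steps):
        drive = W @ x
        x[2:] = (drive[2:] > 0).astype(float)    # nodes 0, 1 are clamped inputs
    return x

tables = {i: [] for i in range(2, n)}
for a, b in product([0, 1], repeat=2):           # inputs (0,0) (0,1) (1,0) (1,1)
    x0 = np.zeros(n); x0[0], x0[1] = a, b
    x = settle(x0)
    for i in tables:
        tables[i].append(int(x[i]))

for i, tt in tables.items():
    print(f"node {i} computes truth table {tt}")  # e.g. [0, 0, 0, 1] is AND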
ISAM-NIG Webinars
Optimized Non-Invasive Brain Stimulation for Addiction Treatment
Invariant neural subspaces maintained by feedback modulation
This session is a double feature of the Cologne Theoretical Neuroscience Forum and of Computational and Systems Neuroscience at the Institute of Neuroscience and Medicine (INM-6) of the Jülich Research Center.
The Learning Salon
In the Learning Salon, we discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so we can all learn from one another.
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern, end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks on complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
The evolution of computation in the brain: Insights from studying the retina
The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of light as the natural input to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image-forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make-up in extant species. Animals as diverse as eagles and lampreys share the same retinal make-up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retinal neuron types can be linked across the entire vertebrate tree of life. And yet the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: how do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?
Modeling Visual Attention in Neuroscience, Psychology, and Machine Learning
Synergy of color and motion vision for detecting approaching objects in Drosophila
I work on color vision in Drosophila, identifying behaviors that involve color vision and understanding the neural circuits supporting them (Longden 2016). I have a long-term interest in understanding how neural computations operate reliably under changing circumstances, be they external changes in the sensory context or internal changes of state such as hunger and locomotion. On internal-state modulation of sensory processing, I have shown how hunger alters visual motion processing in blowflies (Longden et al. 2014) and identified a role for octopamine in modulating motion vision during locomotion (Longden and Krapp 2009, 2010). On responses to external cues, I have shown how one kind of uncertainty in the motion of the visual scene is resolved by the fly (Saleem, Longden et al. 2012), and I have identified novel cells for processing translation-induced optic flow (Longden et al. 2017). I like working with colleagues who use different model systems, to get at principles of neural operation that might apply across many species (Ding et al. 2016, Dyakova et al. 2015). I like work motivated by computational principles: my background is in computational neuroscience, with a PhD on models of memory formation in the hippocampus (Longden and Willshaw 2007).
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli
Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons, the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can measure these phenomena experimentally. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for testing the assumption of linear spatial integration, yet because they are buried in the middle of the retina it is challenging to measure their neural activity. Here, I present bipolar cell data in which I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I show that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells. Overall, I demonstrate that simple linear assumptions do not always apply and that more complex models are needed to describe what the retina “tells” the brain.
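The standard operational test of spatial linearity alluded to here drives the two halves of a receptive field with opposite contrast: a linear integrator sums the halves and produces a null response, whereas a cell with rectifying subunits still responds, the hallmark of nonlinear spatial integration. A toy version, with purely illustrative numbers:

import numpy as np

stim = np.array([+1.0, -1.0])            # opposite-contrast half-fields

def linear_cell(s):
    return s.sum()                       # sum over space, then output

def subunit_cell(s):
    return np.maximum(s, 0).sum()        # rectify each subunit, then sum

print("linear response:   ", linear_cell(stim))    # 0.0 -> null response
print("nonlinear response:", subunit_cell(stim))   # > 0 -> cell responds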
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP, measured in vitro, apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done under physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured under physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules that rely on the statistics of coincidences, so we expect our formalism to be useful for studying learning processes beyond the calcium-based plasticity rule.
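To make the mechanism concrete, here is a minimal, hypothetical sketch of a calcium-threshold plasticity rule of the kind described: calcium jumps are dominated by pre/post coincidences (the nonlinear term), calcium decays exponentially, and depression or potentiation engages above separate thresholds. All constants are invented for illustration and are not the fitted parameters of the model:

import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-3, 60.0                         # time step (s), duration (s)
tau_ca, theta_d, theta_p = 0.05, 1.0, 1.8  # calcium decay, LTD/LTP thresholds
gamma_d, gamma_p = 0.1, 0.2                # depression / potentiation rates
rate_pre = rate_post = 10.0                # firing rates (Hz)

w, ca = 0.5, 0.0
for _ in range(int(T / dt)):
    pre = rng.random() < rate_pre * dt
    post = rng.random() < rate_post * dt
    ca -= ca * dt / tau_ca                              # exponential decay
    ca += 0.3 * pre + 0.3 * post + 1.2 * (pre and post) # coincidence boost
    if ca > theta_p:
        w += gamma_p * (1 - w) * dt                     # potentiation
    elif ca > theta_d:
        w -= gamma_d * w * dt                           # depression

print("final synaptic efficacy:", round(w, 3))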
NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics
Recently, the field of computational neuroscience has seen an explosion in the use of trained recurrent neural network (RNN) models to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events – spikes – that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain in accounting for complex temporal dependencies in the biological single-neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties, which were shown to accurately model the responses of biological neurons, while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics driven by the mean signal received by the neural population and dynamics driven by the variance of the input across neurons, with respective timescales in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases when used in an RNN to solve specific tasks.
NMC4 Panel: The Contribution of Models vs Data
NMC4 Panel: NMC Around the Globe
For the first time, we are holding an NMC Around the Globe session: a panel of computational neuroscientists working on different continents who will discuss the challenges and milestones of doing science and training researchers in their home countries. We hope that our panelists will share with our diverse NMC audience the barriers they face, what they count as accomplishments, and how they would like the future of computational neuroscience to evolve locally and internationally.
Spontaneous activity competes with externally evoked responses in sensory cortex
The interaction between spontaneous and externally evoked neuronal activity is fundamental for a functional brain. Increasing evidence suggests that bursts of high-power oscillations in the 15-30 Hz beta-band represent activation of resting-state networks and can mask perception of external cues. Yet a real-time demonstration of the effect of beta-power modulation on perception has been missing, and little is known about the underlying mechanism. In this talk I will present the methods we developed to fill this gap, together with our recent results. We used a closed-loop stimulus-intensity adjustment system based on online burst-occupancy analysis in rats performing a forepaw vibrotactile detection task. We found that the masking influence of burst occupancy on perception can be counterbalanced in real time by adjusting the vibration amplitude. Offline analysis of firing rates and local field potentials across cortical layers and frequency bands confirmed that beta power in the somatosensory cortex anticorrelated with sensory evoked responses. Mechanistically, bursts in all bands were accompanied by transient synchronization of cell assemblies, but only beta bursts were followed by a reduction in firing rate. Our closed-loop approach reveals that spontaneous beta bursts reflect a dynamic state that competes with external stimuli.
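Schematically, such a closed-loop system estimates band-limited power in a sliding window and adjusts the upcoming stimulus accordingly. The toy controller below uses synthetic signals and invented gains, purely to show the control logic rather than the actual experimental system:

import numpy as np

rng = np.random.default_rng(3)
fs, win = 1000, 200                       # sampling rate (Hz), window (samples)
base_amp, gain, thresh = 1.0, 0.8, 0.6    # invented controller parameters

def beta_fraction(x):
    """Fraction of spectral power in the 15-30 Hz beta band."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= 15) & (freqs <= 30)].sum() / psd.sum()

for trial in range(5):
    lfp = rng.normal(size=win)            # synthetic background activity
    if trial % 2:                         # inject a synthetic beta burst
        lfp += 2.0 * np.sin(2 * np.pi * 22 * np.arange(win) / fs)
    occ = beta_fraction(lfp)
    amp = base_amp + gain * max(0.0, occ - thresh)   # raise amplitude on bursts
    print(f"trial {trial}: beta fraction {occ:.2f} -> stimulus amplitude {amp:.2f}")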
Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation
Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in rabbit dentate gyrus in the 1960s, Hebb's rule, "neurons that fire together wire together", has been a popular explanation for learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are usually interpreted as stabilizing network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb's rule, although one based entirely on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that this new theoretical framework, combining neural activity homeostasis and structural plasticity, provides a consistent explanation of our experimental observations.
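A minimal sketch of this negative-feedback rewiring idea (rates, set point, and the toy activity model are all invented): each neuron grows synaptic elements when its activity is below a set point and retracts them above it, and free axonal and dendritic elements are paired at random to form synapses, so connectivity follows activity without any Hebbian coincidence term:

import numpy as np

rng = np.random.default_rng(4)
n, setpoint, nu = 50, 5.0, 0.1             # neurons, target rate, growth rate

conn = np.zeros((n, n))                    # synapse counts
elements = np.full((n, 2), 5.0)            # free [axonal, dendritic] elements

for step in range(200):
    rate = 0.5 * conn.sum(axis=1) + rng.normal(2.0, 0.5, n)  # toy activity
    elements += nu * (setpoint - rate)[:, None]              # homeostatic drive
    elements = np.clip(elements, 0.0, None)
    pa = elements[:, 0] + 1e-9
    pd = elements[:, 1] + 1e-9
    pre = rng.choice(n, p=pa / pa.sum())                     # random pairing
    post = rng.choice(n, p=pd / pd.sum())
    if pre != post and elements[pre, 0] >= 1 and elements[post, 1] >= 1:
        conn[pre, post] += 1
        elements[pre, 0] -= 1
        elements[post, 1] -= 1

print("total synapses formed:", int(conn.sum()))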
“Mind reading” with brain scanners: Facts versus science fiction
Every thought is associated with a unique pattern of brain activity. In principle, it should therefore be possible to use these activity patterns as "brain fingerprints" for different thoughts and to read out what a person is thinking from their brain activity alone. Indeed, using machine learning, considerable progress has been made in such "brain reading" in recent years. It is now possible to decode which image a person is viewing, which film sequence they are watching, which emotional state they are in, or which intentions they hold in mind. This talk will provide an overview of the current state of the art in brain reading. It will also highlight the main challenges and limitations of this research field. For example, mathematical models are needed to cope with the high dimensionality of potential mental states. Furthermore, the ethical concerns raised by (often premature) commercial applications of brain reading will be discussed.
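Most of the decoding work surveyed here follows the same recipe: train a classifier on multivoxel activity patterns and test it on held-out trials. A toy sketch with synthetic data (invented dimensions; real pipelines add preprocessing, searchlights, and more careful validation):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 200, 100
labels = rng.integers(0, 2, n_trials)                  # which image was viewed
signature = rng.normal(size=n_voxels)                  # class-specific pattern
patterns = np.outer(labels - 0.5, signature) \
    + rng.normal(scale=2.0, size=(n_trials, n_voxels)) # plus measurement noise

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")  # well above chance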
When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?
There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom, and this may reflect powerful capacities for general computation or information storage. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension, and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity they produce. We then cover "top-down" ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
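A standard way to quantify the dimension of such collective dynamics is the participation ratio of the activity covariance spectrum, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, which runs from 1 (a single dominant mode) to N (fully isotropic activity). A short sketch with synthetic data whose latent dimensionality we control:

import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_time, k_latent = 100, 5000, 3

latents = rng.normal(size=(n_time, k_latent))          # low-dimensional dynamics
mixing = rng.normal(size=(k_latent, n_neurons))        # embed in many neurons
activity = latents @ mixing + 0.1 * rng.normal(size=(n_time, n_neurons))

eig = np.linalg.eigvalsh(np.cov(activity.T))           # covariance spectrum
pr = eig.sum() ** 2 / (eig ** 2).sum()
print(f"participation ratio: {pr:.1f} out of {n_neurons} neurons")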
Computational Models of Compulsivity
Neural mechanisms of altered states of consciousness under psychedelics
Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status in the treatment of various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses about the brain's functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing and using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows one to infer directed connectivity among brain regions from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms under the control of serotonin receptors, across the brain hierarchy, influence sensory and associative brain connectivity. Understanding these neural mechanisms of the subjective and therapeutic effects of psychedelics is critical for the rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is an NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute of Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.
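For orientation, the neuronal level of DCM in its simplest bilinear form writes the hidden regional states x as dx/dt = (A + sum_j u_j B_j) x + C u, where A is baseline effective connectivity, B_j its modulation by experimental input u_j, and C the driving-input weights. A toy forward integration of this equation (illustrative two-region matrices; actual DCM inverts the model from imaging data via a haemodynamic forward model):

import numpy as np

A = np.array([[-1.0, 0.2],               # baseline connectivity (2 regions)
              [0.4, -1.0]])
B = np.array([[0.0, 0.0],                # input modulates the 1 -> 2 coupling
              [0.5, 0.0]])
C = np.array([[1.0],                     # input drives region 1
              [0.0]])

dt, T = 0.01, 8.0
x = np.zeros(2)
for step in range(int(T / dt)):
    u = np.array([1.0 if 2.0 < step * dt < 4.0 else 0.0])  # boxcar input
    x = x + dt * ((A + u[0] * B) @ x + C @ u)              # Euler step
print("final regional states:", np.round(x, 3))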
Edge Computing using Spiking Neural Networks
Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort to develop alternative, more efficient computational paradigms. Inspired by the brain, spiking neural networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback passes do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on developing SNN approaches that target specific problems such as low response latency and continual learning. References: Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65. Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE. Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
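The basic unit of the networks discussed is typically a leaky integrate-and-fire (LIF) neuron, which communicates through discrete events rather than continuous rates; that sparseness is what makes SNNs attractive for low-power edge hardware. A minimal single-neuron sketch with illustrative parameters:

import numpy as np

rng = np.random.default_rng(7)
dt, T = 1e-3, 1.0                              # time step and duration (s)
tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0

v, spike_times = v_rest, []
for step in range(int(T / dt)):
    i_in = 1.2 + 0.5 * rng.normal()            # noisy input current
    v += dt / tau * (-(v - v_rest) + i_in)     # leaky integration
    if v >= v_thresh:                          # threshold crossing -> event
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T:.0f} s")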
Representation transfer and signal denoising through topographic modularity
To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupted. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase the signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes in which population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system's behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may underlie a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
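As a crude intuition for the role of topographic precision, the sketch below (all parameters invented, and rate-based rather than spiking) propagates a stimulus through a few modular layers while keeping a fraction p of the feed-forward drive within the matching module; higher p better preserves which module carries the signal:

import numpy as np

rng = np.random.default_rng(8)
n_layers, n_modules, n_per = 4, 4, 50

def propagate(p):
    act = rng.normal(0.0, 1.0, (n_modules, n_per))
    act[0] += 2.0                              # stimulus drives module 0
    for _ in range(n_layers - 1):
        nxt = np.zeros_like(act)
        for m in range(n_modules):
            nxt[m] = p * act[m].mean() + (1 - p) * act.mean() \
                + rng.normal(0.0, 0.3, n_per)  # within- vs cross-module drive
        act = nxt
    means = act.mean(axis=1)
    return means[0] - means[1:].mean()         # stimulated vs other modules

for p in (0.3, 0.6, 0.9):
    print(f"topographic precision {p:.1f}: signal margin {propagate(p):+.2f}")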
An optimal population code for global motion estimation in local direction-selective cells
Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal's own behavior. One prominent example is the encoding of the optic flow fields that are generated during self-motion and therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code that is matched to represent optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cell types encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). In flies, however, we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly. The four uniformly tuned T4/T5 subtypes described previously (Maisak et al., 2013) represent a local snapshot of this population code. The encoding of six types of optic flow in the fly, compared with four in mice, may be matched to the higher degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matched to the individual ethological constraints of each animal.
Advanced metamodelling on the o2S2PARC computational neurosciences platform facilitates stimulation selectivity and power efficiency optimization and intelligent control
FENS Forum 2024
Computational Neuroscience in the Arabic region
Neuromatch 5