Learning
Decoding stress vulnerability
Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can have adverse consequences for physical and mental health. Social stress, specifically, is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying and interpersonal violence, and it has been linked to increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to strong stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. In this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
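Because the argument turns on a single computation, using the prediction error both to infer latent causes and to learn the generative weights, a minimal linear sketch may help; this is an illustrative toy, not PredNet or any of the models in the talk:

```python
import numpy as np

# Toy linear predictive coding: a latent estimate z generates a top-down
# prediction of the input x; the residual error drives both inference (z)
# and learning (W).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))   # generative weights: latent -> input
x = rng.normal(size=16)                   # sensory input
z = np.zeros(8)                           # latent estimate
lr_z, lr_W = 0.1, 0.01

for _ in range(100):
    pred = W @ z                  # top-down prediction of the input
    err = x - pred                # prediction error
    z += lr_z * (W.T @ err)       # inference: update latent to reduce error
    W += lr_W * np.outer(err, z)  # learning: Hebbian-style weight update

print(float(np.mean(err ** 2)))   # error shrinks as the prediction improves
```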
Dr. Jasper Poort
Applications are invited for a postdoctoral research associate to study visual learning and attention brain circuits in mice. The post is based in the lab of Dr Jasper Poort in the Department of Physiology, Development and Neuroscience at the University of Cambridge. The successful candidate will work on a research project funded by the Wellcome Trust that will investigate the neural circuit mechanisms of visual learning and attention (see Poort et al., Neuron 2015; Khan et al., Nature Neuroscience 2018; Poort et al., Neuron 2021). The project combines two-photon calcium imaging, electrophysiology and optogenetic manipulation of different cell types and neural projections in visual cortical areas and decision-making brain areas to understand how mice (including mouse models of neurodevelopmental disorders) learn to become experts in different visually guided decision-making tasks and flexibly switch attention between tasks. The successful applicant will join a supportive and multidisciplinary research environment and collaborate with experts on learning and attention in rodents and humans, experts on learning and attention impairments in mental disorders, and computational neuroscientists. Applicants should have completed (or be about to submit) a PhD (research associate) or (under)graduate degree (research assistant) in neuroscience, biology, engineering, or other relevant disciplines. We are looking for someone with previous experience in two-photon imaging/electrophysiology/optogenetics/pharmacology/histology and behavioural training in mice, and strong data analysis skills (e.g. Matlab or Python). The position is available from February 2022 onwards for an initial two-year period with the possibility of extension. For more information about the lab see https://www.pdn.cam.ac.uk/svl/. Apply here: https://www.jobs.cam.ac.uk/job/32860/. In addition to the cover letter, CV, and contact details of two referees, applicants are asked to provide a brief statement (500 words) describing the questions and approach they consider important for the study of the neural circuits for learning and attention in mice, and their future research ambitions. The closing date for applications is 15 January 2022. Informal enquiries about the position can be made to Jasper Poort (jp816@cam.ac.uk).
References:
Poort, Wilmes, Chadwick, Blot, Sahani, Clopath, Mrsic-Flogel, Hofer, Khan (2021). Learning and attention increase neuronal response selectivity in mouse primary visual cortex through distinct mechanisms. Neuron. https://doi.org/10.1016/j.neuron.2021.11.016
Khan, Poort, Chadwick, Blot, Sahani, Mrsic-Flogel, Hofer (2018). Distinct learning-induced changes in stimulus selectivity and interactions of GABAergic interneuron classes in visual cortex. Nature Neuroscience. https://doi.org/10.1038/s41593-018-0143-z
Poort, Khan, Pachitariu, Nemri, Orsolic, Krupic, Bauza, Sahani, Keller, Mrsic-Flogel, Hofer (2015). Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex. Neuron. https://doi.org/10.1016/j.neuron.2015.05.037
Dr. Loren Frank
The Frank Lab at the University of California, San Francisco is looking for a Junior Specialist Technician to begin work January 2021 or later. This is a full-time paid position with a two-year minimum commitment required. During this time, the technician will work directly with a postdoctoral fellow and may also contribute to other lab projects as time allows. The lab investigates the neural underpinnings of learning and memory by collecting in vivo electrophysiological recordings from the hippocampus of rats while they learn and perform complex, memory-dependent behaviors. We have developed cutting-edge decoding algorithms to capture neural representations of spatial location as rats navigate an environment. The specific project aims to measure how such spatial representations are altered in aged rats compared to young rats and assess whether changes in spatial representation might drive changes in performance of a memory-dependent task. Please reach out to Anna Gillespie (postdoc) if interested. Responsibilities include:
- Handling and behavioral training of rats
- Construction of microelectrode drives
- Participation in rat implant surgeries
- Development of behavioral and neural data analyses
- Collection of large-scale electrophysiological and behavioral datasets
Gatsby Computational Neuroscience Unit
4-Year PhD Programme in Theoretical Neuroscience and Machine Learning Call for Applications! Deadline: 13 November 2022 The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning; inference, coding and neural dynamics; Bayesian and kernel methods; deep learning; with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data and nonparametric hypothesis testing. The unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML) and related UCL departments such as Computer Science; Statistical Science; Artificial Intelligence; the ELLIS Unit at UCL; Neuroscience; and the nearby Alan Turing and Francis Crick Institutes. Our PhD programme provides a rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical and computational neuroscience, with minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment. Full funding is available regardless of nationality. The unit also welcomes applicants who have secured or are seeking funding from other sources. To apply, please visit www.ucl.ac.uk/gatsby/study-and-work/phd-programme
Prof. Carmen Varela
Projects in the lab aim to discover biomarkers of sleep oscillations that correlate with memory consolidation and sleep quality. Sleep disruption is a common symptom of neurodegenerative disorders and is thought to be linked to their progression. Thalamocortical activity during sleep is critical for the contribution of sleep to memory consolidation, but it is not clear what oscillatory and cellular activity patterns relate to sleep quality and memory consolidation. The candidate will assist with administrative and scientific aspects of this project, using rats to investigate the patterns of thalamic activity that promote healthy sleep function. More generally, the lab uses state-of-the-art techniques to investigate the neural network mechanisms of cognitive behavior, with a focus on learning and memory and on the role of the neuronal circuits formed by the thalamus.
Dr. Jasper Poort, Prof. Ole Paulsen, Prof. Jeff Dalley, Dr. Steve Sawiak
Applications are invited for two Postdoctoral Research Associate positions to study GABAergic mechanisms in mouse visual learning. One will primarily focus on measuring GABA using magnetic resonance spectroscopy and will be based in the laboratories of Professor Jeff Dalley (Department of Psychology) and Dr Stephen Sawiak (Innes Building, West Cambridge); the other will primarily focus on measuring GABA using recently developed genetically encoded GABA sensors with two-photon microscopy and will be based in the laboratories of Professor Ole Paulsen and Dr Jasper Poort (both Department of Physiology, Development and Neuroscience) at the University of Cambridge. Both post holders will interact closely with each other and with other members of the consortium. The successful candidates will investigate the role of GABAergic interneurons in visual learning. The project will combine MRS-GABA, two-photon GABA and calcium imaging, electrophysiology, and optogenetic and pharmacological manipulation of cell types and neural projections in visual cortical areas and decision-making brain areas to understand how mice learn visual decision-making tasks. Applicants should have completed (or be about to submit) a PhD (Research Associate) or (under)graduate degree (Research Assistant) in neuroscience, biology, experimental psychology, engineering or other relevant disciplines. We are looking for someone with previous experience in imaging/electrophysiology/optogenetics/pharmacology and behavioural training in rodents, and strong data analysis skills (e.g. Matlab or Python). The positions are available from January 2022 onwards for an initial two-year period with the possibility of extension. For more information about the labs see: https://www.bio.cam.ac.uk/facilities/imaging/transneuro , https://noggin.pdn.cam.ac.uk/ and https://www.pdn.cam.ac.uk/svl/. In addition to the cover letter, CV and contact details of two referees, applicants are asked to provide a brief statement (500 words) describing the questions and approach they consider important for the study of the role of cortical inhibition in visual learning and their future research ambitions. The research is part of a new Wellcome Trust funded Collaborative Award that brings together a cross-disciplinary team of international experts to investigate the role of GABAergic inhibition in learning. The programme bridges work across species (mice, humans) and scales (local circuits, global networks) and capitalises on cutting-edge methodological developments in our team: a) human/animal ultra-high-field MR spectroscopy and functional brain imaging (Emir lab, Purdue; Kourtzi and Sawiak labs, Cambridge); b) neuroengineering tools including optical GABA sensors (Looger lab, UCSD) and electrophoretic drug delivery (Malliaras lab, Cambridge); c) cellular imaging, optogenetics, electrophysiology and neuropharmacology (Paulsen, Dalley and Poort labs, Cambridge; Rusakov lab, UCL). This network provides unique opportunities for cross-disciplinary training in innovative animal and human neuroscience methodologies, neurotechnology and computational science. Successful applicants will be integrated in a diverse collaborative team and have the opportunity to participate in workshops and exchange visits across labs to facilitate cross-disciplinary training and collaborative working.
Apply here: https://www.jobs.cam.ac.uk/job/32553/ Informal enquiries about the position can be made to Jasper Poort (jp816@cam.ac.uk), Ole Paulsen (op210@cam.ac.uk), Jeff Dalley (jwd20@cam.ac.uk) and MR physicist Steve Sawiak (sjs80@cam.ac.uk).
Edwin Robertson
An exciting opportunity has arisen for an experienced researcher to make a leading contribution to a project on “Modulating sleep with learning to enhance learning”, joining the laboratory of Professor Edwin M. Robertson within the Institute of Neuroscience & Psychology. The group examines the architecture of human memory. We integrate a variety of cutting-edge techniques, including behavioural analysis, functional imaging and brain stimulation, to build a picture of how the content and structure of a memory determine its fate (retained or enhanced) across different brain states (sleep vs. wakefulness). Currently, there is an opening in our group funded by the Leverhulme Trust (UK). It would suit a bright, enthusiastic, aspiring researcher willing to think carefully, creatively, critically and collaboratively (with the Principal Investigator) about their work in this project on human neuroscience. The group provides a superb training environment, with many members using it as a foundation to secure independent fellowships and faculty positions. The laboratory is housed within the Institute of Neuroscience & Psychology (INP), which is home to several Wellcome Trust Investigators and national academy members (Royal Society, Royal Society of Edinburgh).
Flavio Donato
The mission of the Donato lab is to understand the underlying principles that drive the assembly and function of neuronal circuits for navigation and memory. To reach our aims, we use a broad array of cutting-edge techniques, such as ultrasound-guided injection of viral vectors for neural circuit tracing, calcium imaging, single-unit recordings, and opto- and chemogenetics, coupled with a quantitative approach to the study of mouse behavior and advanced computational approaches for the analysis of big datasets. By these means, we are able to follow the activity of large populations of neurons longitudinally, from infancy to adulthood, to understand how cognition arises in the mammalian brain. For more information, please visit our lab websites at www.donatolab.com and https://www.biozentrum.unibas.ch/research/researchgroups/overview/unit/donato.
Melissa Caras
We are seeking a highly motivated applicant to join our team as a full-time research technician studying the neural basis of auditory perceptual learning. The successful candidate will be responsible for managing daily laboratory activities, including maintaining the animal colony, ordering supplies, preparing common use solutions, and overseeing lab safety compliance. In addition, the hired applicant will support ongoing projects in the lab by training and testing Mongolian gerbils on auditory detection and discrimination tasks, assisting with or performing survival surgeries, performing perfusions, and processing and imaging histological tissue. The candidate will have the opportunity to gain experience with a number of techniques, including in vivo electrophysiology, pharmacology, fiber photometry, operant conditioning, chemogenetics, and/or optogenetics. This position is an ideal fit for an individual looking to gain independent research experience before applying to graduate or medical school. This is a one-year position, with the option to renew for a second year.
Dr. Melissa Caras
We are looking for a postdoctoral fellow to study neuromodulatory mechanisms supporting auditory perceptual learning in Mongolian gerbils. The successful applicant will measure and manipulate neuromodulatory release, and assess its impact on cortical activity in freely-moving animals engaged in auditory detection tasks. A variety of techniques will be used, including in vivo multichannel electrophysiology and pharmacology, fiber photometry, novel genetically-encoded fluorescent biosensors, chemogenetics and/or optogenetics. The candidate will be highly involved in all aspects of the research, from design to publication, and will additionally have the opportunity to mentor graduate and undergraduate students.
Prof Edwin Robertson
The project seeks to identify a reciprocal relationship between learning and sleep. This would enable sleep not only to be affected by learning, but in turn to affect subsequent learning. Should this reciprocal interaction play out within a common set of neuroplastic mechanisms, it would allow learning to prime plasticity during subsequent sleep and so enhance learning the next day in an unrelated memory task. We will combine high-density sleep recording with learning tasks to identify those aspects of sleep affected by earlier learning, and those affecting subsequent learning. This will expose the student to behavioural analysis, sleep analysis, and advanced modelling of EEG data, including statistical/cluster analysis to determine the relationship between sleep and learning performance. The student will develop a wide range of skills. We will provide training across the scientific method and technical aspects including MATLAB, sleep scoring and sleep analysis. This will ensure the student is in a strong position for future research, or for exciting opportunities outside the University/academic sector. There is substantial scope for the successful applicant to sculpt the project to their interests.
Jian Liu
Three PhD studentships funded by the BBSRC MIBTP. Please find more information at https://sites.google.com/site/jiankliu/join-us
1. Towards a functional model for associative learning and memory formation (Drs Jian Liu and Rodrigo Quian Quiroga, CSN/NPB, University of Leicester)
2. Neuronal coupling across spatiotemporal scales and dimensions of cortical population activity (Drs Michael Okun and Jian Liu, CSN/NPB, University of Leicester)
3. Decoding movement from single neurons in motor cortex and their subcortical targets (Drs Todor Gerdjikov and Jian Liu, CSN/NPB, University of Leicester)
Rava Azeredo da Silveira
Several postdoctoral openings in the lab of Rava Azeredo da Silveira (Paris & Basel). The lab of Rava Azeredo da Silveira invites applications for Postdoctoral Researcher positions at ENS, Paris, and IOB, an associated institute of the University of Basel. Research questions will be chosen from a broad range of topics in theoretical/computational neuroscience and cognitive science (see the description of the lab's activity below). One of the postdoc positions to be filled in Basel will be part of a collaborative framework with Michael Woodford (Columbia University) and will involve projects relating the study of decision making to models of perception and memory. Candidates with backgrounds in mathematics, statistics, artificial intelligence, physics, computer science, engineering, biology, and psychology are welcome. Experience with data analysis and proficiency with numerical methods, in addition to familiarity with neuroscience topics and mathematical and statistical methods, are desirable. Equally desirable are a spirit of intellectual adventure, eagerness, and drive. The positions come with highly competitive work conditions and salaries.
Application deadline: applications will be reviewed starting on 1 November 2020.
How to apply: please send the following information in one single PDF to silveira@iob.ch:
1. a letter of motivation;
2. a statement of research interests, limited to two pages;
3. a curriculum vitæ including a list of publications;
4. any relevant publications that you wish to showcase.
In addition, please arrange for three letters of recommendation to be sent to the same email address. In all email correspondence, please include “APPLICATION-POSTDOC” in the subject header; otherwise the application will not be considered.
* ENS, together with a number of neighboring institutions (College de France, Institut Curie, ESPCI, Sorbonne Université, and Institut Pasteur), offers a rich scientific and intellectual environment, with a strong representation in computational neuroscience and related fields.
* IOB is a research institute combining basic and clinical research. Its mission is to drive innovations in understanding vision and its diseases and to develop new therapies for vision loss. IOB is an equal-opportunity employer with family-friendly work policies.
* The Silveira Lab focuses on a range of topics, which, however, are tied together by a central question: how does the brain represent and manipulate information? Among the more concrete approaches to this question, the lab analyses and models neural activity in circuits that can be identified, recorded from, and perturbed experimentally, such as visual neural circuits in the retina and the cortex. Establishing links between physiological specificity and the structure of neural activity yields an understanding of circuits as building blocks of cerebral information processing. On a more abstract level, the lab investigates the representation of information in populations of neurons, from a statistical and algorithmic—rather than mechanistic—point of view, through theories of coding and data analyses. These studies aim at understanding the statistical nature of high-dimensional neural activity in different conditions, and how this serves to encode and process information from the sensory world. In the context of cognitive studies, the lab investigates mental processes such as inference, learning, and decision-making, through both theoretical developments and behavioral experiments.
A particular focus is the study of neural constraints and limitations and, further, their impact on mental processes. Neural limitations impinge on the structure and variability of mental representations, which in turn inform the cognitive algorithms that produce behavior. The lab explores the nature of neural limitations, mental representations, and cognitive algorithms, and their interrelations.
N/A
The interdisciplinary M.Sc. Program in Cognitive Systems combines courses from neural/connectionist and symbolic Artificial Intelligence, Machine Learning, and Cognitive Psychology, to explore the fundamentals of perception, attention, learning, mental representation, and reasoning, in humans and machines. The M.Sc. Program is offered jointly by two public universities in Cyprus (the Open University of Cyprus and the University of Cyprus) and has been accredited by the national Quality Assurance Agency. The program is directed by academics from the participating universities, and courses are offered in English via distance learning by an international team of instructors.
Alessio Del Bue
The Italian Institute of Technology (IIT) and the University of Genoa are offering 4 PhD scholarships on Computational Vision, Automatic Recognition, and Learning. Research and training activities will be jointly conducted between the DITEN Department of the University of Genoa and IIT infrastructures in Genoa, at the PAVIS - Pattern Analysis and Computer Vision Research line. The PhD program will focus on various research topics, including 3D scene understanding, multi-modal learning, self-supervised and unsupervised deep learning, generative models for human and scene generation, novel graph operators for learning on large-scale and temporal data, and domain adaptation and generalization.
Vinita Samarasinghe
The research group uses diverse computational modeling approaches, including biological neural networks, cognitive modeling, and machine learning/artificial intelligence, to study learning and memory. The selected candidate will expand the computational modeling framework Cobel-RL and use it to study how episodic memory might be used to learn to navigate.
Stefano Nolfi
A scholarship of the Italian National PhD Program in Artificial Intelligence is available at the Institute of Cognitive Science and Technologies of the National Research Council in Rome. The research topic is “Self-organisation and learning in massive multiagent systems and robot swarms” with the supervision of Stefano Nolfi.
N/A
The position holder will be a member of the Hessian Center for Artificial Intelligence (hessian.AI), conduct research at the Center, and also be a member of the Centre for Cognitive Science. The scientific focus of the position is the computational and algorithmic modeling of behavioral data to understand the human mind. Exemplary research topics include computational-level models of perception, cognition, decision making, action, and learning, as well as of extended behavior and social interactions in humans; algorithmic models that can simulate, predict, and explain human behavior; and model-driven behavioral research on human cognition. The professorship is expected to strengthen the Hessian Center for Artificial Intelligence and the research focus on Cognitive Science of TU Darmstadt's Human Sciences department. Depending on the candidate's profile, there is the opportunity to participate in joint research projects currently running at TU Darmstadt, in particular the state-funded cluster projects "The Adaptive Mind (TAM)" and "The Third Wave of Artificial Intelligence (3AI)". In addition to excellent scientific credentials, we seek a strong commitment to teaching in the department's Bachelor's and Master's programs in Cognitive Science. Experience in attracting third-party funding and participation in academic governance are expected.
Prof Zoe Kourtzi
Post-doctoral position in Cognitive Computational Neuroscience at the Adaptive Brain Lab. The role involves combining high-field brain imaging (7T fMRI, MR spectroscopy), electrophysiology (EEG), computational modelling (machine learning, reinforcement learning) and interventions (TMS, tDCS, pharmacology) to understand network dynamics for learning and brain plasticity. The research programme bridges work across scales (local circuits, global networks) and species (humans, rodents) to uncover the neurocomputations that support learning and brain plasticity.
Dr. Lei Zhang
Dr. Lei Zhang is looking for two PhD students interested in the cognitive, computational, and neural basis of (social) learning and decision-making in health and disease. The newly opened ALP(E)N Lab (Adaptive Learning Psychology and Neuroscience Lab) addresses the fundamental question of the “adaptive brain” by studying the cognitive, computational, and neurobiological basis of (social) learning and decision-making in healthy individuals (across the lifespan) and in psychiatric disorders. The lab combines an array of approaches, including neuroimaging, patient studies and computational modelling (particularly hierarchical Bayesian modelling), with behavioural paradigms inspired by learning theories. The lab is based at the Centre for Human Brain Health and Institute of Mental Health at the University of Birmingham, UK, with access to exceptional facilities including MRI, MEG, TMS, and fNIRS. Funding is available through two competitive schemes from the BBSRC and MRC that provide a stipend, fees (at the UK rate) and a research allowance, amongst other benefits. International (i.e., outside the UK) applicants are welcome.
Grit Hein
The Translational Social Neuroscience Unit at the Julius-Maximilians-Universität Würzburg (JMU) in Würzburg, Germany is offering a two-year, 100% postdoc position in social neuroscience. The unit studies the psychological and neurobiological processes underlying social interactions and decisions. Current studies investigate drivers of human social behavior such as empathy, social norms, group membership, and egoism, as well as the social modulation of anxiety and pain processing. The unit uses neuroscientific methods (functional magnetic resonance imaging, electroencephalography) and psychophysiological measures (heart rate, skin conductance), combined with experimental paradigms from cognitive and social psychology and simulations of social interactions in virtual reality. The unit also studies social interactions in everyday life using smartphone-based surveys and mobile physiological sensors. The position initially runs until September 30, 2025, with the option of extension.
Dr Silvia Maggi, Professor Mark Humphries, Dr Hazem Toutonji
A fully funded PhD is available with Dr Silvia Maggi and Professor Mark Humphries (University of Nottingham) and Dr Hazem Toutonji (University of Sheffield). The project involves understanding how subjects respond to dynamic environments, which requires approaches that can track subjects' choice strategies at the resolution of single trials. The team recently developed a Bayesian inference algorithm that tracks learning and exploration at trial resolution. This project will build on that work to solve the crucial problems of determining which of a set of behavioural strategies a subject is using, and of incorporating evidence uncertainty into the detection of strategy learning and of transitions between strategies. Using the extended algorithm on datasets of rodents and humans performing decision tasks will let us test a range of hypotheses about how correct decisions are learnt and what innate strategies are used.
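To make "trial-resolution tracking" concrete, here is a minimal sketch of one way such tracking can work: a Beta posterior over whether choices match a candidate strategy, with exponential decay so old evidence fades. The decay rate, priors, and all other details are illustrative assumptions, not the team's published algorithm:

```python
import numpy as np

# Decaying Beta posterior over "does the subject's choice match strategy s?".
# gamma < 1 discounts old trials, so the estimate can track change points.
def track_strategy(matches, gamma=0.9, a0=1.0, b0=1.0):
    a, b = a0, b0
    p = []
    for m in matches:                  # m = 1 if the choice matched strategy s
        a = gamma * a + m              # decay old evidence, add new
        b = gamma * b + (1 - m)
        p.append(a / (a + b))          # posterior mean: P(strategy in use)
    return np.array(p)

# Example: the subject adopts the strategy halfway through the session
rng = np.random.default_rng(0)
matches = np.r_[rng.binomial(1, 0.2, 50), rng.binomial(1, 0.9, 50)]
p = track_strategy(matches)
print(p[49].round(2), p[99].round(2))  # low before the switch, high after
```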
AutoMIND: Deep inverse models for revealing neural circuit invariances
Memory Decoding Journal Club: Distinct synaptic plasticity rules operate across dendritic compartments in vivo during learning
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will present recent work at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models, in the form of Python programs, that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
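For readers outside the field, the reward prediction error mentioned here has a standard temporal-difference form, delta = r + gamma * V(s') - V(s); a minimal sketch of that computation (illustrative only, not the songbird analyses or CogFunSearch):

```python
# Tabular temporal-difference learning on a two-state cue -> outcome chain.
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.95):
    rpe = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)  # prediction error
    V[s] = V.get(s, 0.0) + alpha * rpe                    # value update
    return rpe

V = {}
for trial in range(200):
    td_update(V, "cue", "outcome", r=0.0)       # cue precedes the reward
    rpe = td_update(V, "outcome", "end", r=1.0)
# once the reward is predicted, the RPE at the outcome approaches zero
print(round(V["cue"], 2), round(V["outcome"], 2), round(rpe, 4))
```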
Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives
Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilisation. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state. However, it may still be useful to understand in general the relations between a larger theory and its 'parts', between microscopic and macroscopic theories, and between theories at different 'levels' of description. This is what I plan to do.
Open SPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
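As one concrete illustration of the kind of scripting such a platform enables, here is a generic PID tuning loop in pure Python. No OpenSPM function names are used, since the actual API is not shown here; the plant model and grid search are illustrative assumptions:

```python
# Generic PID controller and a toy gain search on a simulated first-order
# plant; a scriptable SPM API could drive a loop of this shape.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, measurement, dt):
        err = self.setpoint - measurement
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(kp, ki, kd, steps=200, dt=1e-3):
    pid, y, cost = PID(kp, ki, kd, setpoint=1.0), 0.0, 0.0
    for _ in range(steps):
        u = pid.step(y, dt)
        y += dt * (u - y)          # first-order plant dynamics (a stand-in)
        cost += abs(1.0 - y)       # accumulated tracking error
    return cost

best = min(((kp, ki) for kp in (1, 10, 100) for ki in (0, 10, 100)),
           key=lambda g: simulate(g[0], g[1], kd=0.0))
print("best (kp, ki):", best)
```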
From Spiking Predictive Coding to Learning Abstract Object Representation
In a first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. A second part of the talk will feature our lab’s efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as „kitchen object“ in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
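As a cartoon of the stated principle (learned inhibition cancels the predictable part of the drive, so only poorly predicted input is passed on), here is an anti-Hebbian decorrelation sketch; it is a hedged illustration, not the PCL architecture or a spiking implementation:

```python
import numpy as np

# Anti-Hebbian lateral inhibition: W learns to cancel the shared (predictable)
# component of the population drive; the residual keeps only private noise.
rng = np.random.default_rng(3)
N, lr = 20, 0.01
W = np.zeros((N, N))
X, R = [], []
for _ in range(2000):
    s = rng.normal()                       # shared, predictable component
    x = s + 0.1 * rng.normal(size=N)       # correlated population drive
    r = x - W @ x                          # residual after inhibition
    W += lr * np.outer(r, x)               # anti-Hebbian decorrelation
    np.fill_diagonal(W, 0.0)               # no self-inhibition
    X.append(x); R.append(r)
X, R = np.array(X), np.array(R)
print(X[-500:].var().round(3), R[-500:].var().round(3))  # residual var << input var
spikes = np.abs(R[-1]) > 0.5               # thresholding: only unpredicted drive "spikes"
print(int(spikes.sum()), "of", N, "units spike")
```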
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct high-resolution vision provided by the fovea towards areas in the environment. Gaze can be considered a proxy for attention or indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human robot interaction, by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation and human-robot interaction.
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
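Reservoir computing makes this test cheap because only a linear readout is trained on top of fixed recurrent dynamics, so connectome-derived wiring can be dropped in as the recurrent matrix. A minimal echo-state sketch, with a random sparse mask standing in for a fly wiring motif (an assumption for illustration, not the lab's pipeline):

```python
import numpy as np

# Echo-state reservoir: fixed recurrent weights W (here random and sparse;
# a connectome-derived motif would replace the mask), with only the linear
# readout w_out trained.
rng = np.random.default_rng(1)
N, T = 200, 500
mask = rng.random((N, N)) < 0.1                  # stand-in for a wiring motif
W = rng.normal(size=(N, N)) * mask
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius < 1
w_in = rng.normal(size=N)

u = np.sin(0.1 * np.arange(T))                   # input signal
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # fixed reservoir dynamics
    states[t] = x

target = np.roll(u, -1)                          # task: predict the next input
w_out, *_ = np.linalg.lstsq(states[:-1], target[:-1], rcond=None)
print(float(np.mean((states[:-1] @ w_out - target[:-1]) ** 2)))
```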
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
Harnessing Big Data in Neuroscience: From Mapping Brain Connectivity to Predicting Traumatic Brain Injury
Neuroscience is experiencing unprecedented growth in dataset size both within individual brains and across populations. Large-scale, multimodal datasets are transforming our understanding of brain structure and function, creating opportunities to address previously unexplored questions. However, managing this increasing data volume requires new training and technology approaches. Modern data technologies are reshaping neuroscience by enabling researchers to tackle complex questions within a Ph.D. or postdoctoral timeframe. I will discuss cloud-based platforms such as brainlife.io, that provide scalable, reproducible, and accessible computational infrastructure. Modern data technology can democratize neuroscience, accelerate discovery and foster scientific transparency and collaboration. Concrete examples will illustrate how these technologies can be applied to mapping brain connectivity, studying human learning and development, and developing predictive models for traumatic brain injury (TBI). By integrating cloud computing and scalable data-sharing frameworks, neuroscience can become more impactful, inclusive, and data-driven..
Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons
Join Us for the Memory Decoding Journal Club! A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience. This time, we’re diving into a groundbreaking paper: "Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons
Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala
Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala. This study by Marios Abatis et al. demonstrates how fear conditioning strengthens synaptic connections between engram cells in the lateral amygdala, revealed through optogenetic identification of neuronal ensembles and electrophysiological measurements. The work provides crucial insights into memory formation mechanisms at the synaptic level, with implications for understanding anxiety disorders and developing targeted interventions. Presented by Dr. Kenneth Hayworth, this journal club will explore the paper's methodology linking engram cell reactivation with synaptic plasticity measurements, and discuss implications for memory decoding research.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
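A toy sketch of the gating principle described above, in which poorly predicted episodes are prioritized for storage and well-predicted ones become candidates for forgetting; the error measure and threshold are illustrative assumptions, not the model's actual mechanics:

```python
import numpy as np

# Episodes whose prediction error (mismatch between the episodic input and
# the map-like stream's expectation) exceeds a threshold are stored;
# well-predicted episodes are candidates for forgetting.
def gate_memories(episodes, predictions, thresh=1.0):
    store, forget = [], []
    for ep, pred in zip(episodes, predictions):
        error = np.linalg.norm(ep - pred)       # prediction error
        (store if error > thresh else forget).append(ep)
    return store, forget

rng = np.random.default_rng(4)
episodes = rng.random((10, 32))
predictions = episodes + rng.normal(scale=0.1, size=(10, 32))  # well predicted
predictions[0] = rng.random(32)                 # one poorly predicted episode
store, forget = gate_memories(episodes, predictions)
print(len(store), "stored,", len(forget), "forgotten")  # 1 stored, 9 forgotten
```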
Learning Representations of Complex Meaning in the Human Brain
Circuit Mechanisms of Remote Memory
Memories of emotionally salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval, and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Dimensionality reduction beyond neural subspaces
Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
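For readers unfamiliar with the idea, the toy sketch below builds a data tensor from two slice-type components (a neuron-indexed loading paired with a trial-by-time slice, and a trial-indexed loading paired with a neuron-by-time slice); this is the forward model that a sliceTCA-style decomposition would invert. It is an illustrative assumption, not the authors' fitting code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 30, 40, 100   # neurons x trials x time

# Each slice-type component pairs a loading vector along one mode
# with a full 2-D "slice" spanning the other two modes.
neuron_loading = rng.random(N)             # neuron-indexed component
trial_time_slice = rng.normal(size=(K, T))
trial_loading = rng.random(K)              # trial-indexed component
neuron_time_slice = rng.normal(size=(N, T))

data = (np.einsum('i,kt->ikt', neuron_loading, trial_time_slice)
        + np.einsum('k,it->ikt', trial_loading, neuron_time_slice)
        + 0.1 * rng.normal(size=(N, K, T)))   # observation noise

# Fitting would minimize ||data - reconstruction||^2 over loadings and
# slices (e.g. by gradient descent); here we only verify the forward model.
recon = (np.einsum('i,kt->ikt', neuron_loading, trial_time_slice)
         + np.einsum('k,it->ikt', trial_loading, neuron_time_slice))
print("residual variance fraction:", np.var(data - recon) / np.var(data))
```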
The Neurobiology of the Addicted Brain
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering the impact of digital change on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of ‘fast science’, where, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus-independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (PsyTrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
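A minimal sketch of the policy-compression idea, assuming the rate-distortion form in which the policy is biased toward a stimulus-independent action marginal, pi(a|s) proportional to p(a) exp(beta * Q(s,a)); all numbers below are toy values.

```python
import numpy as np

def compressed_policy(Q, beta, n_iter=50):
    """Blahut-Arimoto-style iteration for pi(a|s) ~ p(a) * exp(beta * Q(s,a)).

    Small beta -> heavy compression: the policy collapses onto a
    stimulus-independent default p(a) (perseveration); large beta ->
    reward-maximizing, state-specific actions.
    """
    n_s, n_a = Q.shape
    p_a = np.ones(n_a) / n_a                      # marginal (default) policy
    for _ in range(n_iter):
        logits = np.log(p_a) + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = pi.mean(axis=0)                     # update action marginal
    return pi

Q = np.eye(3)                                     # each state has one best action
for beta in (0.1, 10.0):
    pi = compressed_policy(Q, beta)
    print(f"beta={beta}: prob. of best action ~ {np.diag(pi).mean():.2f}")
```

At low beta the policy is nearly uniform and state-independent; at high beta it sharpens onto the rewarded action, illustrating the complexity-reward trade-off.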
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
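To make the calcium-based plasticity idea concrete, here is a minimal threshold-rule sketch in the spirit of such models (e.g. Graupner and Brunel): calcium above a high threshold drives potentiation, intermediate calcium drives depression. The thresholds and rates below are illustrative assumptions, not fitted values.

```python
import numpy as np

def calcium_plasticity(ca_trace, w0=0.5, theta_d=1.0, theta_p=1.3,
                       gamma_d=0.01, gamma_p=0.02):
    """Update a synaptic weight from a calcium time series.
    All parameter values here are illustrative, not fitted."""
    w = w0
    for ca in ca_trace:
        if ca > theta_p:
            w += gamma_p * (1.0 - w)   # potentiation toward w = 1
        elif ca > theta_d:
            w -= gamma_d * w           # depression toward w = 0
    return w

rng = np.random.default_rng(2)
low_ca = rng.uniform(0.9, 1.2, size=1000)    # mostly in the depression band
high_ca = rng.uniform(1.2, 1.6, size=1000)   # mostly in the potentiation band
print("after low-calcium episode :", calcium_plasticity(low_ca))
print("after high-calcium episode:", calcium_plasticity(high_ca))
```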
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs. group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
Emergence of behavioural individuality from global microstructure of the brain and learning
Contribution of computational models of reinforcement learning to neuroscience. Keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus
Decomposing motivation into value and salience
Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
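A compact way to see the value/salience distinction is a learner that updates value from the signed prediction error and salience from the unsigned prediction error. The sketch below adds hypothetical domain-specific salience learning rates to mimic the asymmetric time constants described in the abstract; all names and values are invented for illustration.

```python
import numpy as np

def learn_value_and_salience(outcomes, alpha=0.1,
                             alpha_sal_pos=0.05, alpha_sal_neg=0.15):
    """Track a signed value estimate and an unsigned-salience estimate
    for one cue. Separate (hypothetical) salience learning rates for
    appetitive vs aversive outcomes stand in for domain-specific
    time constants."""
    value, salience = 0.0, 0.0
    for r in outcomes:
        pe = r - value
        value += alpha * pe                       # signed PE -> value
        a = alpha_sal_pos if r >= 0 else alpha_sal_neg
        salience += a * (abs(pe) - salience)      # unsigned PE -> salience
    return value, salience

rng = np.random.default_rng(3)
appetitive = rng.choice([1.0, 0.0], size=500)     # probabilistic reward
aversive = rng.choice([-1.0, 0.0], size=500)      # probabilistic punishment
print("appetitive cue (value, salience):", learn_value_and_salience(appetitive))
print("aversive cue   (value, salience):", learn_value_and_salience(aversive))
```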
Feedback-induced dispositional changes in risk preferences
Contrary to the original normative decision-making standpoint, empirical studies have repeatedly reported that risk preferences are affected by the disclosure of choice outcomes (feedback). Although no consensus has yet emerged regarding the properties and mechanisms of this effect, a widespread and intuitive hypothesis is that repeated feedback affects risk preferences by means of a learning effect, which alters the representation of subjective probabilities. Here, we ran a series of seven experiments (N = 538) tailored to decipher the effects of feedback on risk preferences. Our results indicate that the presence of feedback consistently increases risk-taking, even when the risky option is economically less advantageous. Crucially, risk-taking increases just after the instructions, before participants experience any feedback. These results challenge the learning account and advocate for a dispositional effect, induced by the mere anticipation of feedback information. Epistemic curiosity and regret avoidance may drive this effect in partial and complete feedback conditions, respectively.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data
There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure freedom in those patients. However, intracranial neural activities remain hard to visually discriminate across recording channels, which limits the success of intracranial visual investigations. In this presentation, I present methods which quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods in the localisation of the SOZ in epilepsy, providing insights for future research in this area.
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Beyond Homogeneity: Characterizing Brain Disorder Heterogeneity through EEG and Normative Modeling
Electroencephalography (EEG) has been thoroughly studied for decades in psychiatry research. Yet its integration into clinical practice as a diagnostic/prognostic tool remains unachieved. We hypothesize that a key reason is underlying patient heterogeneity, overlooked in psychiatric EEG research that relies on a case-control approach. We combine HD-EEG with normative modeling to quantify this heterogeneity using two well-established and extensively investigated EEG characteristics, spectral power and functional connectivity, across a cohort of 1674 patients with attention-deficit/hyperactivity disorder, autism spectrum disorder, learning disorder, or anxiety, and 560 matched controls. Normative models showed that deviations from population norms among patients were highly heterogeneous and frequency-dependent. The spatial overlap of deviations across patients did not exceed 40% for spectral power and 24% for functional connectivity. Considering individual deviations significantly enhanced comparative analyses, and patient-specific markers correlated with clinical assessments, a crucial step towards precision psychiatry through EEG.
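The deviation logic is easy to state in code: fit a normative model to controls, then z-score each patient against the age-conditioned prediction. The sketch below uses a plain linear fit and simulated data purely for illustration; published normative models are typically richer (e.g. Gaussian process or quantile regression).

```python
import numpy as np

def deviation_scores(feature, age, controls_feature, controls_age):
    """Z-score each patient's EEG feature against an age-conditioned
    normative model (here a simple linear fit to controls)."""
    coeffs = np.polyfit(controls_age, controls_feature, deg=1)
    pred = np.polyval(coeffs, age)
    resid_sd = np.std(controls_feature - np.polyval(coeffs, controls_age))
    return (feature - pred) / resid_sd

rng = np.random.default_rng(4)
controls_age = rng.uniform(6, 18, 560)
controls_power = 10 - 0.3 * controls_age + rng.normal(0, 1, 560)   # toy norm
patients_age = rng.uniform(6, 18, 100)
patients_power = 10 - 0.3 * patients_age + rng.normal(0, 2, 100)   # wider spread

z = deviation_scores(patients_power, patients_age, controls_power, controls_age)
print("fraction of patients with |z| > 2:", np.mean(np.abs(z) > 2))
```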
Top-down models of learning and decision-making in BG
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge—that is, the behavioral changes and intermediate stages observed during the acquisition—is less often directly and empirically compared. In this talk, I'm going to report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.
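One way to operationalize the generalisation lag is the number of extra learning steps needed for test accuracy to reach a level that training accuracy reached earlier. The sketch below computes this hypothetical summary statistic on simulated learning curves; it is not the authors' exact measure.

```python
import numpy as np

def generalisation_lag(train_acc, test_acc, level=0.8):
    """Steps between training accuracy and test accuracy first
    reaching `level`. A large lag means the learner first fits the
    training set and only later generalizes."""
    t_train = int(np.argmax(np.asarray(train_acc) >= level))
    t_test = int(np.argmax(np.asarray(test_acc) >= level))
    return t_test - t_train

steps = np.arange(100)
dnn_train = 1 - np.exp(-steps / 10)        # fast fit to training data
dnn_test = 1 - np.exp(-steps / 25)         # delayed generalization
human_train = 1 - np.exp(-steps / 15)
human_test = 1 - np.exp(-steps / 16)       # generalizes almost immediately
print("DNN lag  :", generalisation_lag(dnn_train, dnn_test))
print("human lag:", generalisation_lag(human_train, human_test))
```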
Physical Activity, Sedentary Behaviour and Brain Health
Prosocial Learning and Motivation across the Lifespan
2024 BACN Early-Career Prize Lecture. Many of our decisions affect other people. Our choices can decelerate climate change, stop the spread of infectious diseases, and directly help or harm others. Prosocial behaviours – decisions that help others – could contribute to reducing the impact of these challenges, yet their computational and neural mechanisms remain poorly understood. I will present recent work that examines prosocial motivation, how willing we are to incur costs to help others, prosocial learning, how we learn from the outcomes of our choices when they affect other people, and prosocial preferences, our self-reports of helping others. Throughout the talk, I will outline the possible computational and neural bases of these behaviours, and how they may differ from young adulthood to old age.
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Visuomotor learning of location, action, and prediction
Probing neural population dynamics with recurrent neural networks
Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via 2P Calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep-learning based dynamical systems models.
Trends in NeuroAI - Brain-like topography in transformers (Topoformer)
Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Mapping the Brain‘s Visual Representations Using Deep Learning
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Applied cognitive neuroscience to improve learning and therapeutics
Advancements in cognitive neuroscience have provided profound insights into the workings of the human brain, and the methods used offer opportunities to enhance performance, cognition, and mental health. Drawing upon interdisciplinary collaborations at the University of California San Diego Human Performance Optimization Lab, this talk explores the application of cognitive neuroscience principles in three domains to improve human performance and alleviate mental health challenges. The first section will discuss studies addressing the role of vision and oculomotor function in athletic performance and the potential to train these foundational abilities to improve performance and sports outcomes. The second domain considers the use of electrophysiological measurements of the brain and heart to detect, and possibly predict, errors in manual performance, as shown in a series of studies with surgeons as they perform robot-assisted surgery. Lastly, I will discuss findings from clinical trials testing personalized interventional treatments for mood disorders, in which the temporal and spatial parameters of transcranial magnetic stimulation (TMS) are individualized to test whether personalization improves treatment response and can provide predictive biomarkers to guide treatment selection. Together, these translational studies use the measurement tools and constructs of cognitive neuroscience to improve human performance and well-being.
The multi-phase plasticity supporting the winner effect
Aggression is an innate behavior across animal species. It is essential for competing for food, defending territory, securing mates, and protecting families and oneself. Since initiating an attack requires no explicit learning, the neural circuit underlying aggression is believed to be genetically and developmentally hardwired. Despite being innate, aggression is highly plastic. It is influenced by a wide variety of experiences, particularly winning and losing previous encounters. Numerous studies have shown that winning leads to an increased tendency to fight while losing leads to flight in future encounters. In the talk, I will present our recent findings regarding the neural mechanisms underlying the behavioral changes caused by winning.
Generative models for video games
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Improving Language Understanding by Generative Pre-Training
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
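The two-stage recipe can be summarized as a next-token pre-training loss followed by a fine-tuning loss that, per the paper, adds the language-modeling term as an auxiliary objective (L = L_task + lambda * L_LM). The numpy sketch below computes both losses on toy logits; the shapes and values are illustrative stand-ins for a real transformer's outputs.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def lm_loss(logits, targets):
    """Next-token prediction loss used for generative pre-training."""
    logp = log_softmax(logits)
    return -np.mean(logp[np.arange(len(targets)), targets])

def finetune_loss(task_logits, label, lm_logits, lm_targets, lam=0.5):
    """Fine-tuning objective: supervised task loss plus an auxiliary
    language-modeling term, L = L_task + lambda * L_LM."""
    task = -log_softmax(task_logits[None, :])[0, label]
    return task + lam * lm_loss(lm_logits, lm_targets)

rng = np.random.default_rng(5)
vocab, seq_len, n_classes = 100, 20, 2      # toy sizes
print("LM loss:", lm_loss(rng.normal(size=(seq_len, vocab)),
                          rng.integers(0, vocab, seq_len)))
print("fine-tune loss:", finetune_loss(rng.normal(size=n_classes), 1,
                                       rng.normal(size=(seq_len, vocab)),
                                       rng.integers(0, vocab, seq_len)))
```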
Cell-type-specific plasticity shapes neocortical dynamics for motor learning
How do cortical circuits acquire new dynamics that drive learned movements? This webinar will focus on mouse premotor cortex in relation to learned lick-timing and explore high-density electrophysiology using our silicon neural probes alongside region and cell-type-specific acute genetic manipulations of proteins required for synaptic plasticity.
Learning representations of specifics and generalities over time
There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.
Maintaining Plasticity in Neural Networks
Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology which can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks which are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.
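One widely used diagnostic for the rank collapse mentioned above is the effective rank of a feature matrix: the number of singular values above some fraction of the largest. The sketch below applies this to toy features; the tolerance is an arbitrary choice for illustration.

```python
import numpy as np

def effective_rank(features, tol=0.01):
    """Count singular values above tol * largest. A shrinking effective
    rank of hidden-layer features over a sequence of tasks is one
    empirical signature of plasticity loss."""
    s = np.linalg.svd(features, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(6)
healthy = rng.normal(size=(256, 64))                        # full-rank features
collapsed = rng.normal(size=(256, 3)) @ rng.normal(size=(3, 64))
print("healthy features  :", effective_rank(healthy))      # ~64
print("collapsed features:", effective_rank(collapsed))    # ~3
```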
Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
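The orthogonalization result can be quantified with something as simple as the cosine similarity between mean population vectors on the two tracks. The sketch below applies this to invented data in which two initially overlapping codes split onto disjoint neuron subsets; the construction is purely illustrative.

```python
import numpy as np

def orthogonalization_index(pv_track_a, pv_track_b):
    """Cosine similarity between mean population vectors on two tracks;
    values near zero indicate orthogonalized representations."""
    a, b = pv_track_a.mean(axis=0), pv_track_b.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
shared = rng.random(500)                           # early: one overlapping code
early_a = shared + 0.1 * rng.random((50, 500))
early_b = shared + 0.1 * rng.random((50, 500))

mask = rng.random(500) < 0.5                       # late: disjoint neuron subsets
late_a = np.where(mask, shared, 0.0) + 0.05 * rng.random((50, 500))
late_b = np.where(~mask, shared, 0.0) + 0.05 * rng.random((50, 500))

print("early similarity:", orthogonalization_index(early_a, early_b))  # ~1
print("late similarity :", orthogonalization_index(late_a, late_b))    # ~0
```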
Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)
Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory
Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Machine learning for reconstructing, understanding and intervening on neural interactions
Trends in NeuroAI - Brain-optimized inference for fMRI reconstructions
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
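The refinement loop is easy to paraphrase in code: sample a library conditioned on the current seed, score each sample by how well a brain-optimized encoding model matches the measured activity, keep the best, and shrink the sampling width. Both the "diffusion sampler" and the encoding model below are linear stand-ins, so this is only a structural sketch of the procedure, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

def encoding_model(image):
    """Stand-in for a brain-optimized encoding model mapping an image
    (here a flat feature vector) to predicted voxel activity."""
    return encoding_model.W @ image

encoding_model.W = rng.normal(size=(100, 64))     # toy image-to-voxel map

def sample_library(seed_image, width, n=32):
    """Stand-in for sampling from an image distribution conditioned on
    the seed reconstruction; `width` plays the role of the
    distribution's stochasticity, shrunk over iterations."""
    return seed_image + width * rng.normal(size=(n, seed_image.size))

target_brain = rng.normal(size=100)               # measured activity (toy)
recon = rng.normal(size=64)                       # seed reconstruction
width = 1.0
for it in range(10):
    library = sample_library(recon, width)
    errors = [np.linalg.norm(encoding_model(img) - target_brain)
              for img in library]
    recon = library[int(np.argmin(errors))]       # keep best-matching sample
    width *= 0.7                                  # narrow the distribution
    print(f"iter {it}: brain-alignment error = {min(errors):.2f}")
```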
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
Neuromatch 5
Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.
Analysis of burst sequences in mouse prefrontal cortex during learning
Bernstein Conference 2024
How Do Bees See the World? A (Normative) Deep Reinforcement Learning Model for Insect Navigation
Bernstein Conference 2024
Biologically plausible learning with a two-compartment neuron model in recurrent neural networks
Bernstein Conference 2024
Building mechanistic models of neural computations with simulation-based machine learning
Bernstein Conference 2024
Co-Design of Analog Neuromorphic Systems and Cortical Motifs with Local Dendritic Learning Rules
Bernstein Conference 2024
Knocking out co-active plasticity rules in neural networks reveals synapse type-specific contributions for learning and memory
Bernstein Conference 2024
Competition and integration of sensory signals in a deep reinforcement learning agent
Bernstein Conference 2024
Computational implications of motor primitives for cortical motor learning
Bernstein Conference 2024
Controversial Opinions on Model-Based and Model-Free Reinforcement Learning in the Brain
Bernstein Conference 2024
Continual learning using dendritic modulations on view-invariant feedforward weights
Bernstein Conference 2024
Correcting cortical output: a distributed learning framework for motor adaptation
Bernstein Conference 2024
The cost of behavioral flexibility: a modeling study of reversal learning using a spiking neural network
Bernstein Conference 2024
Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron
Bernstein Conference 2024
DelGrad: Exact gradients in spiking networks for learning transmission delays and weights
Bernstein Conference 2024
Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning
Bernstein Conference 2024
Effect of experience on context-dependent learning in recurrent networks
Bernstein Conference 2024
Dual role, single pathway: A pyramidal cell model of feedback integration in function and learning
Bernstein Conference 2024
Efficient learning of deep non-negative matrix factorisation networks
Bernstein Conference 2024
Enhancing learning through neuromodulation-aware spiking neural networks
Bernstein Conference 2024
Evaluating Memory Behavior in Continual Learning using the Posterior in a Binary Bayesian Network
Bernstein Conference 2024
A Study of a biologically plausible combination of Sparsity, Weight Imprinting and Forward Inhibition in Continual Learning
Bernstein Conference 2024
Evolutionary algorithms support recurrent plasticity in spiking neural network models of neocortical task learning
Bernstein Conference 2024
Excitatory and inhibitory neurons exhibit distinct roles for task learning, temporal scaling, and working memory in recurrent spiking neural network models of neocortex
Bernstein Conference 2024
Experiment-based Models to Study Local Learning Rules for Spiking Neural Networks
Bernstein Conference 2024
A feedback control algorithm for online learning in Spiking Neural Networks and Neuromorphic devices
Bernstein Conference 2024
Identifying cortical learning algorithms using Brain-Machine Interfaces
Bernstein Conference 2024
Inhibition-controlled Hebbian learning unifies phenomenological and normative models of plasticity
Bernstein Conference 2024
Learning an environment model in real-time with core knowledge and closed-loop behaviours
Bernstein Conference 2024
Learning Hebbian/Anti-Hebbian networks in continuous time
Bernstein Conference 2024
Learning neuronal manifolds for interacting neuronal populations
Bernstein Conference 2024
Learning predictable factors from sequences: it’s not only about slow features
Bernstein Conference 2024
Learning and using predictive maps for strategic planning
Bernstein Conference 2024
Linking Spontaneous Synaptic Activity to Learning
Bernstein Conference 2024
Neuromodulated online cognitive maps for reinforcement learning
Bernstein Conference 2024
Object detection with deep learning and attention feedback loops
Bernstein Conference 2024
Quantifying the learning dynamics of single subjects in a reversal learning task with change point analysis
Bernstein Conference 2024
Replay of Chaotic Dynamics through Differential Hebbian Learning with Transmission Delays
Bernstein Conference 2024
Response variability can accelerate learning in feedforward-recurrent networks
Bernstein Conference 2024
'Reusers' and 'Unlearners' display distinct effects of forgetting on reversal learning in neural networks
Bernstein Conference 2024
Adaptive brain-computer interfaces based on error-related potentials and reinforcement learning
Bernstein Conference 2024