Behavior
Developmental emergence of personality
The nature-versus-nurture debate has generally been framed as a genome-versus-experience dichotomy, and this framing has dominated our thinking about behavioral individuality and personality traits. In contrast, the role of nonheritable noise during brain development in generating behavioral variation is understudied. Using the Drosophila melanogaster visual system, I will discuss our efforts to dissect how individuality in circuit wiring emerges during development and how it helps generate individual behavioral variation.
Dr Jonathan Tang
This position will focus on the neural mechanisms underlying action learning in mice. Scientifically, the project aims to understand the neural circuits, activity, and behavioral dynamics behind how animals learn which actions to take for reward. Dopaminergic systems and associated circuitry will be the focus of investigation. The lab integrates wireless inertial sensors, closed-loop algorithms, optogenetics, and neural recording to pursue this goal.
Torben Ott
Research in the Decision Circuits Lab (BCCN Berlin, Germany) focuses on the neural principles that underlie decision-making. Employing state-of-the-art tools in systems neuroscience, we study cortical circuits and ask how dopamine and serotonin enable adaptive decisions. Your job: (i) research in systems neuroscience focusing on the role of cortical serotonin in temporal cognition and decision-making; (ii) use of state-of-the-art experimental tools such as quantitative psychophysics, high-throughput electrophysiology, chemical sensor imaging, and optogenetics in rats; (iii) collaborative development of analyses and computational models of behavior and cortical function.
Dr. Alexander Herman
We seek a postdoc to work on an exciting federally funded project examining cognitive effort and flexibility in traumatic brain injury (TBI). This project will use a combination of transcranial alternating current stimulation and computational modeling to improve symptoms of mental fatigue after TBI. Our interdisciplinary, joint psychiatry-neurosurgery lab offers a unique opportunity to learn or improve skills in electrophysiology, non-invasive brain stimulation, neuroeconomics, and computational modeling. The ideal candidate has a background in both engineering/computer science and cognitive neuroscience or a strong willingness to learn one or the other. The position offers the opportunity to gain experience working with patients to collect data, but strong staff support exists for this already. The focus of the post-doc will be on analyzing data and writing papers. See our website at www.hermandarrowlab.com
Dr. Michele Insanally
The Insanally Lab is hiring postdocs to study the neural basis of auditory perception and learning. We incorporate a wide range of techniques including behavioral paradigms, in vivo multi-region neural recordings, optogenetics, chemogenetics, fiber photometry, and novel computational methods. Our lab is super supportive, collaborative, and we take mentoring seriously! Located at Pitt, our lab is part of a large systems neuroscience community that includes CNBC and CMU. For inquiries, feel free to reach out to me here: mni@pitt.edu. To find out more about our work, visit Insanallylab.com
Dr. Peter Petersen
We are seeking a highly motivated postdoctoral fellow for a project addressing the generation and functions of theta oscillations in spatial navigation, using systems neuroscience and population-level approaches. The research will take place at the Department of Neuroscience (in.ku.dk) at the University of Copenhagen in the lab of Dr. Peter C. Petersen (PetersenLab.org). The project involves performing electrophysiological recordings from freely moving animals using chronically implanted high-density Neuropixels silicon probes, applying optogenetics for single-cell tagging, and behavioral manipulations. Learn more about the position and the application process here: https://employment.ku.dk/faculty/?show=157309
Prof. Carmen Varela
Projects in the lab aim to discover biomarkers of sleep oscillations that correlate with memory consolidation and sleep quality. Sleep disruption is a common symptom of neurodegenerative disorders and is thought to be linked to their progression. Thalamocortical activity during sleep is critical for the contribution of sleep to memory consolidation, but it is not clear what oscillatory and cellular activity patterns relate to sleep quality and memory consolidation. The candidate will assist with administrative and scientific aspects of this project, using rats to investigate the patterns of thalamic activity that promote healthy sleep function. More generally, the lab uses state-of-the-art techniques to investigate the neural network mechanisms of cognitive behavior, with a focus on learning and memory and on the role of the neuronal circuits formed by the thalamus.
Dr. Alessandro Filosa
Our group is looking for a curious and motivated PhD student for a project aiming to understand the mechanisms regulating the function of neuronal circuits controlling stress. Neuronal circuits eliciting stress responses evolved to help animals cope with adverse environmental conditions. Responses to mild short-term stress are beneficial, since adverse physical and psychological events activate adaptive reactions essential for survival, such as avoidance of potential threats. The same circuits, when not functioning properly, can also drive the emergence of maladaptive behaviors. In humans, dysregulation of stress circuits leads to several debilitating psychiatric conditions, including post-traumatic stress disorder, depression, anxiety, and occupational burnout. Therefore, investigating the neuronal substrate of stress is not only a fascinating endeavor to understand the basic functioning of neuromodulatory circuits, but is also important for developing better treatments for psychiatric diseases. We are exploiting the small size and translucency of the zebrafish central nervous system to study in vivo dynamic interactions between neurons involved in regulating stress. The work in our group aims to link basic circuit neuroscience to pathology by focusing on the following questions: • How do discrete neuronal circuits modulate stress-related behavior? • How does stress lead to adaptive changes modulating brain activity and behavior? • What are the cellular, synaptic, and circuit alterations leading to maladaptive responses to stress? We use a range of techniques in zebrafish genetics, molecular biology, advanced microscopy, and behavioral analysis. For more information and to apply, follow this link: https://www.mdc-berlin.de/career/jobs/phd-student-0
Professors Yale Cohen and Jennifer Groh
Yale Cohen (U. Penn; https://auditoryresearchlaboratory.weebly.com/) and Jennifer Groh (Duke U.; www.duke.edu/~jmgroh) seek a full-time post-doctoral scholar. Our labs study visual, auditory, and multisensory processing in the brain using neurophysiological and computational techniques. We have a newly funded NIH grant to study the contribution of corticofugal connectivity to auditory perception in non-human primate models. The work will take place at the Penn site. This will be a full-time, 12-month renewable appointment. Salary will be commensurate with experience and consistent with NIH NRSA stipends. To apply, send your CV along with contact information for 2 referees to: compneuro@sas.upenn.edu. For questions, please contact Yale Cohen (ycohen@pennmedicine.upenn.edu). Applications will be considered on a rolling basis, and we anticipate a summer 2022 start date. Penn is an Affirmative Action / Equal Opportunity Employer committed to providing employment opportunity without regard to an individual’s age, color, disability, gender, gender expression, gender identity, genetic information, national origin, race, religion, sex, sexual orientation, or veteran status.
Terufumi Fujiwara
My lab will investigate neural mechanisms of motor control in Drosophila by combining neurophysiology, behavior, engineering, genetics, and quantitative analysis.
Prof Ian Oldenburg
The Oldenburg lab combines optics, multiphoton optogenetics, calcium imaging, and computation to understand the motor system. The overall goal of the Oldenburg Lab is to understand the causal relationship between neural activity and motor actions. We use advanced optical techniques such as multiphoton holographic optogenetics to control neural activity with an incredible degree of precision, writing complex patterns of activity to distributed groups of cells. Only by writing activity into the brain at the scale at which it naturally occurs (individual neurons firing distinct patterns of action potentials) can we test theories of what population activity means. We read out the effects of these precise manipulations locally with calcium imaging, in neighboring brain regions with electrophysiology, and at the 'whole animal' level through changes in behavior. We are looking for curious, motivated, and talented people with a wide range of skill sets to join our group at all levels, from technician to postdoc.
Kevin Bolding
We are recruiting lab personnel. If systems neuroscience at the intersection of olfaction and memory excites you, now is an excellent time to get in touch. Our goal is to discover fundamental rules and mechanisms that govern information storage and retrieval in neural systems. Our primary focus will be establishing the changes in neural circuit and population dynamics that correspond to odor recognition memory. To bring our understanding of this process to a new level of rigor we will apply quantitative statistical approaches to relate behavioral signatures of odor recognition to activity and plasticity in olfactory circuits. We will use in vivo electrophysiology and calcium imaging to capture the activity of large neural populations during olfactory experience, and we will apply cell-type specific perturbations of activity and plasticity to piece apart how specific circuit connections contribute.
Carmen Falcone
A postdoctoral scholar position is available for highly motivated candidates with a PhD in Neuroscience, Molecular or Cell Biology, Evolutionary or Developmental Biology, Biochemistry, or related fields, to join the research group of Carmen Falcone, PhD, at the Department of Neuroscience at SISSA (Trieste, Italy), starting from April 2022. This position will provide the opportunity to be part of a new research team working on an exciting project aimed at studying the functions of interlaminar astrocytes in the primate brain, using iPSCs and xenograft mouse models together with molecular, cellular, and behavioral techniques. Although the contract for this position is for one year, it can be renewed for a maximum of 5 years if the candidate and the lab are a good fit.
Melissa Caras
We are seeking a highly motivated applicant to join our team as a full-time research technician studying the neural basis of auditory perceptual learning. The successful candidate will be responsible for managing daily laboratory activities, including maintaining the animal colony, ordering supplies, preparing common-use solutions, and overseeing lab safety compliance. In addition, the hired applicant will support ongoing projects in the lab by training and testing Mongolian gerbils on auditory detection and discrimination tasks, assisting with or performing survival surgeries, performing perfusions, and processing and imaging histological tissue. The candidate will have the opportunity to gain experience with a number of techniques, including in vivo electrophysiology, pharmacology, fiber photometry, operant conditioning, chemogenetics, and/or optogenetics. This position is an ideal fit for an individual looking to gain independent research experience before applying to graduate or medical school. This is a one-year position, with the option to renew for a second year.
Leena Ali Ibrahim
A funded postdoctoral position is available in the laboratory of Leena Ali Ibrahim at KAUST, Saudi Arabia. A major focus of the laboratory is understanding the circuit mechanisms by which an animal's internal states, via top-down circuits, influence sensory processing during development and learning. In addition, we are interested in exploring how the balance of bottom-up and top-down signaling is disrupted in neurodevelopmental and neuropsychiatric disorders. A number of potential projects can be supported depending on interest and expertise. We use a variety of approaches, including in vivo two-photon microscopy and behavior, slice physiology, optogenetics, and viral targeting of defined cell types.
Prof Laura Busse
Two PhD positions, part of interdisciplinary collaborations, are available in Laura Busse’s lab at the Faculty of Biology of the LMU Munich and Thomas Euler’s lab at the Center for Integrative Neuroscience in Tübingen. The fully funded positions are part of the DFG-funded Collaborative Research Center "Robust Vision: Inference Principles and Neural Mechanisms". In the project, we will explore the visual input received by the mouse visual system under natural conditions and study how such input is processed along key stages of the early visual system. The project builds on Qiu et al. (2020, bioRxiv) and will include opportunities for recording the visual input encountered by freely behaving mice under naturalistic conditions, statistical analysis of the recorded video material, quantitative assessment of behavior, and measurements (2P calcium imaging / electrophysiology) of neural responses from mouse retina, visual thalamus, and primary visual cortex to naturalistic movies. One of the positions will be placed in Thomas Euler’s lab (U Tübingen), with a focus on the retinal aspects of the project. A complementary PhD position in Laura Busse’s lab (LMU Munich), with a focus on central vision aspects, will closely collaborate on the development of the recording hardware and the software framework for data analysis and modelling. Both positions offer a thriving scientific environment, structured PhD programs, and numerous opportunities for networking and exchange. Interested candidates are welcome to establish contact via email to thomas.euler@cin.uni-tuebingen.de and busse@bio.lmu.de. More information about the labs can be found at https://eulerlab.de/ and https://visioncircuitslab.org/. For applications to Thomas Euler’s position within the project, see further instructions on the lab’s webpage (https://eulerlab.de/positions/). For applications to Laura Busse’s position within the project, please visit the LMU Graduate School of Systemic Neuroscience (GSN, http://www.gsn.uni-muenchen.de). The deadline for applications is February 15.
Professor Maria Geffen
The Geffen laboratory at the University of Pennsylvania has multiple postdoctoral positions open in systems neuroscience with the broad goal of understanding the neuronal circuits for auditory perception and learning. We are looking for energetic and talented scientists interested in studying the function of the brain. The postdoctoral fellow will have the opportunity to learn and apply a host of systems neuroscience techniques, including two-photon imaging of population activity, optogenetic manipulations, large-scale electrophysiology and behavior in mice. Prior experience with some of these methods is preferred, but not required. Depending on the candidate’s interests, all projects provide an opportunity to learn and apply advanced computational methods, including dynamic systems analysis of neuronal population activity; Bayesian approaches for understanding the relation between neuronal activity and behavior; machine learning methods to understand large-scale neuronal activity. We currently have openings for postdoctoral fellows for three projects: (1) Neuronal mechanisms for predictive coding: Auditory perception relies on predicting statistics of incoming signals, be it identifying the speech of a conversation partner in a crowded room or recognizing the sound of a babbling brook in a forest. The human brain detects statistical regularities in sounds as a fundamental aspect of prediction, evidenced by reduced responses to repeated sound patterns and enhanced responses to unexpected sounds. Multiple studies demonstrate that the neuronal responses to regular signals are reduced through adaptation, which can contribute to prediction. However, adaptation alone is not sufficient to account for prediction and studies at cellular and neuronal population levels in animals thus far lend only partial support to existing theories of predictive coding. The goal of the project is to close this gap in knowledge and to determine the circuits that predict signals and detect statistical regularity and its violation in auditory behavior. Funded by NIH NIDCD. (2) Neuronal circuits for learning-driven changes in auditory perception: Everyday auditory behavior depends critically on learning-driven changes in auditory perception that rely on neuronal plasticity within the auditory pathway. By combining state-of-the-art optogenetic, electrophysiological, behavioral and computational approaches, the project seeks to identify the function of specific circuit elements in auditory learning. Funded by NIH NIDCD. (3) Neuronal mechanisms for hearing under uncertainty: In everyday life, because both sensory signals and neuronal responses are noisy, important cognitive tasks, such as auditory categorization, are based on uncertain information. To overcome this limitation, listeners incorporate other types of signals, such as the statistics of sounds over short and long time scales and signals from other sensory modalities into their categorization decision processes. This project will identify the contribution of specific cell types to categorization and the neuronal mechanisms for how contextual signals bias auditory categorization. In collaboration with Dr. Yale Cohen and Dr. Konrad Kording, funded by NIH BRAIN Initiative. Our laboratory is a close community of fun-loving scientists, striving to help each other while exploring the mysteries of the brain. Our trainees have won numerous awards and have been awarded government and private foundation grants. 
We value diversity and promote equity in the scientific community and beyond. The systems neuroscience community at the University of Pennsylvania is top-notch and highly collaborative, and postdoctoral fellows will have opportunities to engage in interdepartmental initiatives, including MindCore, MINS, and CNI. Penn has a gorgeous campus and offers many cultural activities. Philadelphia is a beautiful city with world-class music, food, and entertainment. To apply, please email Dr. Geffen at mgeffen@pennmedicine.upenn.edu with a cover letter (summarizing your prior research experience, why you are interested in the position, and your future plans) and your CV.
Prof. Maria de la Paz Fernandez
An NIH-funded postdoctoral position is available in the Barnard Neurobiology Lab. Our lab is in the Department of Neuroscience & Behavior at Barnard College, a liberal arts college in Manhattan affiliated with Columbia University. Imaging facilities are available at Columbia’s Zuckerman Institute, which is a few blocks away. The position is fully funded for at least four years. Ideally, the position would start in the summer of 2021, but the start date is flexible. We are looking for a highly motivated and accomplished scientist interested in studying circadian timekeeping and sleep in Drosophila. The research project will involve the following techniques: genetic manipulation of neural networks supporting timekeeping and entrainment; behavioral analysis of clock-controlled behavioral outputs; live-imaging and immunohistochemical analysis of clock neurons. Desired qualifications: The candidate should have a Ph.D. in Biology, Neuroscience, Biochemistry, or a related field. Experience with live imaging, immunohistochemistry, or Drosophila neurobiology is desired. However, all candidates with a track record of scientific accomplishment and a strong interest in circadian biology in any species are encouraged to apply. Please submit a CV and a cover letter explaining your research and career goals. Please also include the names and contact information of three professional references. Contact: mfernand@barnard.edu. Application link: https://jobs.sciencecareers.org/job/547743/postdoctoral-research-fellow/
Dr Sylvia Schröder
“Integration of visual and behavioural signals in the early visual system” In this project, you will discover how retinal, cortical, and neuromodulatory inputs shape the responses of visual neurons in the superior colliculus. The goal of your PhD project is to understand the mechanisms of signal integration, i.e., which inputs to the superior colliculus shape its neural activity, and the advantages of this integration for visual processing. You will use two-photon imaging in awake mice to simultaneously record the activity of neurons in the superior colliculus as well as of axons originating in the retina, visual cortex, or brainstem nuclei such as the dorsal raphe (serotonin). You will compare the responses of the axonal inputs to those of the neurons, and you will observe how these signals change depending on the visual input and the behaviour of the animal. At the beginning of your project, you will develop an advanced imaging technique in collaboration with our industrial partner, Scientifica. You will adapt the existing two-photon microscope to image two separate fields of view simultaneously. This technique, termed multi-region imaging, will enable you to record inputs and outputs of the superior colliculus at sufficient detail, speed, and quantity.
Prof. Li Zhaoping
Experiments on rodent behavior and neural recording motivated by computational considerations; see https://webdav.tuebingen.mpg.de/agzl/data/Postdoc_Neursoscience_July_2020.pdf. Position open until filled.
Santiago Jaramillo
The Jaramillo lab investigates the neural basis of expectation, attention, decision-making and learning in the context of sound-driven behaviors in mice. Projects during the postdoctoral fellowship will study these cognitive processes by monitoring and manipulating neuronal activity during adaptive behaviors with cell-type and pathway specificity using techniques such as two-photon microscopy (including mesoscope imaging), high-density electrophysiology (using Neuropixels probes), and optogenetic manipulation of neural activity.
IMPRS for Brain & Behavior
Join our unique transatlantic PhD program in neuroscience! The International Max Planck Research School (IMPRS) for Brain and Behavior is a unique transatlantic collaboration between two Max Planck Neuroscience institutes – the Max Planck-associated research center caesar and the Max Planck Florida Institute for Neuroscience – and the partner universities, University of Bonn and Florida Atlantic University. It offers a completely funded international PhD program in neuroscience in either Bonn, Germany, or Jupiter, Florida. We offer an exciting opportunity to outstanding Bachelor's and/or Master's degree holders (or equivalent) from any field (life sciences, mathematics, physics, computer science, engineering, etc.) to be immersed in a stimulating environment that provides novel technologies to elucidate the function of brain circuits from molecules to animal behavior. The comprehensive and diverse expertise of the faculty in the exploration of brain-circuit function using advanced imaging and optogenetic techniques combined with comprehensive training in fundamental neurobiology will provide students with an exceptional level of knowledge to pursue a successful independent research career. Apply to Bonn, Germany by November 15, 2020 or to Florida, USA by December 1, 2020!
Prof. Carmen Varela
Gain expertise in rodent electrophysiology and behavior studying thalamic cellular and network mechanisms of sleep and memory consolidation. We have several openings to study the mechanisms of synaptic plasticity and cellular spike dynamics that contribute to episodic memory consolidation during sleep. Trainees will gain expertise in systems neuroscience using electrophysiology (cell ensemble and LFP recording) and behavior in rats, as well as expertise on the thalamic molecular and cellular mechanisms underlying normal and disrupted sleep-dependent memory consolidation and the use of non-invasive technologies to regulate them. Some of the projects are part of collaborations with Harvard University and the Scripps Florida Institute.
Ann Kennedy
We investigate principles of computation in meso-scale biological neural networks and the role of these networks in shaping animal behavior. We work in collaboration with experimental neuroscientists recording neural activity in freely moving animals engaged in complex behaviors, to investigate how animals' environments, actions, and internal states are represented across multiple brain areas. Our work is especially inspired by the interaction between subcortical neural populations organized into heavily recurrent neural circuits, including the basal ganglia and nuclei of the hypothalamus. Projects in the lab include 1) developing novel supervised, semi-supervised, and unsupervised approaches to studying the structure of animal behavior, 2) using behavior as a common basis with which to model the interactions between multiple brain areas, and 3) studying computation and dynamics in networks of heterogeneous neurons communicating with multiple neuromodulators and neuropeptides. The lab will also soon begin collecting behavioral data from freely interacting mice in a variety of model lines and animal conditions, to better chart the space of interactions between animal state and behavior expression. Come join us!
Carsten Mehring
The interdisciplinary MSc program in Neuroscience at the University of Freiburg, Germany, provides theoretical and practical training in neuroscience, covering both the foundations and latest research in the field. It is taught by lecturers from an international scientific community from multiple faculties and neuroscience research centres. The modular course structure caters to the specific backgrounds and research interests of each individual student with specialisations in neural circuits and behavior, computational neuroscience and neurotechnology. All courses are taught in English.
Michael J Frank, PhD
The Department of Cognitive and Psychological Sciences (CoPsy) at Brown University invites applications for a tenure-track Assistant or tenured Associate Professor beginning July 1, 2025. We anticipate hiring up to two candidates with the area open. However, candidates' research must focus on one of the following research themes: (1) the interface between artificial intelligence and cognition, (2) collective cognition and behavior, and/or (3) mechanisms of mental and brain health. In addition to building an externally funded nationally recognized research program, a successful candidate will provide effective instruction and advising to a diverse group of graduate and undergraduate students, and be willing to interact with colleagues from a wide range of disciplines and academic backgrounds. The CoPsy department is committed to creating a welcoming and supportive environment that values diversity. The department strongly encourages qualified candidates who can contribute to equity, diversity, and inclusion through their teaching, mentoring, service and research. Successful candidates are expected to have (1) a track record of excellence in research, (2) a well-specified research plan that is likely to lead to research funding, and (3) a readiness to contribute to teaching and mentoring at both the undergraduate and graduate level. The CoPsy department has a highly interdisciplinary research environment in the study of mind, brain, and behavior, offering curricular programs in Psychology, Cognitive Science, Cognitive Neuroscience, and Behavioral Decision Sciences. The Department is located in the heart of campus, and is associated with many Centers and Initiatives at the University, including the Carney Institute for Brain Science, Watson Institute for International and Public Affairs, Data Science Initiative, Center for the Study of Race and Ethnicity in America.
Generation and use of internal models of the world to guide flexible behavior
Astrocytes: From Metabolism to Cognition
Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, freely existing apart from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.
Memory Decoding Journal Club: Behavioral time scale synaptic plasticity underlies CA1 place fields
Behavioral time scale synaptic plasticity underlies CA1 place fields
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
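A minimal sketch of the reward-prediction-error computation referred to above may help readers unfamiliar with the term: a Rescorla-Wagner-style update in which the dopamine-like teaching signal is the difference between the outcome and the current value estimate. The learning rate, reward probability, and variable names below are illustrative assumptions, not details from the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 0.1            # assumed learning rate
    V = 0.0                # learned value of a practiced action (e.g., a song rendition)
    for trial in range(200):
        reward = rng.binomial(1, 0.7)   # hypothetical 70% chance the attempt is "good"
        rpe = reward - V                # signed reward prediction error
        V += alpha * rpe                # Rescorla-Wagner-style update
    # early trials produce large prediction errors; as V converges to the mean
    # reward rate (~0.7), the errors shrink toward residual outcome noise
    print(round(V, 2))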
Developmental and evolutionary perspectives on thalamic function
Brain organization and function are complex topics. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss developmental and evolutionary (comparative anatomy) data on the thalamus and how it fits within forebrain networks. I will address questions including when the thalamus appeared in evolution, how it is organized across the vertebrate lineage, and how changes in the organization of forebrain networks can affect behavioral repertoires.
An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition
We developed a novel paradigm measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG among 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not CCTV-like images, suggesting optimal resolution requirements. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for objective validation of face processing abilities in forensic applications, including assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
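For readers unfamiliar with frequency tagging, here is a minimal sketch of the generic analysis implied by the design: compute the spectrum of an EEG channel and read out the amplitude at the 1-Hz oddball frequency and its harmonics, skipping the 6-Hz base stimulation frequency. The synthetic signal, sampling rate, and harmonic range are assumptions for illustration, not the study's actual pipeline.

    import numpy as np

    fs, dur = 250, 60                               # assumed sampling rate (Hz), duration (s)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(1)
    # synthetic EEG: strong 6 Hz base response, weak 1 Hz oddball response, plus noise
    eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 1 * t) \
          + rng.standard_normal(t.size)

    spectrum = np.abs(np.fft.rfft(eeg)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    def amp_at(f_hz):
        return spectrum[np.argmin(np.abs(freqs - f_hz))]

    oddball_harmonics = [1, 2, 3, 4, 5, 7]          # skip 6 Hz, the base stimulation rate
    oddball_amp = sum(amp_at(f) for f in oddball_harmonics)
    print(f"oddball response: {oddball_amp:.3f}, base response: {amp_at(6):.3f}")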
Neural mechanisms of optimal performance
When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated inverted-U relationship between performance and arousal is the Yerkes-Dodson law, and its optimum is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance is optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
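To make the inverted-U relationship concrete, the sketch below fits a quadratic to synthetic pupil (arousal) versus accuracy data and locates the arousal level that maximizes performance. The data-generating curve and the quadratic fit are illustrative assumptions, not the analysis used in the study.

    import numpy as np

    rng = np.random.default_rng(2)
    pupil = rng.uniform(0, 1, 500)                        # per-trial arousal proxy (a.u.)
    # synthetic accuracy that peaks at intermediate arousal
    p_correct = np.clip(0.9 - 2.0 * (pupil - 0.5) ** 2, 0, 1)
    correct = rng.binomial(1, p_correct)

    # inverted-U (quadratic) fit of performance against arousal
    b2, b1, b0 = np.polyfit(pupil, correct, deg=2)
    optimal_arousal = -b1 / (2 * b2)                      # vertex of the fitted parabola
    print(f"fitted optimum near pupil = {optimal_arousal:.2f}")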
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Skin-brain axis for tactile sensations
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed-response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions contain nearly no decodable information, as if the circuit were set up to make informative dimensions stiff, i.e., resistive to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, I will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
Examining dexterous motor control in children born with a below elbow deficiency
Structural & Functional Neuroplasticity in Children with Hemiplegia
About 30% of children with cerebral palsy have congenital hemiplegia, resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual coordination. Congenital hemiplegia has a profound effect on each child's life and is therefore of great importance to public health. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present my team's recent research on understanding neuroplasticity in children with congenital hemiplegia using a multimodal neuroimaging approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data on functional improvements in upper-extremity motor and sensory function resulting from rehabilitation with a robotic system that involves active participation of the child in a video-game setup. Our research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Circuit Mechanisms of Remote Memory
Memories of emotionally-salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Dimensionality reduction beyond neural subspaces
Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view to additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
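As background for the decomposition mentioned above, the sketch below runs a standard trial x neuron x time tensor component analysis (CP/TCA) with the tensorly library on synthetic data. sliceTCA itself generalizes this factorization by allowing slice-type components, so this is only an illustration of the general approach; the tensor shape and rank are arbitrary assumptions.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(3)
    data = rng.standard_normal((40, 60, 120))   # trials x neurons x time bins (synthetic)

    # classic TCA/CP: approximate the tensor as a sum of rank-1 components
    weights, factors = parafac(tl.tensor(data), rank=4, n_iter_max=200)
    trial_f, neuron_f, time_f = factors
    print(trial_f.shape, neuron_f.shape, time_f.shape)
    # (40, 4) (60, 4) (120, 4): trial loadings, neuron loadings, temporal profiles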
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to parse a complex and cluttered visual environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogous to how sensory information is organized in the brain. Moreover, I will also propose that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions that code for different types of representational content, a scheme I call contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
Mouse Motor Cortex Circuits and Roles in Oromanual Behavior
I’m interested in structure-function relationships in neural circuits and behavior, with a focus on motor and somatosensory areas of the mouse’s cortex involved in controlling forelimb movements. In one line of investigation, we take a bottom-up, cellularly oriented approach and use optogenetics, electrophysiology, and related slice-based methods to dissect cell-type-specific circuits of corticospinal and other neurons in forelimb motor cortex. In another, we take a top-down ethologically oriented approach and analyze the kinematics and cortical correlates of “oromanual” dexterity as mice handle food. I'll discuss recent progress on both fronts.
Mapping the neural dynamics of dominance and defeat
Social experiences can have lasting changes on behavior and affective state. In particular, repeated wins and losses during fighting can facilitate and suppress future aggressive behavior, leading to persistent high aggression or low aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.
The circuitry behind innate visual behavior
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus-independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (PsyTrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
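To illustrate the policy-compression idea in Gershman's segment, here is a minimal numpy sketch of a rate-distortion-style solution in which the policy is a softmax over action values tilted toward a learned marginal "default" action distribution. The reward matrix, the value of beta, and the iteration scheme are illustrative assumptions rather than the presented model.

    import numpy as np

    Q = np.array([[1.0, 0.2, 0.0],      # hypothetical state x action reward matrix
                  [0.2, 1.0, 0.0],
                  [0.0, 0.2, 1.0]])
    p_s = np.ones(3) / 3                # uniform state distribution
    beta = 1.5                          # resource parameter: lower beta = more compression

    p_a = np.ones(3) / 3                # marginal ("default") action distribution
    for _ in range(100):                # Blahut-Arimoto-style alternation
        policy = p_a * np.exp(beta * Q)             # pi(a|s) proportional to p(a)exp(beta*Q)
        policy /= policy.sum(axis=1, keepdims=True)
        p_a = p_s @ policy                          # re-estimate the default action
    # low beta pulls the policy toward p_a, yielding perseverative, stimulus-
    # independent default actions; high beta approaches the greedy policy
    print(np.round(policy, 2))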
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
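Since the Brunel segment compares inferred plasticity rules against classical observations such as STDP, a minimal sketch of the textbook pair-based STDP window is given below for orientation. The amplitudes and time constants are generic assumptions, not values from the talk.

    import numpy as np

    A_plus, A_minus = 0.01, 0.012        # assumed potentiation/depression amplitudes
    tau_plus, tau_minus = 20.0, 20.0     # assumed time constants (ms)

    def stdp_dw(dt_ms):
        """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
        if dt_ms > 0:                                    # pre before post: potentiation
            return A_plus * np.exp(-dt_ms / tau_plus)
        return -A_minus * np.exp(dt_ms / tau_minus)      # post before pre: depression

    for dt in (-40, -10, 10, 40):
        print(dt, round(stdp_dw(dt), 4))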
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available for learning about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized versus group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
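The core mechanism described here, in which avoidance prevents the belief updates that would correct it, can be illustrated with a toy agent. The sketch below is my own simplified illustration under stated assumptions (a single group value, a simple delta rule), not the simulations presented in the talk; all names and parameter values are invented.

```python
# Toy sketch: an agent only updates its belief about a group when it approaches,
# so an early negative impression can freeze a biased belief indefinitely.
import numpy as np

def simulate_agent(true_group_value=0.5, n_steps=200, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    belief = -0.8                                  # one bad first impression
    history = []
    for _ in range(n_steps):
        approach = belief > 0.0                    # avoid groups believed to be negative
        if approach:
            outcome = true_group_value + rng.normal(scale=0.3)
            belief += alpha * (outcome - belief)   # learning only happens on approach
        history.append(belief)
    return np.array(history)

print(simulate_agent()[-1])   # the belief never recovers despite a positive true group value
```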
Decomposing motivation into value and salience
Humans and other animals approach reward, avoid punishment, and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to the information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. The insula in particular appears to be central for this function, encoding a subjective salience prediction error similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
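To make the value/salience distinction concrete, a signed prediction error tracks valence while an unsigned prediction error tracks salience. The following is a minimal sketch under my own simplifying assumptions (a single cue, delta-rule updates, illustrative learning rates standing in for domain-specific time constants); it is not the authors' model.

```python
# Signed (value) vs unsigned (salience) prediction errors, with a faster
# salience learning rate for aversive than for appetitive outcomes.
import numpy as np

def simulate(outcomes, alpha_value=0.2, alpha_sal_app=0.1, alpha_sal_ave=0.3):
    """outcomes: array of signed outcomes (+1 reward, -1 punishment)."""
    V, S = 0.0, 0.0                       # cue value and cue salience estimates
    for r in outcomes:
        d = r - V                         # signed (value) prediction error
        u = abs(r) - S                    # unsigned (salience) prediction error
        V += alpha_value * d
        alpha_s = alpha_sal_ave if r < 0 else alpha_sal_app
        S += alpha_s * u                  # illustrative domain-specific time constants
    return V, S

rng = np.random.default_rng(0)
outcomes = rng.choice([1.0, -1.0], size=200)
V, S = simulate(outcomes)
print(f"learned cue value {V:.2f}, cue salience {S:.2f}")
```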
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems at the level of both behavior and neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—and methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10μm to 2mm across a 325 square centimeter area for extended durations—ranging from hours to days—at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
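The closed-loop idea can be sketched in a few lines: find the organism in the current frame, then command the XY stage to cancel its offset from the image center. The following is a rough, generic sketch of such a loop, not Trackoscope's actual control code; the camera frame is simulated, and the stage interface, gain, and pixel calibration are hypothetical placeholders.

```python
# Generic centroid-tracking control step for an autonomous XY stage.
import numpy as np

def centroid(frame, thresh=0.5):
    """Return the (row, col) centroid of above-threshold pixels, or None."""
    mask = frame > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def tracking_step(frame, gain=0.8, px_per_um=2.0):
    """One control iteration: convert pixel error into a stage move (in um)."""
    c = centroid(frame)
    if c is None:
        return 0.0, 0.0                        # organism lost: hold position
    err_row = c[0] - frame.shape[0] / 2        # pixel offset from image center
    err_col = c[1] - frame.shape[1] / 2
    # Proportional correction; the sign convention depends on how the stage is mounted.
    return -gain * err_row / px_per_um, -gain * err_col / px_per_um

frame = np.zeros((480, 640))                   # toy frame with a bright blob off-center
frame[300:320, 400:430] = 1.0
print(tracking_step(frame))                    # proportional stage move (dy_um, dx_um)
```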
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge—that is, the behavioral changes and intermediate stages observed during the acquisition—is less often directly and empirically compared. In this talk, I'm going to report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.
Principles of Cognitive Control over Task Focus and Task Switching
2024 BACN Mid-Career Prize Lecture. Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
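The two proposed heuristics map naturally onto simple update rules. The sketch below is my own illustration of that idea, with made-up contexts, demands, and parameters; it is not the model from the lecture.

```python
# Recency heuristic: a delta rule nudges the control setting toward recent task demand.
# Recognition heuristic: a matching context reinstates the control setting stored with it.

def update_control(control, demand, alpha=0.3):
    """Nudge control toward the demand just experienced (recency heuristic)."""
    return control + alpha * (demand - control)

def reinstate_control(context, episodic_memory, default):
    """Reuse the control setting bound to a matching context (recognition heuristic)."""
    return episodic_memory.get(context, default)

episodic_memory = {}
control = 0.5
for context, demand in [("A", 1.0), ("B", 0.2), ("A", 1.0)]:
    control = reinstate_control(context, episodic_memory, control)
    control = update_control(control, demand)
    episodic_memory[context] = control        # bind the current setting to the context
    print(context, round(control, 2))
```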
Influence of the context of administration in the antidepressant-like effects of the psychedelic 5-MeO-DMT
Psychedelics like psilocybin have shown rapid and long-lasting efficacy against depressive and anxiety symptoms. Other psychedelics with shorter half-lives, such as DMT and 5-MeO-DMT, have also shown promising preliminary outcomes in major depression, making them interesting candidates for clinical practice. Despite several promising clinical studies, the influence of the context on therapeutic responses or adverse effects remains poorly documented. To address this, we conducted preclinical studies evaluating the psychopharmacological profile of 5-MeO-DMT in contexts previously validated in mice as either pleasant (positive setting) or aversive (negative setting). Healthy C57BL/6J male mice received a single intraperitoneal (i.p.) injection of 5-MeO-DMT at doses of 0.5, 5, and 10 mg/kg, with assessments at 2 hours, 24 hours, and one week post-administration. In a corticosterone (CORT) mouse model of depression, 5-MeO-DMT was administered in different settings, and behavioral tests mimicking core symptoms of depression and anxiety were conducted. In CORT-exposed mice, an acute dose of 0.5 mg/kg administered in a neutral setting produced antidepressant-like effects at 24 hours, as observed by reduced immobility time in the Tail Suspension Test (TST). In a positive setting, the drug also reduced latency to first immobility and total immobility time in the TST. However, these beneficial effects were negated in a negative setting, where 5-MeO-DMT failed to produce antidepressant-like effects and instead elicited an anxiogenic response in the Elevated Plus Maze (EPM). Our results indicate a strong influence of setting on the psychopharmacological profile of 5-MeO-DMT. Future experiments will examine cortical markers of pre- and post-synaptic density to correlate neuroplasticity changes with the behavioral effects of 5-MeO-DMT in different settings.
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: while DNNs have proven capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, lowpass-filtered, and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task with success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
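For readers unfamiliar with the measure, error consistency is commonly computed as Cohen's kappa on trial-by-trial correctness (cf. Geirhos et al.): how often two observers are correct or wrong on the same trials, beyond what their individual accuracies would predict by chance. The snippet below is a generic sketch of that standard definition, assuming both observers responded to the same trials; the toy response vectors are made up.

```python
# Error consistency: Cohen's kappa on binary correct/incorrect trial outcomes.
import numpy as np

def error_consistency(correct_a, correct_b):
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(correct_a == correct_b)           # observed consistency
    p_a, p_b = correct_a.mean(), correct_b.mean()     # individual accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)         # consistency expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

human = [True, True, False, True, False, True, True, False]
dnn   = [True, False, False, True, False, True, True, True]
print(f"error consistency: {error_consistency(human, dnn):.2f}")
```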
Metabolic-functional coupling of parvalbumin-positive GABAergic interneurons in the injured and epileptic brain
Parvalbumin-positive GABAergic interneurons (PV-INs) provide inhibitory control of excitatory neuron activity, coordinate circuit function, and regulate behavior and cognition. PV-INs are uniquely susceptible to loss and dysfunction in traumatic brain injury (TBI) and epilepsy but the cause of this susceptibility is unknown. One hypothesis is that PV-INs use specialized metabolic systems to support their high-frequency action potential firing and that metabolic stress disrupts these systems, leading to their dysfunction and loss. Metabolism-based therapies can restore PV-IN function after injury in preclinical TBI models. Based on these findings, we hypothesize that (1) PV-INs are highly metabolically specialized, (2) these specializations are lost after TBI, and (3) restoring PV-IN metabolic specializations can improve PV-IN function as well as TBI-related outcomes. Using novel single-cell approaches, we can now quantify cell-type-specific metabolism in complex tissues to determine whether PV-IN metabolic dysfunction contributes to the pathophysiology of TBI.
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted as part of PhD work (UNIL) aimed at providing evidence to address the question of potential gender differences in trait anxiety and executive control biases on behavioral efficacy. Non-clinical samples of young adult males and females performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy adults, viewed through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection
In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction to the problems and shortcomings of traditional deception detection tools and why those still prevail in many recent approaches (e.g., in AI-based deception detection). I want to argue for the importance of more fundamental deception research and give some examples of the insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies across a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges that a forensic application of the CIT still faces and how scientific research could assist in overcoming them.
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
The multi-phase plasticity supporting the winner effect
Aggression is an innate behavior across animal species. It is essential for competing for food, defending territory, securing mates, and protecting families and oneself. Since initiating an attack requires no explicit learning, the neural circuit underlying aggression is believed to be genetically and developmentally hardwired. Despite being innate, aggression is highly plastic. It is influenced by a wide variety of experiences, particularly winning and losing previous encounters. Numerous studies have shown that winning leads to an increased tendency to fight in future encounters, whereas losing promotes flight. In this talk, I will present our recent findings regarding the neural mechanisms underlying the behavioral changes caused by winning.
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Evolution of convulsive therapy from electroconvulsive therapy to Magnetic Seizure Therapy; Interventional Neuropsychiatry
In April, we will host Nolan Williams and Mustafa Husain. Be prepared to embark on a journey from early brain stimulation with ECT to state-of-the-art TMS protocols and magnetic seizure therapy! The talks will be held on Thursday, April 25th at noon ET / 6PM CET. Nolan Williams, MD, is an associate professor of Psychiatry and Behavioral Science at Stanford University. He developed the SAINT protocol, which is the first FDA-cleared non-invasive, rapid-acting neuromodulation treatment for treatment-resistant depression. Mustafa Husain, MD, is an adjunct professor of Psychiatry and Behavioral Sciences at Duke University and a professor of Psychiatry and Neurology at UT Southwestern Medical Center, Dallas. He will tell us about the “Evolution of convulsive therapy from electroconvulsive therapy to Magnetic Seizure Therapy”. As always, we will also get a glimpse of the “Person behind the science”. Please register via talks.stimulatingbrains.org to receive the (free) Zoom link, subscribe to our newsletter, or follow us on Twitter/X for further updates!
Modeling human brain development and disease: the role of primary cilia
Neurodevelopmental disorders (NDDs) impose a global burden, affecting an increasing number of individuals. While some causative genes have been identified, understanding of the human-specific mechanisms involved in these disorders remains limited. Traditional gene-driven approaches for modeling brain diseases have failed to capture the diverse and convergent mechanisms at play. Centrosomes and cilia act as intermediaries between environmental and intrinsic signals, regulating cellular behavior. Mutations or dosage variations disrupting their function have been linked to brain formation deficits, highlighting their importance, yet their precise contributions remain largely unknown. Hence, we aim to investigate whether the centrosome/cilia axis is crucial for brain development and serves as a hub for human-specific mechanisms disrupted in NDDs. To this end, we first demonstrated species-specific and cell-type-specific differences in cilia gene expression during mouse and human corticogenesis. Then, to dissect their role, we induced ectopic overexpression or silencing of cilia genes in the developing mouse cortex or in human brain organoids. Our findings suggest that manipulating cilia genes alters both the number and the position of NPCs and neurons in the developing cortex. Interestingly, primary cilium morphology is disrupted, as we find changes in cilia length, orientation, and number that lead to disruption of the apical belt and altered delamination profiles during development. Our results give insight into the role of primary cilia in human cortical development and address fundamental questions regarding the diversity and convergence of gene function in development and disease manifestation. This work has the potential to uncover novel pharmacological targets, facilitate personalized medicine, and improve the lives of individuals affected by NDDs through targeted cilia-based therapies.
Mitochondrial diversity in the mouse and human brain
The basis of the mind, of mental states, and of complex behaviors is the flow of energy through microscopic and macroscopic brain structures. Energy flow through brain circuits is powered by thousands of mitochondria populating the inside of every neuronal, glial, and other nucleated cell across the brain-body unit. This seminar will cover emerging approaches to study the mind-mitochondria connection and present early attempts to map the distribution and diversity of mitochondria across brain tissue. In rodents, I will present convergent multimodal evidence, anchored in enzyme activities, gene expression, and animal behavior, that distinct behaviorally-relevant mitochondrial phenotypes exist across large-scale mouse brain networks. Extending these findings to the human brain, I will present a developing systematic biochemical and molecular map of mitochondrial variation across cortical and subcortical brain structures, representing a foundation for understanding the origin of the complex energy patterns that give rise to the human mind.
Neural codes for natural behaviors in the hippocampus of flying bats
How are the epileptogenesis clocks ticking?
The process of epileptogenesis is associated with large-scale changes in gene expression, which contribute to the remodelling of brain networks, permanently altering excitability. About 80% of protein-coding genes are under the influence of circadian rhythms. These are 24-hour endogenous rhythms that determine a large number of daily changes in physiology and behavior in our bodies. In the brain, the master clock regulates a large number of pathways that are important during epileptogenesis and established epilepsy, such as neurotransmission, synaptic homeostasis, inflammation, and the blood-brain barrier, among others. In-depth mapping of the molecular basis of circadian timing in the brain is key for a complete understanding of the cellular and molecular events connecting genes to phenotypes.
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed, irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts into visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanisms of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
The quest for brain identification
In the 17th century, the physician Marcello Malpighi observed the existence of distinctive patterns of ridges and sweat glands on fingertips. This was a major breakthrough, and it originated a long and continuing quest for ways to uniquely identify individuals based on fingerprints, a technique still widely used today. It is only in the past few years that technologies and methodologies have achieved high-quality measures of an individual’s brain, to the extent that personality traits and behavior can be characterized. The concept of “fingerprints of the brain” is very recent and has been boosted by a seminal publication by Finn et al. in 2015. They were among the first to show that an individual’s functional brain connectivity profile is both unique and reliable, similarly to a fingerprint, and that it is possible to identify an individual among a large group of subjects solely on the basis of her or his connectivity profile. Yet the discovery of brain fingerprints opened up a plethora of new questions. In particular, what exactly is the information encoded in brain connectivity patterns that ultimately leads to correctly differentiating someone’s connectome from anybody else’s? In other words, what makes our brains unique? In this talk I am going to partially address these open questions while keeping a personal viewpoint on the subject. I will outline the main findings, discuss potential issues, and propose future directions in the quest for identifiability of human brain networks.
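The identification procedure popularized by Finn et al. (2015) is conceptually simple: vectorize each subject's functional connectivity matrix and declare an identification correct when a subject's profile from one session correlates most strongly with their own profile from another session. Below is a generic sketch of that logic on synthetic data, not the original study's code; subject counts, edge counts, and noise levels are arbitrary.

```python
# Connectome fingerprinting: identify subjects by maximal correlation
# between session-1 and session-2 connectivity profiles.
import numpy as np

def identification_accuracy(session1, session2):
    """session1, session2: (n_subjects, n_edges) arrays of vectorized connectivity."""
    z1 = (session1 - session1.mean(1, keepdims=True)) / session1.std(1, keepdims=True)
    z2 = (session2 - session2.mean(1, keepdims=True)) / session2.std(1, keepdims=True)
    corr = z1 @ z2.T / session1.shape[1]          # subject-by-subject correlation matrix
    predicted = corr.argmax(axis=1)               # best-matching session-2 subject
    return np.mean(predicted == np.arange(session1.shape[0]))

rng = np.random.default_rng(1)
base = rng.normal(size=(30, 500))                 # 30 subjects, 500 connectivity edges
s1 = base + 0.5 * rng.normal(size=base.shape)     # session-to-session noise
s2 = base + 0.5 * rng.normal(size=base.shape)
print(f"identification accuracy: {identification_accuracy(s1, s2):.2f}")
```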
Brain-heart interactions at the edges of consciousness
Various clinical cases have provided evidence linking cardiovascular, neurological, and psychiatric disorders to changes in brain-heart interactions. Our recent experimental evidence in patients with disorders of consciousness revealed that observing brain-heart interactions helps to detect residual consciousness, even in patients with no behavioral signs of consciousness. These findings support hypotheses suggesting that visceral activity is involved in the neurobiology of consciousness and add to the existing evidence in healthy participants, in whom neural responses to heartbeats reveal perceptual and self-consciousness. Furthermore, the presence of non-linear, complex, and bidirectional communication between brain and heartbeat dynamics can provide further insights into the physiological state of the patient following severe brain injury. These developments in methodologies for analyzing brain-heart interactions open new avenues for understanding neural functioning at a large-scale level, uncovering that peripheral bodily activity can influence brain homeostatic processes, cognition, and behavior.
Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
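As a rough intuition for what "progressive decorrelation" and "orthogonalization" mean at the population level, one can compare position-matched population vectors across the two track variants and watch their correlation fall with learning. The snippet below is only an illustrative toy of that comparison on synthetic activity maps, not the study's analysis pipeline; shapes, noise levels, and the early/late contrast are invented.

```python
# Mean correlation between position-binned population vectors on two tracks;
# orthogonalized (decorrelated) maps yield values near zero.
import numpy as np

def mean_cross_track_correlation(map_a, map_b):
    """map_a, map_b: (n_position_bins, n_neurons) mean activity maps."""
    corrs = [np.corrcoef(a, b)[0, 1] for a, b in zip(map_a, map_b)]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 300))                      # 20 position bins, 300 neurons
early_a = shared + 0.2 * rng.normal(size=shared.shape)   # early: near-identical maps
early_b = shared + 0.2 * rng.normal(size=shared.shape)
late_a = rng.normal(size=shared.shape)                   # late: toy stand-in for
late_b = rng.normal(size=shared.shape)                   # orthogonalized maps
print("early:", round(mean_cross_track_correlation(early_a, early_b), 2))
print("late: ", round(mean_cross_track_correlation(late_a, late_b), 2))
```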
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating different areas on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Characterising Representations of Goal Obstructiveness and Uncertainty Across Behavior, Physiology, and Brain Activity Through a Video Game Paradigm
The nature of emotions and their neural underpinnings remain debated. Appraisal theories such as the component process model propose that the perception and evaluation of events (appraisal) is the key to eliciting the range of emotions we experience. Here we study whether the framework of appraisal theories provides a clearer account for the differentiation of emotional episodes and their functional organisation in the brain. We developed a stealth game to manipulate appraisals in a systematic yet immersive way. The interactive nature of video games heightens self-relevance through the experience of goal-directed action or reaction, evoking strong emotions. We show that our manipulations led to changes in behaviour, physiology and brain activations.
Sensory Consequences of Visual Actions
We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection to the motor behavior that shapes its fundamental processes.
Connectome-based models of neurodegenerative disease
Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
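A common formalization of this idea is a network diffusion model, in which pathology spreads along white-matter connections at a rate proportional to connection strength. The code below is a minimal, generic sketch of that class of model (graph-Laplacian diffusion, in the spirit of Raj and colleagues), not any specific published implementation; the connectome, rate constant, and time step are toy values.

```python
# Network diffusion on a connectome: dx/dt = -beta * L @ x, integrated with forward Euler.
import numpy as np

def simulate_spread(connectome, seed_region, beta=0.1, dt=0.1, n_steps=200):
    """connectome: symmetric (n_regions, n_regions) weight matrix."""
    degree = np.diag(connectome.sum(axis=1))
    laplacian = degree - connectome
    x = np.zeros(connectome.shape[0])
    x[seed_region] = 1.0                       # pathology seeded in one region
    trajectory = [x.copy()]
    for _ in range(n_steps):
        x = x - dt * beta * laplacian @ x      # diffusion step along connections
        trajectory.append(x.copy())
    return np.array(trajectory)

# Toy 4-region chain connectome
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(simulate_spread(W, seed_region=0)[-1].round(3))   # pathology has spread down the chain
```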
Modeling the Navigational Circuitry of the Fly
Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, for how these functions are accomplished in the fly brain.
Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex
Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or whether it is read out from a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well-suited to address these questions as they have early access to sensory information and provide minimal processing to it: if temporal information is found in these regions, it is likely to be generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern discrimination task while using two-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We have found significant changes in network dynamics as mice learn the task, progressing from naive to intermediate to expert performance. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence of other temporal encoding models, such as oscillatory activity, we did not find that they accounted for increased performance; rather, they were correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajandran and Schupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
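The key quantity in this approach, instantaneous frequency, is typically obtained from the analytic signal of a narrow-band component: the derivative of its unwrapped phase. The snippet below is a generic demonstration of that step on a synthetic component with a tempo change, not the ERFA pipeline itself; the sampling rate, frequencies, and perturbation are invented for illustration.

```python
# Instantaneous frequency of a narrow-band component via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(signal, fs):
    """signal: narrow-band component (1-D array); fs: sampling rate in Hz."""
    phase = np.unwrap(np.angle(hilbert(signal)))
    return np.diff(phase) * fs / (2 * np.pi)         # Hz, length len(signal) - 1

fs = 500.0
t = np.arange(0, 4, 1 / fs)
freq = np.where(t < 2, 2.0, 2.2)                     # "tempo change" at t = 2 s
component = np.sin(2 * np.pi * np.cumsum(freq) / fs)
inst_f = instantaneous_frequency(component, fs)
print(round(float(inst_f[500]), 2), round(float(inst_f[1500]), 2))   # ~2.0 Hz before, ~2.2 Hz after
```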
Brain-wide manifold-organized hierarchical encoding of behaviors in C. elegans
Bernstein Conference 2024
The cost of behavioral flexibility: a modeling study of reversal learning using a spiking neural network
Bernstein Conference 2024
Deep Brain Stimulation in the Globus Pallidus internus Promotes Habitual Behavior by Modulating Cortico-Thalamic Shortcuts and Basal Ganglia Plasticity
Bernstein Conference 2024
Evaluating Memory Behavior in Continual Learning using the Posterior in a Binary Bayesian Network
Bernstein Conference 2024
Exploring behavioral correlations with neuron activity through synaptic plasticity
Bernstein Conference 2024
Human-like Behavior and Neural Representations Emerge in a Goal-driven Model of Overt Visual Search for Natural Objects
Bernstein Conference 2024
Joint coding of stimulus and behavior by flexible adjustments of sensory tuning in primary visual cortex
Bernstein Conference 2024
Physiological Implementation of Synaptic Plasticity at Behavioral Timescales Supports Computational Properties of Place Cell Formation
Bernstein Conference 2024
Arithmetic value representation for hierarchical behavior composition
COSYNE 2022
Behavior measures are predicted by how information is encoded in an individual's brain
COSYNE 2022
A cable-driven robotic eye for the study of oculomotor behaviors
COSYNE 2022
GABAA receptors modulate anxiety-like behavior through the central amygdala area in rats with higher physical activity
FENS Forum 2024
A control space for muscle state-dependent cortical influence during naturalistic motor behavior
COSYNE 2022
Deep neural network modeling of a visually-guided social behavior
COSYNE 2022
Defining the role of a locus coeruleus-orbitofrontal cortex circuit in behavioral flexibility
COSYNE 2022
Differential encoding of innate and learned behaviors in the sensorimotor striatum
COSYNE 2022
Dynamical systems analysis reveals a novel hypothalamic encoding of state in nodes controlling social behavior
COSYNE 2022
Impact of mitofusin 2 on accumbens-associated behaviors and underlying neurobiological mechanisms
FENS Forum 2024
Emergent behavior and neural dynamics in artificial agents tracking turbulent plumes
COSYNE 2022
Hippocampal representations during natural social behaviors in a bat colony
COSYNE 2022
Identifying changes in behavioral strategy from neural responses during evidence accumulation
COSYNE 2022
Input-specific regulation of locus coeruleus activity for mouse maternal behavior
COSYNE 2022
Integration of infant sensory cues and internal states for maternal motivated behaviors
COSYNE 2022
Inter-areal patterned microstimulation selectively drives PFC activity and behavior in a memory task
COSYNE 2022
Interpretable behavioral features have conserved neural representations across mice
COSYNE 2022
A latent model of calcium activity outperforms alternatives at removing behavioral artifacts in two-channel calcium imaging
COSYNE 2022
Linking neural dynamics across macaque V4, IT, and PFC to trial-by-trial object recognition behavior
COSYNE 2022
Optimal reward-rate in multi-task environments, and its consequences for behavior
COSYNE 2022
Neural Representations of Opponent Strategy Support the Adaptive Behavior of Recurrent Actor-Critics in a Competitive Game
COSYNE 2022
Behavioral and Neuronal Correlates of Exploration and Goal-Directed Navigation
Bernstein Conference 2024