Sensory
sensorimotor control, movement, touch, EEG
Traditionally, touch is associated with exteroception and, unlike vision, is rarely considered a relevant sensory cue for controlling movements in space. We developed a technique to isolate and measure the contribution of touch to the control of sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback, the latter creating a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when the hand is in contact with a surface, we predicted poorer performance when participants traced with their bare finger than when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller currents in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize the involvement of touch in controlling movements over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
Dr Agostina Palmigiano
The Gatsby Unit invites applications for a postdoctoral training fellowship under Dr Agostina Palmigiano, focused on developing theoretical approaches to investigate the mechanisms underlying sensory, motor, or cognitive computations. You will be responsible for the primary execution of the project (with opportunities for co-supervision of students), presentation of results at conferences and seminars, and publication in suitable venues. This post is initially funded for 2 years, with the possibility of a one-year extension at the end of that period. For detailed information on the role and how to apply, please visit www.ucl.ac.uk/gatsby/vacancies under 'Research Fellow (Palmigiano group)'. Agostina will also be at COSYNE 2024 between 29 February and 5 March. If interested, please get in touch to set up an informal chat with her!
Prof Li Zhaoping
Postdoctoral position in Human Psychophysics for understanding vision (m/f/d) – (TVöD Bund E13, 100%)

The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and at the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed at understanding visual attentional and perceptual processes using fMRI/MRI, TMS, EEG/MEG, and other relevant methodologies. The framework and motivation of the projects can be found at: https://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. fMRI/MRI, TMS and/or EEG/MEG methodologies can be used in combination with eye tracking and other related methods as necessary. The postdoc will work closely with the principal investigator and other members of Zhaoping's team as needed.

Responsibilities:
• Conduct and participate in research projects, including lab and equipment set-up, data collection, data analysis, writing reports and papers, and presenting at scientific conferences
• Participate in routine laboratory operations, such as planning and preparing experiments, lab maintenance, and lab procedures
• Coordinate with the PI and other team members on strategy and project planning, and assist in supervising student projects or teaching for university courses in our field

Who we are: We use a multidisciplinary approach to investigate sensory and sensory-motor transforms in the brain (www.lizhaoping.org). Our approaches consist of both theoretical and experimental techniques, including human psychophysics, fMRI imaging, EEG, electrophysiology, and computational modelling.
One part of our group is located in the University, in the Centre for Integrative Neurosciences (CIN), and the other part is in the Max Planck Institute for Biological Cybernetics as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn other skills in our multidisciplinary group and benefit from interactions with our colleagues in the university as well as internationally. This job opening is for the CIN or the MPI working group. The position (salary level TVöD-Bund E13, 100%) is for a duration of two years. Extension or a permanent contract after two years is possible depending on circumstances. We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by Nov. 30, 2023. We look forward to receiving your application, which should include:
(1) a cover letter, including a statement of roughly when you would like to start the position;
(2) a motivation statement;
(3) a CV;
(4) names and contact details of three people for references;
(5) if available, transcripts from your past and current education listing the courses taken and their grades;
(6) if available, copies of your degree certificates;
(7) optionally, a pdf file of your best publication(s), or other documents and information that you think could strengthen your application.
Please use pdf files for these documents (you may combine them into a single pdf file) and send them to jobs.li@tuebingen.mpg.de, where informal inquiries can also be addressed. Please note that applications without complete information in (1)-(4) will not be considered, unless the cover letter includes an explanation and/or information about when the needed materials will be supplied.
For further opportunities in our group, please visit www.lizhaoping.org/jobs.html
Prof. Li Zhaoping
The Department for Sensory and Sensorimotor Systems of the Max Planck Institute for Biological Cybernetics studies the processing of sensory information (visual, auditory, tactile, olfactory) in the brain and the use of this information for directing body movements and making cognitive decisions. The research is highly interdisciplinary and uses theoretical and experimental approaches in humans. Our methodologies include visual psychophysics, eye tracking, fMRI, EEG, and TMS in humans. For more information, please visit the department website: www.lizhaoping.org

We are currently looking for a Research Operation Assistant with Scientific Experience (m/f/d), 100%, to join us at the earliest opportunity.

The position: You will provide hardware, software, and managerial support for a diverse set of brain and neuroscience research activities. This includes:
• Computer and IT support for Windows and Linux systems
• Programming and debugging of computer code, especially when setting up new equipment or new experimental platforms
• Technical, administrative, and operational support in research data taking and analysis (the position holder should be able to quickly learn the data-taking processes used in the labs)
• Responsibility for purchasing laboratory equipment, including putting purchases out to tender, evaluating quotes, and making the final decision
• Hardware repairs and troubleshooting, including consultation with manufacturers, suppliers, and scientific staff
• Equipment set-up, inventory, and maintenance
• Supervising and training new equipment users
• Setting up, updating, and managing the database of knowledge and data from research projects, personnel, and activities, to ensure smooth handover from one team member to another
Our department is interdisciplinary, with research activities including human visual psychophysics, eye tracking, fMRI, EEG, and TMS.
We are looking for a person with a broad technical knowledge base, who loves working in a scientific environment and who is curious, open-minded, and able to adapt, learn new skills, and solve new problems quickly. The set of skills that the individual should either already have or be able to quickly learn includes: MATLAB/Psychtoolbox, Python/OpenCV, Julia/OpenGL, Java, graphics and display technologies, EEG and similar equipment, eye tracking, optics, electronics/controllers/sensors, Arduino/Raspberry Pi, etc.

We offer: Highly interesting, challenging, and varied tasks; you will work closely and collaboratively with scientists, students, programmers, administrative staff, and central IT and mechanical/electronic workshop support to help achieve the scientific goals of the department. A dedicated team awaits you in an international environment with regular opportunities for further education and training. The salary is paid in accordance with the collective agreement for the public sector (TVöD Bund), based on qualification and experience, and includes social security benefits and additional fringe benefits in accordance with public service provisions. This position is initially limited to two years, with the possibility of extensions and a permanent contract. The Max Planck Society seeks to employ more disabled people and strongly encourages them to apply. Furthermore, we actively support the compatibility of work and family life. The Max Planck Society also seeks to increase the number of women in leadership positions and strongly encourages qualified women to apply. The Max Planck Society strives for gender equality and diversity.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by April 3rd, 2023.
We look forward to receiving your application, which should include a cover letter, your curriculum vitae, relevant certificates, and the names and contact details of three references, sent electronically by e-mail to jobs.li@tuebingen.mpg.de, where informal inquiries can also be addressed. Please note that incomplete applications will not be considered. For further opportunities in our group, please visit http://www.lizhaoping.org/jobs.html.
Prof. Li Zhaoping
The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and at the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed at understanding visual attentional and perceptual processes using fMRI/MRI, TMS and/or EEG methodologies. The framework and motivation of the projects can be found at: https://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html. The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. fMRI/MRI, TMS and/or EEG methodologies can be used in combination with eye tracking and other related methods as necessary. The postdoc will work closely with the principal investigator and other members of Zhaoping's team as needed.

Responsibilities:
• Conduct and participate in research projects, including lab and equipment set-up, data collection, data analysis, writing reports and papers, and presenting at scientific conferences.
• Participate in routine laboratory operations, such as planning and preparing experiments, lab maintenance, and lab procedures.
• Coordinate with the PI and other team members on strategy and project planning, and assist in supervising student projects or teaching for university courses in our field.

Who we are: We use a multidisciplinary approach to investigate sensory and sensory-motor transforms in the brain (www.lizhaoping.org). Our approaches consist of both theoretical and experimental techniques, including human psychophysics, fMRI imaging, electrophysiology, and computational modelling.
One part of our group is located in the University, in the Centre for Integrative Neurosciences (CIN), and the other part is in the Max Planck Institute for Biological Cybernetics as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn other skills in our multidisciplinary group and benefit from interactions with our colleagues in the university as well as internationally. This job opening is for the CIN or the MPI working group. The position (salary level TVöD-Bund E13, 100%) is for a duration of two years and is renewable for additional years. We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by November 30th, 2022. We look forward to receiving your application, which should include:
(1) a cover letter, including a statement of roughly when you would like to start the position;
(2) a motivation statement;
(3) a CV;
(4) names and contact details of three people for references;
(5) if available, transcripts from your past and current education listing the courses taken and their grades;
(6) if available, copies of your degree certificates;
(7) optionally, a pdf file of your best publication(s), or other documents and information that you think could strengthen your application.
Please use pdf files for these documents (you may combine them into a single pdf file) and send them to jobs.li@tuebingen.mpg.de, where informal inquiries can also be addressed. Please note that applications without complete information in (1)-(4) will not be considered, unless the cover letter includes an explanation and/or information about when the needed materials will be supplied. For further opportunities in our group, please visit https://www.lizhaoping.org/jobs.html
Prof. Li Zhaoping
The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and at the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed at understanding visual attentional and perceptual processes using fMRI/MRI, TMS and/or EEG methodologies. The framework and motivation of the projects can be found at https://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html. The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. fMRI/MRI, TMS and/or EEG methodologies can be used in combination with eye tracking and other related methods as necessary.

Responsibilities:
• Conduct and participate in research projects, including lab and equipment set-up, data collection, data analysis, writing reports and papers, and presenting at scientific conferences.
• Participate in routine laboratory operations, such as planning and preparing experiments, lab maintenance, and lab procedures.
• Participate in teaching assistance duties for university courses in our field.

Who we are: We use a multidisciplinary approach to investigate sensory and sensory-motor transforms in the brain (www.lizhaoping.org). Our approaches consist of both theoretical and experimental techniques, including human psychophysics, fMRI imaging, EEG, electrophysiology, and computational modelling. One part of our group is located in the University, in the Centre for Integrative Neurosciences (CIN), and the other part is in the Max Planck Institute for Biological Cybernetics as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn skills from other members of the group and benefit from multidisciplinary interactions, including with our collaborators locally and internationally. The PhD contract (TVöD-Bund E13, 65%) is for a duration of 3 years.
We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by November 30th, 2022. We look forward to receiving your application, which should include:
(1) a cover letter, including a statement of roughly when you would like to start the position;
(2) a motivation statement;
(3) a CV;
(4) names and contact details of three people for references;
(5) transcripts from your past and current education listing the courses taken and their grades;
(6) if available, copies of your degree certificates;
(7) optionally, a pdf file of your best publication(s), or other documents and information that you think could strengthen your application.
Please use pdf files for these documents (you may combine them into a single pdf file) and send them to jobs.li@tuebingen.mpg.de, where informal inquiries can also be addressed. Please note that applications without complete information in (1)-(5) will not be considered, unless the cover letter includes an explanation and/or information about when the needed materials will be supplied. For further opportunities in our group, please visit https://www.lizhaoping.org/jobs.html
Prof. Li Zhaoping
The Department for Sensory and Sensorimotor Systems of the Max Planck Institute for Biological Cybernetics studies the processing of sensory information (visual, auditory, tactile, olfactory) in the brain and the use of this information for directing body movements and making cognitive decisions. The research is highly interdisciplinary and uses theoretical and experimental approaches in humans. Our methodologies include visual psychophysics, eye tracking, fMRI, EEG, and TMS in humans. For more information, please visit the department website: www.lizhaoping.org

We are currently looking for a Lab Mechatronics / Programmer / Research and Admin Assistant (m/f/d), 100%, to join us at the earliest opportunity.

The position: You will provide hardware, software, and managerial support for a diverse set of brain and neuroscience research activities. This includes:
• Computer and IT support for Windows and Linux systems
• Programming and debugging of computer code, especially when setting up new equipment or new experimental platforms
• Technical, administrative, and operational support in the research data-taking process (the position holder should be able to quickly learn the data-taking processes used in the labs)
• Hardware repairs and troubleshooting
• Equipment inventory and maintenance
• Supervising and training new equipment users
• Setting up, updating, and managing the database of knowledge and data from research projects, personnel, and activities
Our department is interdisciplinary, with research activities including human visual psychophysics, eye tracking, fMRI, EEG, and TMS. We are looking for a person with a broad technical knowledge base, who loves working in a scientific environment and who is curious, open-minded, and able to adapt, learn new skills, and solve new problems quickly.
The set of skills that the individual should either already have or be able to quickly learn includes: MATLAB/Psychtoolbox, Python/OpenCV, Julia/OpenGL, Java, graphics and display technologies, EEG and similar equipment, eye tracking, optics, electronics/controllers/sensors, Arduino/Raspberry Pi, etc.

We offer: Highly interesting, challenging, and varied tasks; you will work closely and collaboratively with scientists, students, programmers, administrative staff, and central IT and mechanical/electronic workshop support to help achieve the scientific goals of the department. A dedicated team awaits you in an international environment with regular opportunities for further education and training. The salary is paid in accordance with the collective agreement for the public sector (TVöD Bund), based on qualification and experience, and includes social security benefits and additional fringe benefits in accordance with public service provisions. This position is initially limited to two years, with the possibility of extensions and a permanent contract. The Max Planck Society seeks to employ more disabled people and strongly encourages them to apply. Furthermore, we actively support the compatibility of work and family life. The Max Planck Society also seeks to increase the number of women in leadership positions and strongly encourages qualified women to apply. The Max Planck Society strives for gender equality and diversity.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by September 30th, 2022. We look forward to receiving your application, which should include a cover letter, your curriculum vitae, relevant certificates, and the names and contact details of three references, sent electronically by e-mail to jobs.li@tuebingen.mpg.de, where informal inquiries can also be addressed. Please note that incomplete applications will not be considered.
For further opportunities in our group, please visit http://www.lizhaoping.org/jobs.html
Prof. Li Zhaoping
Postdoctoral position in Human Psychophysics with TMS and/or EEG (m/f/d) (TVöD-Bund E13, 100%)

The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and at the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed at understanding visual attentional and perceptual processes using TMS and/or EEG methodologies. The framework and motivation of the projects can be found at http://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html. The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. TMS and/or EEG methodologies can be used in combination with fMRI/MRI, eye tracking, and other related methods as necessary. The postdoc will work closely with the principal investigator and other members of Zhaoping's team as needed.

Responsibilities:
• Conduct and participate in research projects, including lab and equipment set-up, data collection, data analysis, writing reports and papers, and presenting at scientific conferences.
• Participate in routine laboratory operations, such as planning and preparing experiments, lab maintenance, and lab procedures.
• Coordinate with the PI and other team members on strategy and project planning, and assist in supervising student projects or teaching for university courses in our field.

Who we are: We use a multidisciplinary approach to investigate sensory and sensory-motor transforms in the brain (www.lizhaoping.org). Our approaches consist of both theoretical and experimental techniques, including human psychophysics, fMRI imaging, electrophysiology, and computational modelling.
One part of our group is located in the University, in the Centre for Integrative Neurosciences (CIN), and the other part is in the Max Planck Institute for Biological Cybernetics as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn other skills in our multidisciplinary group and benefit from interactions with our colleagues in the university as well as internationally. This job opening is for the CIN or the MPI working group. The position (salary level TVöD-Bund E13, 100%) is for a duration of two years and is renewable for additional years. We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by June 5th, 2022. We look forward to receiving your application (a cover letter, your curriculum vitae, relevant certificates, and the names and contact details of three references) electronically through our job portal: https://jobs.tue.mpg.de/jobs/169. Informal inquiries can be addressed to jobs.li@tuebingen.mpg.de. Please note that incomplete applications will not be considered.
Prof. Li Zhaoping
The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and at the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed at understanding visual attentional and perceptual processes using TMS and/or EEG methodologies. The framework and motivation of the projects can be found at http://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html. The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. TMS and/or EEG methodologies can be used in combination with fMRI/MRI, eye tracking, and other related methods as necessary.

Responsibilities:
• Conduct and participate in research projects, including lab and equipment set-up, data collection, data analysis, writing reports and papers, and presenting at scientific conferences.
• Participate in routine laboratory operations, such as planning and preparing experiments, lab maintenance, and lab procedures.
• Participate in teaching assistance duties for university courses in our field.

Who we are: We use a multidisciplinary approach to investigate sensory and sensory-motor transforms in the brain (www.lizhaoping.org). Our approaches consist of both theoretical and experimental techniques, including human psychophysics, fMRI imaging, electrophysiology, and computational modelling. One part of our group is located in the University, in the Centre for Integrative Neurosciences (CIN), and the other part is in the Max Planck Institute for Biological Cybernetics as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn skills from other members of the group and benefit from multidisciplinary interactions, including with our collaborators locally and internationally. The PhD contract (TVöD-Bund E13, 65%) is for a duration of 3 years.
We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.

Your application: The position is available immediately and will be open until filled. Preference will be given to applications received by June 5th, 2022. We look forward to receiving your application (a cover letter, your curriculum vitae, relevant certificates, and the names and contact details of three references) electronically through our job portal: https://jobs.tue.mpg.de/jobs/170. Informal inquiries can be addressed to jobs.li@tuebingen.mpg.de. Please note that incomplete applications will not be considered.
Top-down control of neocortical threat memory
Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for processing experience-dependent top-down threat information during perception, one of the most fundamentally important forms of sensation.
From Spiking Predictive Coding to Learning Abstract Object Representation
In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which transmit only prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains: are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to "see"? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation, owing both to the plastic repurposing of visual cortex by audition and somatosensation during blindness, and to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, and behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still-viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Multisensory perception in the metaverse
Multisensory computations underlying flavor perception and food choice
Structural & Functional Neuroplasticity in Children with Hemiplegia
About 30% of children with cerebral palsy have congenital hemiplegia, resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual coordination. Congenital hemiplegia has a profound effect on each child's life and, thus, is of great importance to public health. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present recent research evidence from my team on understanding neuroplasticity in children with congenital hemiplegia using a multimodal neuroimaging approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data regarding improvements in upper-extremity motor and sensory function as a result of rehabilitation with a robotic system that involves active participation of the child in a video-game setup. Our research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Neural architectures: what are they good for anyway?
The brain has a highly complex structure in terms of cell types and wiring between different regions. What is it for, if anything? I'll start this talk by asking what an answer to this question might even look like, given that we can't run an alternative universe where our brains are structured differently. (Preview: we can do this with models!) I'll then talk about some of our work in two areas: (1) does the modular structure of the brain contribute to specialisation of function? (2) how do different cell types and architectures contribute to multimodal sensory processing?
Where are you Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics
Dimensionality reduction beyond neural subspaces
Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to the need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we must make sense of a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that its organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is laid out topographically on the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
Mouse Motor Cortex Circuits and Roles in Oromanual Behavior
I’m interested in structure-function relationships in neural circuits and behavior, with a focus on motor and somatosensory areas of the mouse’s cortex involved in controlling forelimb movements. In one line of investigation, we take a bottom-up, cellularly oriented approach and use optogenetics, electrophysiology, and related slice-based methods to dissect cell-type-specific circuits of corticospinal and other neurons in forelimb motor cortex. In another, we take a top-down ethologically oriented approach and analyze the kinematics and cortical correlates of “oromanual” dexterity as mice handle food. I'll discuss recent progress on both fronts.
Rethinking Attention: Dynamic Prioritization
Decades of research on the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly within our understanding of how attention is allocated in a top-down, bottom-up, or history-based fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the “attentional platypuses,” as they allude to the observation that within biological taxonomies the platypus fits into neither the mammal nor the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I will list a few instances of the “attentional platypuses” and then offer a new approach, Dynamically Weighted Prioritization, stipulating that multiple factors impinge on the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains and guides attention allocation. I propose that this new approach be considered a supplement to existing models of attention, especially those that emphasize categorical organizations.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Sensory tuning in neuronal movement commands
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster, including the connectivity at single-neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole-body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
Roles of inhibition in stabilizing and shaping the response of cortical networks
Inhibition has long been thought to stabilize the activity of cortical networks at low rates and to significantly shape their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition-stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons in the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us in which conditions SST cells are required for network stabilization.
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical editing, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited so that its perceived duration is time-compressed, time-expanded, or real-time. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, using cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' perception of time. Here, we investigated how such time-manipulation techniques affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to give verbal estimates of the presented scenes' durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated compared to that of compressed and real-time scenes, as was the duration of real-time scenes compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found that scene type and editing technique interacted on time estimates as a function of the characteristics and action of the presented scene. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration.
Our findings are discussed by considering current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.
Executive functions in the brain of deaf individuals – sensory and language effects
Executive functions are cognitive processes that allow us to plan, monitor and execute our goals. Using fMRI, we investigated how early deafness influences crossmodal plasticity and the organisation of executive functions in the adult human brain. Results from a range of visual executive function tasks (working memory, task switching, planning, inhibition) show that deaf individuals specifically recruit superior temporal “auditory” regions during task switching. Neural activity in auditory regions predicts behavioural performance during task switching in deaf individuals, highlighting the functional relevance of the observed cortical reorganisation. Furthermore, language grammatical skills were correlated with the level of activation and functional connectivity of fronto-parietal networks. Together, these findings show the interplay between sensory and language experience in the organisation of executive processing in the brain.
Ganzflicker: Using light-induced hallucinations to predict risk factors of psychosis
Rhythmic flashing light, or “Ganzflicker”, can elicit altered states of consciousness and hallucinations, bringing your mind’s eye out into the real world. What do you experience if you have a super mind’s eye, or none at all? In this talk, I will discuss how Ganzflicker has been used to simulate psychedelic experiences, how it can help us predict symptoms of psychosis, and even tap into the neural basis of hallucinations.
Predictive processing: a circuit approach to psychosis
Predictive processing is a computational framework that aims to explain how the brain processes sensory information by making predictions about the environment and minimizing prediction errors. It can also be used to explain some of the key symptoms of psychotic disorders such as schizophrenia. In my talk, I will provide an overview of our progress in this endeavor.
The Role of Spatial and Contextual Relations of Real-World Objects in Interval Timing
In the real world, object arrangement follows a number of rules. Some of these rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violations of semantic rules influence interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments on whether time dilation would be obtained in the presence of any type of scene violation and on the role of attention in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimates were driven by attentional accounts by utilizing a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, their effect on timing appears sensitive to attentional manipulations such as target contrast.
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which relates to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Measures and models of multisensory integration in reaction times
First, we propose a new measure of multisensory integration (MI) for reaction times that takes the entire RT distribution into account. Second, we present recent developments in time-window-of-integration (TWIN) modeling, including a new proposal for the sound-induced flash illusion (SIFI).
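The abstract does not spell out the new measure, but distribution-level tests of multisensory integration are conventionally benchmarked against Miller's race-model inequality, which compares the multisensory RT distribution to the sum of the unisensory ones. A minimal sketch of that reference point (function names are illustrative, not from the talk):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a sample of reaction times, evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Pointwise amount by which the audiovisual RT CDF exceeds Miller's
    race-model bound, min(1, F_A(t) + F_V(t)). Positive values indicate
    faster multisensory responses than any race of unisensory channels
    allows, i.e., evidence of integration."""
    bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0.0, 1.0)
    return ecdf(rt_av, t_grid) - bound
```

With toy data where every audiovisual RT is faster than every unisensory RT, the violation at an intermediate time point is maximal (1.0); real data are evaluated over a grid of quantiles.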
Bayesian expectation in the perception of the timing of stimulus sequences
In this virtual journal club, Dr Di Luca will present findings from a series of psychophysical investigations in which he measured sensitivity and bias in the perception of the timing of stimuli. He will show how improved detection with longer sequences and biases in reporting isochrony can be accounted for by optimal statistical predictions. Among his findings was also that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted to appear more regular. This distortion depends on whether the context in which these sequences are presented is also regular. Dr Di Luca will present a Bayesian model for the combination of dynamically updated expectations, in the form of a priori probabilities, with incoming sensory information. These findings contribute to our understanding of how the brain processes temporal information to shape perceptual experiences.
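As a toy illustration of the kind of prior–likelihood combination described (the talk's actual model and parameters are not given here), combining a Gaussian expectation about a stimulus's timing with a noisy sensory measurement pulls the percept toward the expected, regular time:

```python
def fuse_gaussian(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance when a Gaussian prior (the dynamically
    updated expectation about stimulus timing) is combined with a Gaussian
    sensory measurement -- standard Bayesian cue combination."""
    w = prior_var / (prior_var + obs_var)       # weight given to the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Illustrative numbers: a tone expected at 500 ms (variance 100) but sensed
# at 530 ms (variance 300) is perceived near 507.5 ms -- pulled toward the
# regular, expected time, as in the reported isochrony distortion.
```

With these made-up values, `fuse_gaussian(500, 100, 530, 300)` returns `(507.5, 75.0)`; a less regular context would correspond to a wider prior and hence a weaker pull.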
Sensory Consequences of Visual Actions
We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection with the motor behavior that shapes its fundamental processes.
Multisensory perception, learning, and memory
Modeling the Navigational Circuitry of the Fly
Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, of how these functions are accomplished in the fly brain.
Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex
Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or is a readout of a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well suited to address these questions, as they have early access to sensory information and apply minimal processing to it: if temporal information is found in these regions, it is likely to be generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We have found significant changes in network dynamics as mice learn the task from naive to intermediate to expert levels. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence for other temporal encoding models, such as oscillatory activity, we did not find that they accounted for increased performance; rather, they were correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.
Movements and engagement during decision-making
When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially spontaneously expressed "uninstructed movements". These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely, testing whether their magnitude or timing was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or choice) and task-independent movement (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats. This hinted that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.
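The GLM-HMM described above pairs a hidden Markov chain over discrete engagement states with Bernoulli observations of trial outcome. As a rough illustration of just the filtering step, not the authors' pipeline, here is a minimal two-state sketch in which the transition matrix, initial distribution, and per-state success probabilities are invented placeholders:

```python
import numpy as np

def filter_states(obs, pi, A, p_emit):
    """Filtered (forward) state probabilities P(state_t | obs_1..t)
    for a K-state HMM with Bernoulli emissions.

    obs    : sequence of 0/1 observations (e.g., incorrect/correct trials)
    pi     : initial state distribution, shape (K,)
    A      : state transition matrix, shape (K, K), rows sum to 1
    p_emit : per-state probability of observing a 1, shape (K,)
    """
    T = len(obs)
    alpha = np.zeros((T, len(pi)))
    emit = lambda o: p_emit if o == 1 else 1.0 - p_emit
    alpha[0] = pi * emit(obs[0])
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        # propagate beliefs through the transition matrix,
        # then reweight by the likelihood of the new observation
        alpha[t] = (alpha[t - 1] @ A) * emit(obs[t])
        alpha[t] /= alpha[t].sum()
    return alpha
```

With sticky transitions and one state tuned to high accuracy, a run of correct trials drives the posterior toward the "engaged" state; a full GLM-HMM would additionally condition each emission probability on trial covariates through a generalized linear model.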
Making Sense of Our Senses: Multisensory Processes across the Human Lifespan
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
Multisensory integration in peripersonal space (PPS) for action, perception and consciousness
Predictive processing in older adults: How does it shape perception and sensorimotor control?
Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception
To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one's perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. 
Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures. The experiments herein include: a motor timing task comparing the effects of movement vs. non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception in both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B); stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2); and when movement becomes unreliable or noisy, there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (e.g., fMRI, TMS, EEG) are needed to better understand where and how it may be instantiated in the brain; nevertheless, this work provides a starting point for better understanding the intrinsic connection between time and movement.
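The frequency-tagging logic used in this line of work on rhythm compares spectral amplitude at the beat frequency and its harmonics against the rest of the low-frequency spectrum. A simplified index along those lines (a sketch, not the authors' exact measure; the tolerance `tol` and the number of harmonics are arbitrary choices):

```python
import numpy as np

def beat_emphasis(envelope, fs, beat_hz, harmonics=3, tol=0.05):
    """Fraction of spectral amplitude at beat-related frequencies
    (beat_hz and its first few harmonics) relative to all frequencies
    up to the highest harmonic considered."""
    # amplitude spectrum of the mean-removed signal envelope
    amp = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
    band = freqs <= beat_hz * harmonics + tol
    # mark bins within tol of the beat frequency or its harmonics
    beat_bins = np.zeros_like(freqs, dtype=bool)
    for h in range(1, harmonics + 1):
        beat_bins |= np.abs(freqs - h * beat_hz) <= tol
    return amp[beat_bins & band].sum() / amp[band].sum()
```

A perfectly periodic envelope at the beat rate scores near 1, while broadband noise scores near the fraction of bins that are beat-related; applied to EEG spectra, such an index can quantify whether the neural response emphasizes beat periodicities beyond what is present in the stimulus.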
How what you do shapes what you see
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial to how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive 'holographic' virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt ('critical') binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in 'susceptibility'), even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity in favor of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Prosody in the voice, face, and hands changes which words you hear
Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) as well as suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.
Richly structured reward predictions in dopaminergic learning circuits
Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes, using these to guide behavior in even quite simple reward learning tasks.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed-loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single-cell and even the molecular level, where "actions" take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal's motor actions directly influence sensory input on short timescales, and higher-level information about goals and intended actions is continually updated on the basis of current and past actions. Studying the brain in a closed-loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed-loop brain-machine interfaces.
Signatures of criticality in efficient coding networks
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibits signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears either super-critical or sub-critical, respectively. This result suggests that two influential, and previously disparate, theories of neural processing optimization—efficient coding and criticality—may be intimately related.
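The avalanche analysis referred to here bins population spiking in time and measures the sizes of contiguous active periods; criticality predicts a power-law size distribution. A minimal sketch of both steps (not the authors' code; the exponent estimator uses the standard Clauset-style maximum-likelihood approximation for discrete data):

```python
import numpy as np

def avalanche_sizes(counts):
    """Split a binned population spike-count series into avalanches:
    maximal runs of nonzero bins; size = total spikes in the run."""
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:  # flush a run that reaches the end of the series
        sizes.append(current)
    return np.array(sizes)

def powerlaw_slope(sizes, s_min=1):
    """Maximum-likelihood power-law exponent, discrete approximation:
    alpha = 1 + n / sum(log(s / (s_min - 0.5)))."""
    s = sizes[sizes >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))
```

In practice one would also compare the power-law fit against alternatives (e.g., exponential or lognormal) before claiming a signature of criticality; the exponent alone is not decisive.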
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
Learning through the eyes and ears of a child
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on vision-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child's first-person experience.
How the brain uses experience to construct its multisensory capabilities
This talk will not be recorded
A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time
Any sensory experience of the world, from the touch of a caress to the smile on a friend's face, is embedded in time and is often associated with the perception of its flow. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. Some empirical evidence shows the existence of this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those stimuli. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles offers a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies showing how concurrent changes in visual stimulus duration, speed, contrast and numerosity shape and modulate the brain's and pupil's responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.
The sense of agency as an explorative role in our perception and action
The sense of agency refers to the subjective feeling of controlling one's own actions and, through them, external events. Why is this subjective feeling important for humans? Is it just a by-product of our actions? Previous studies have shown that the sense of agency can affect the intensity of sensory input because we predict the input from our motor intention. However, my research has found that the sense of agency plays more roles than just prediction. It enhances perceptual processing of sensory input and potentially helps to harvest more information about the link between the external world and the self. Furthermore, our recent research found both indirect and direct evidence that the sense of agency is important for people's exploratory behaviors, and this may be linked to proximal exploitation of one's control in the environment. In this talk, I will also introduce the paradigms we use to study the sense of agency as a result of perceptual processes, and our findings on individual differences in this sense and their implications.
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Self-perception: mechanosensation and beyond
Brain-organ communication plays a crucial role in maintaining the body's physiological and psychological homeostasis and is controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are deeply buried inside the body and are not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned in conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors - cell biology of neurons - functionality of neural circuits - animal behaviors" to explore the molecular and neural mechanisms of self-perception. We focus mainly on three questions: (1) the molecular and cellular basis of proprioception and interoception; (2) the circuit mechanisms of sensory coding and the integration of internal and external information; and (3) the function of interoception in regulating behavioral homeostasis.
Behavioural Basis of Subjective Time Distortions
Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially-organized events—standard S, with constant duration, and comparison C, with duration varying trial-by-trial—are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps to counteract such serial distortion effect only when the constant S is in the first position, but not if the unpredictable C is in the first position. These results imply the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, something akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.
Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome
Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.
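Network control theory, as used in the abstract above, quantifies the energy needed to steer linear dynamics x' = Ax + Bu between brain states; the minimum-energy solution involves the finite-horizon controllability Gramian. A rough numerical sketch under assumed linear dynamics (the matrices below are placeholders, not empirical connectomes, and the quadrature is a simple trapezoid rule):

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, B, x0, xT, T=1.0, n_steps=1000):
    """Minimum input energy to drive x' = Ax + Bu from x0 to xT in
    time T, via the finite-horizon controllability Gramian
    W = integral_0^T exp(At) B B^T exp(A^T t) dt."""
    n = A.shape[0]
    dt = T / n_steps
    W = np.zeros((n, n))
    for k in range(n_steps + 1):
        eAt = expm(A * k * dt)
        M = eAt @ B @ B.T @ eAt.T
        w = 0.5 if k in (0, n_steps) else 1.0  # trapezoid weights
        W += w * M * dt
    # energy of the optimal input: d^T W^{-1} d,
    # where d is the reachability gap from the free evolution of x0
    d = xT - expm(A * T) @ x0
    return float(d @ np.linalg.solve(W, d))
```

Comparing this energy for bottom-up versus top-down state pairs (with A built from a directed connectome) is one way to expose the signaling asymmetries the abstract describes; note that shorter horizons generally demand more energy.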
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
PIEZO2 in somatosensory neurons coordinates gastrointestinal transit
The transit of food through the gastrointestinal tract is critical for nutrient absorption and survival, and the gastrointestinal tract has the ability to initiate motility reflexes triggered by luminal distention. This complex function depends on the crosstalk between extrinsic and intrinsic neuronal innervation within the intestine, as well as local specialized enteroendocrine cells. However, the molecular mechanisms and the subset of sensory neurons underlying the initiation and regulation of intestinal motility remain largely unknown. Here, we show that humans lacking PIEZO2 exhibit impaired bowel sensation and motility. Piezo2 in mouse dorsal root but not nodose ganglia is required to sense gut content, and this activity slows down food transit rates in the stomach, small intestine, and colon. Indeed, Piezo2 is directly required to detect colon distension in vivo. Our study unveils the mechanosensory mechanisms that regulate the transit of luminal contents throughout the gut, a critical process for proper digestion, nutrient absorption, and waste removal. These findings lay the foundation for future work to identify the highly regulated interactions between sensory neurons, enteric neurons and non-neuronal cells that control gastrointestinal motility.
Competition and integration of sensory signals in a deep reinforcement learning agent
Bernstein Conference 2024
Cortical feedback shapes high order structure of population activity to improve sensory coding
Bernstein Conference 2024
Efficient nonlinear receptive field estimation across processing stages of sensory systems
Bernstein Conference 2024
Homeostatic information transmission as a principle for sensory coding during movement
Bernstein Conference 2024
Information flow in the somatosensory system : From Mechanoreceptor to Cortex
Bernstein Conference 2024
Joint coding of stimulus and behavior by flexible adjustments of sensory tuning in primary visual cortex
Bernstein Conference 2024
Low-dimensional sensory representations early in development facilitate receptive field formation
Bernstein Conference 2024
Model Selection in Sensory Data Interpretation
Bernstein Conference 2024
Modulation of Spontaneous Activity Patterns in Developing Sensory Cortices via Inhibition
Bernstein Conference 2024
Neuronal Heterogeneity Enhances Sensory Integration and Processing
Bernstein Conference 2024
Non-feedforward architectures enable diverse multisensory computations
Bernstein Conference 2024
Recurrence in temporal multisensory processing
Bernstein Conference 2024
Evolution of neural activity in circuits bridging sensory and abstract knowledge
COSYNE 2022
Integration of infant sensory cues and internal states for maternal motivated behaviors
COSYNE 2022
Investigation of a multilevel multisensory circuit underlying female decision making in Drosophila
COSYNE 2022
Isolated correlates of somatosensory perception in the posterior mouse cortex
COSYNE 2022
Learning to combine sensory evidence and contextual priors under ambiguity
COSYNE 2022
Multiple stimulus features are encoded by single mechanosensory neurons in insect wings
COSYNE 2022
Multi-task representations across human cortex transform along a sensory-to-motor hierarchy
COSYNE 2022
The operating regime of primate sensory cortex
COSYNE 2022
A parallel channel of state-dependent sensory signaling by the cholinergic basal forebrain
COSYNE 2022
Representation of sensory uncertainty by neuronal populations in macaque primary visual cortex
COSYNE 2022
Sensory specific modulation of neural variability facilitates perceptual inference
COSYNE 2022
Sensory feedback can drive adaptation in motor cortex and facilitate generalization
COSYNE 2022
Sensory tuning in neuronal movement commands
COSYNE 2022
Structured random receptive fields enable informative sensory encodings
COSYNE 2022
Adversarial-inspired autoencoder framework for salient sensory feature extraction
Bernstein Conference 2024