Perception
Prof. Shu-Chen Li
2-year 100% research associate position in developmental cognitive neuroscience and EEG research at TU Dresden, Germany. At the Faculty of Psychology, the Chair of Lifespan Developmental Neuroscience offers a position as Research Associate (m/f/x) (subject to personal qualification, employees are remunerated according to salary group E 13 TV-L), starting as soon as possible. The position is initially limited until September 30, 2025, with the option for extension. The period of employment is governed by the Fixed Term Research Contracts Act (Wissenschaftszeitvertragsgesetz - WissZeitVG). The position offers the chance to obtain further academic qualification. The Chair of Lifespan Developmental Neuroscience investigates neurocognitive mechanisms underlying perceptual, cognitive, and motivational development across the lifespan. The main themes of our research are neurofunctional mechanisms underlying lifespan development of memory, cognitive control, reward processing, decision making, and multisensory perception. We also pursue applied research to study the effects of behavioral intervention, non-invasive brain stimulation, and digital technologies in enhancing functional plasticity for individuals of different ages. We utilize a broad range of neurocognitive (e.g., EEG, fNIRS, fMRI, tDCS) and computational methods. The lab has several testing rooms and is equipped with multiple EEG (64-channel and 32-channel) and fNIRS systems, as well as eye-tracking and virtual-reality devices. The MRI scanner (3T) and TMS device can be accessed through the university’s NeuroImaging Center. TUD is a university of excellence supported by the DFG, which offers outstanding research opportunities. Researchers in this chair are involved in large research consortia and clusters, such as the DFG SFB 940 „Volition and Cognitive Control“ and DFG EXC 2050 „Tactile Internet with Human-in-the-Loop“. Tasks: research in the field of lifespan developmental cognitive neuroscience.
The research topics will depend on the fit between the candidate’s research interests, expertise, and ongoing projects in the chair, particularly the DFG-funded research project Tec4Tic; scientific teaching (1 bachelor- or master-level seminar per semester for students majoring in psychology). Topics for the seminars should cover neurocognitive mechanisms of cognitive, motivational, or perceptual development.
Belen Pardi, PhD
We are looking for a motivated and talented scientist to join the Neuronal Circuits for Memory and Perception - Pardi lab, at the Institute of Psychiatry and Neuroscience of Paris (IPNP), as a 2-year funded postdoc, from ~mid 2023, with the possibility of extension. The Pardi lab aims to understand how brain circuits use internal information to transform sensations into perceptual representations, and how this process is affected in psychiatric disorders. The successful applicant will investigate the function of auditory thalamo-cortical loops in these processes, by combining mouse behavior with in vivo 2-photon calcium imaging, in vitro electrophysiology and optogenetics.
Dr Flavia Mancini
This is an opportunity for a highly creative and skilled pre-doctoral Research Assistant to join the dynamic and multidisciplinary research environment of the Computational and Biological Learning research group (https://www.cbl-cambridge.org/), Department of Engineering, University of Cambridge. We are looking for a Research Assistant to work on projects related to statistical learning and contextual inference in the human brain. We have a particular focus on learning of aversive states, as this has strong clinical significance for chronic pain and mental health disorders. The RA will be supervised by Dr Flavia Mancini (MRC Career Development Fellow and Head of the Nox Lab, www.noxlab.org), and is expected to collaborate with theoretical and experimental colleagues in Cambridge, Oxford and abroad. The post holder will be located in central Cambridge, Cambridgeshire, UK. As a general approach, we combine statistical learning tasks in humans and computational modelling (using Bayesian inference, reinforcement learning, deep learning and neural networks) with neuroimaging methods (including 7T fMRI). The successful candidate will strengthen this approach and be responsible for designing experiments, and for collecting and analysing behavioural and fMRI data using computational modelling techniques. The key responsibilities and duties are: ideating and conducting research studies on statistical/aversive learning, combining behavioural tasks, computational modelling (using Bayesian inference, reinforcement learning, deep learning and/or neural networks) and fMRI in healthy volunteers and chronic pain patients; disseminating research findings; and maintaining and developing technical skills to expand their scientific potential. More info and to apply: https://www.jobs.cam.ac.uk/job/35905/
Professors Yale Cohen and Jennifer Groh
Yale Cohen (U. Penn; https://auditoryresearchlaboratory.weebly.com/) and Jennifer Groh (Duke U.; www.duke.edu/~jmgroh) seek a full-time post-doctoral scholar. Our labs study visual, auditory, and multisensory processing in the brain using neurophysiological and computational techniques. We have a newly funded NIH grant to study the contribution of corticofugal connectivity in non-human primate models of auditory perception. The work will take place at the Penn site. This will be a full-time, 12-month renewable appointment. Salary will be commensurate with experience and consistent with NIH NRSA stipends. To apply, send your CV along with contact information for 2 referees to: compneuro@sas.upenn.edu. For questions, please contact Yale Cohen (ycohen@pennmedicine.upenn.edu). Applications will be considered on a rolling basis, and we anticipate a summer 2022 start date. Penn is an Affirmative Action / Equal Opportunity Employer committed to providing employment opportunity without regard to an individual’s age, color, disability, gender, gender expression, gender identity, genetic information, national origin, race, religion, sex, sexual orientation, or veteran status.
Professor Fiona Newell
Applications are invited for the role of Research Assistant at the Institute of Neuroscience in Trinity College (TCIN) to work in the Multisensory Cognition Lab headed by Prof. Fiona Newell. The Multisensory Cognition Lab is generally interested in all aspects of human perception based on vision, hearing and touch. The main project associated with this role is a collaboration with the TILDA project in Trinity College Dublin. For more information about TILDA please see here (https://tilda.tcd.ie/). The candidate will participate in regular lab and collaborator meetings and learn about diverse methodologies in the investigation of perception in older adults. The Research Assistant will join an existing team of PhD students and postdoctoral researchers and will have the opportunity to collaborate with colleagues within the Institute of Neuroscience, University College Cork, and the TILDA project.

Standard Duties and Responsibilities of the Post: The Research Assistant will be expected to support the administration and management of the project (e.g. ethical approval, recruitment of participants, booking lab usage). They will also be required to help with the research, including data processing. The Research Assistant will also be involved in dissemination and outreach work, including maintaining the lab website, social media, and the organisation of a public event during the year.
Professor Fiona Newell
Applications are invited for the role of Research Assistant at the Institute of Neuroscience in Trinity College (TCIN) to work in the Multisensory Cognition Lab headed by Prof. Fiona Newell. The Multisensory Cognition Lab is generally interested in all aspects of human perception based on vision, hearing and touch. The Research Assistant will join a project aimed at investigating object recognition in children and in adults. The research adopts a multidisciplinary approach involving cognitive neuroscience, statistical modelling, psychophysics and computer science, particularly Virtual Reality. The candidate will participate in regular lab and collaborator meetings and learn about diverse methodologies in perceptual science. The position is funded for 1 year with a possibility of continuation for another year. Successful candidates are expected to take up the position as soon as possible, and ideally no later than March 2022. The Research Assistant will join a research team of PhD students and postdoctoral researchers and will have the opportunity to collaborate with colleagues within the Institute of Neuroscience and industrial partners. The group has a dedicated laboratory equipped with state-of-the-art facilities for behavioural testing, including eye tracking and VR technology (HTC Vive and Oculus). TCIN also houses a research-dedicated MRI scanner, accessible to all principal investigators and their groups. The Research Assistant will be expected to support the administration and management of the project (e.g. ethical approval, project website, social media, recruitment of participants, setting up data storage protocols etc.). They will also be required to help with the research, including stimulus creation (i.e. collating and building a database of visual, haptic and auditory stimuli for experiments on multisensory perception), participant testing and data collection.
The Research Assistant will also be involved in the initial stages of setting up and testing using an eye tracker (Tobii or Eyelink) and VR/AR apparatus (Oculus or HTC Vive) with other team members and collaborators.
Prof David Brang
We are seeking a full-time post-doctoral research fellow to study computational and neuroscientific models of perception and cognition. The research fellow will be jointly supervised by Dr. David Brang (https://sites.lsa.umich.edu/brang-lab/) and Dr. Zhongming Liu (https://libi.engin.umich.edu). The goal of this collaboration is to build computational models of cognitive and perceptual processes using data combined from electrocorticography (ECoG) and fMRI. The successful applicant will also have freedom to conduct additional research based on their interests, using a variety of methods -- ECoG, fMRI, DTI, lesion mapping, and EEG. The ideal start date is from spring to fall 2021, and the position is expected to last for at least two years, with the possibility of extension for subsequent years. We are also recruiting a Post-Doc for research on multisensory interactions (particularly how vision modulates speech perception) using cognitive neuroscience techniques, or to help with our large-scale brain tumor collaboration with Shawn Hervey-Jumper at UCSF (https://herveyjumperlab.ucsf.edu). In this latter collaboration we collect iEEG (from ~50 patients/year) and lesion mapping data (from ~150 patients/year) in patients with a brain tumor to study sensory and cognitive functions. The goals of this project are to better understand the physiology of tumors, study causal mechanisms of brain functions, and generalize iEEG/ECoG findings from epilepsy patients to a second patient population.
Melissa Caras
We are seeking a highly motivated applicant to join our team as a full-time research technician studying the neural basis of auditory perceptual learning. The successful candidate will be responsible for managing daily laboratory activities, including maintaining the animal colony, ordering supplies, preparing common-use solutions, and overseeing lab safety compliance. In addition, the hired applicant will support ongoing projects in the lab by training and testing Mongolian gerbils on auditory detection and discrimination tasks, assisting with or performing survival surgeries, performing perfusions, and processing and imaging histological tissue. The candidate will have the opportunity to gain experience with a number of techniques, including in vivo electrophysiology, pharmacology, fiber photometry, operant conditioning, chemogenetics, and/or optogenetics. This position is an ideal fit for an individual looking to gain independent research experience before applying to graduate or medical school. This is a one-year position, with the option to renew for a second year.
Dr. Melissa Caras
We are looking for a postdoctoral fellow to study neuromodulatory mechanisms supporting auditory perceptual learning in Mongolian gerbils. The successful applicant will measure and manipulate neuromodulatory release, and assess its impact on cortical activity in freely-moving animals engaged in auditory detection tasks. A variety of techniques will be used, including in vivo multichannel electrophysiology and pharmacology, fiber photometry, novel genetically-encoded fluorescent biosensors, chemogenetics and/or optogenetics. The candidate will be highly involved in all aspects of the research, from design to publication, and will additionally have the opportunity to mentor graduate and undergraduate students.
Prof. Jim Torresen
The goal of the position is to create prediction methods for proactive planning of future robot actions and to design robot acting mechanisms for adaptive responses, ranging from quick and intuitive to slower and well-reasoned. We combine sensing across multiple modalities with learned knowledge to predict outcomes and choose the best actions. The goal is to transfer these skills to human-robot interaction in home scenarios, including the support of everyday tasks and physical rehabilitation. The position involves implementation and research in robot perception and control for these tasks. User studies through human-robot interaction experiments are to be performed.
N/A
The interdisciplinary M.Sc. Program in Cognitive Systems combines courses from neural/connectionist and symbolic Artificial Intelligence, Machine Learning, and Cognitive Psychology, to explore the fundamentals of perception, attention, learning, mental representation, and reasoning, in humans and machines. The M.Sc. Program is offered jointly by two public universities in Cyprus (the Open University of Cyprus and the University of Cyprus) and has been accredited by the national Quality Assurance Agency. The program is directed by academics from the participating universities, and courses are offered in English via distance learning by an international team of instructors.
N/A
The position holder will be a member of the Hessian Center for Artificial Intelligence (hessian.AI), conducting research at the Center, and will also be a member of the Centre for Cognitive Science. The scientific focus of the position is on the computational and algorithmic modeling of behavioral data to understand the human mind. Exemplary research topics include computational-level models of perception, cognition, decision making, action, and learning, as well as of extended behavior and social interactions in humans; algorithmic models that are able to simulate, predict, and explain human behavior; and model-driven behavioral research on human cognition. The professorship is expected to strengthen the Hessian Center for Artificial Intelligence and the research focus on Cognitive Science of TU Darmstadt’s Human Sciences department. Depending on the candidate’s profile, there is the opportunity to participate in joint research projects currently running at TU Darmstadt. This in particular includes the state-funded cluster projects “The Adaptive Mind (TAM)” and “The Third Wave of Artificial Intelligence (3AI)”. In addition to excellent scientific credentials, we seek a strong commitment to teaching in the department’s Bachelor’s and Master’s programs in Cognitive Science. Experience in attracting third-party funding as well as participation in academic governance is expected.
Prof. Shu-Chen Li
The Chair of Lifespan Developmental Neuroscience investigates neurocognitive mechanisms underlying perceptual, cognitive, and motivational development across the lifespan. The main themes of our research are neurofunctional mechanisms underlying lifespan development of episodic and spatial memory, cognitive control, reward processing, decision making, perception and action. We also pursue applied research to study the effects of behavioral intervention, non-invasive brain stimulation, and digital technologies in enhancing functional plasticity for individuals of different ages. We utilize a broad range of neurocognitive (e.g., EEG, fNIRS, fMRI, tDCS) and computational methods. The position announced here is embedded in a newly established research group funded by the DFG (FOR5429), with a focus on modulating brain networks for memory and learning by using focalized transcranial electrical stimulation (tES). The subproject with which this position is associated will study effects of focalized tES on value-based sequential learning at the behavioral and brain levels in adults. The data collection for this subproject will mainly be carried out at the Berlin site (Center for Cognitive Neuroscience, FU Berlin).
Prof. Jim Torresen
The goal of the position is to create prediction methods for proactive planning of future robot actions and to design robot acting mechanisms for adaptive responses, ranging from quick and intuitive to slower and well-reasoned. We combine sensing across multiple modalities with learned knowledge to predict outcomes and choose the best actions. The goal is to transfer these skills to human-robot interaction in home scenarios, including the support of everyday tasks and physical rehabilitation. Thus, the position involves implementation and research in robot perception and control for these tasks. User studies through human-robot interaction experiments are to be performed. A PhD fellow and a researcher have already been hired for the project and will complement the position holder in performing the research outlined above.
N/A
We are seeking an outstanding researcher with expertise in computational or mathematical psychology to join the Complex Human Data Hub and contribute to the school’s research and teaching program. The CHDH has areas of strength in memory, perception, categorization, decision-making, language, cultural evolution, and social network analysis. We welcome applicants from all areas of mathematical psychology, computational cognitive science, computational behavioural science and computational social science and are especially interested in applicants who can build upon or complement our existing strengths. We particularly encourage applicants whose theoretical approaches and methodologies connect with social network processes and/or culture and cognition, or whose work links individual psychological processes to broader societal processes. We especially encourage women and other minorities to apply.
Boris Gutkin
A three-year post-doctoral position in theoretical neuroscience is open to explore the mechanisms of interaction between interoceptive cardiac and exteroceptive tactile inputs at the cortical level. We aim to develop and validate a computational model of cardiac and somatosensory cortical circuit dynamics in order to determine the conditions under which interactions between exteroceptive and interoceptive inputs occur and which underlying mechanisms (e.g., phase-resetting, gating, phasic arousal) best explain experimental data. The postdoctoral fellow will be based at the Group for Neural Theory at LNC2, in Boris Gutkin’s team, with strong interactions with Catherine Tallon-Baudry’s team. LNC2 is located in the center of Paris within the Cognitive Science Department at Ecole Normale Supérieure, with numerous opportunities to interact with the Paris scientific community at large, in a stimulating and supportive work environment. The Group for Neural Theory provides a rich environment and local community for theoretical neuroscience. Lab life is in English; speaking French is not a requirement. Salary according to experience and French rules. The starting date is in the first semester of 2024.
Boris Gutkin, Catherine Tallon-Baudry
A three-year post-doctoral position in theoretical neuroscience is open to explore the mechanisms of interaction between interoceptive cardiac and exteroceptive tactile inputs at the cortical level. We aim to develop data-based computational models of cardiac and somatosensory cortical circuit dynamics. Building on these models, we will determine the conditions under which interactions between exteroceptive and interoceptive inputs occur and which underlying mechanisms (e.g., phase-resetting, gating, phasic arousal) best explain experimental data.
Alona Fyshe
The Department of Psychology, University of Alberta, invites applications for a tenure-track position at the rank of Assistant Professor in Artificial Intelligence and Biological Cognition, with a start date as early as July 1, 2024. Exceptional candidates may be considered for hiring at the rank of Associate Professor. The position is part of a cluster hire at the intersection of AI/ML and other areas of research excellence within the University of Alberta, which include Health, Energy, and Indigenous Initiatives in health and humanities, among others. The successful candidate will become an Amii Fellow, joining a highly collegial institute of world-class Artificial Intelligence and Machine Learning researchers, and will have access to Amii internal funding resources, administrative support, and a highly collaborative environment. The successful candidate will be nominated by Amii for a Canada CIFAR Artificial Intelligence (CCAI) Chair, which includes research funding for at least five years.
Elisa Raffaella Ferre
The School of Psychological Sciences, Birkbeck, University of London is seeking 2 open-ended Lecturers (tenure-track Assistant Professors) with a focus on computational modelling and psychological processes, in areas such as Cognitive Science or Computational Cognitive Neuroscience. The successful candidates will have an emerging research track record in cognitive neuroscience, particularly with practical experience of neuroimaging and fMRI, and/or in cognitive science, including experimental cognitive psychology and computational modelling. Their research interests should align with the department's existing research themes: Perception, Attention, Action and Emotion; Cognitive Computational Modelling; Brain and Cognitive Development; and Health and Lived Experience.
Brad Wyble
The Department of Psychology at The Pennsylvania State University, University Park, PA, invites applications for a full-time Assistant or Associate Professor of Cognitive Psychology with an anticipated start date of August 2025. Areas of specialization within cognitive psychology are open and may include (but are not limited to) such topics as cognitive control, creativity, computational approaches and modeling, motor control, language science, memory, attention, perception, and decision making. A record of collaboration is desirable for both ranks. Substantial collaboration opportunities exist within the department that align with the department’s cross-cutting research themes and across campus. Current faculty in the cognitive area are active in units including the Center for Language Sciences, the Social Life and Engineering Sciences Imaging Center, the Center for Healthy Aging, the Center for Brain, Behavior, and Cognition, and the Applied Research Lab. Responsibilities of the Assistant or Associate Professor of Cognitive Psychology include maintaining a strong record of publications in top outlets. This position will include resident instruction at the undergraduate and graduate level and normal university service, based on the candidate’s qualifications. A Ph.D. in Psychology or a related field is required by the appointment date for both ranks. Candidates for the tenure-track Assistant Professor of Cognitive Psychology position must have demonstrated ability as a researcher, scholar, and teacher in a relevant field and have evidence of growth in scholarly achievement. Duties will involve a combination of teaching, research, and service, based on the candidate’s qualifications. Candidates for the tenure-track Associate Professor of Cognitive Psychology position must have demonstrated excellence as a researcher, scholar, and teacher in a relevant field and have an established reputation in scholarly achievement.
Duties will involve a combination of teaching, research, and service, based on the candidate’s qualifications. The ideal candidate will have a strong record of publications in top outlets and a history of or potential for external funding. In addition, successful candidates must either have demonstrated a commitment to building an inclusive, equitable, and diverse campus community, or describe one or more ways they would envision doing so, given the opportunity. Review of applications will begin immediately and will continue until the position is filled. Interested candidates should submit an online application at Penn State’s Job Posting Board and should upload the following application materials electronically: (1) a cover letter of application, (2) concise statements of research and teaching interests, (3) a CV, and (4) three selected (re)prints. System limitations allow for a total of 5 documents (5mb per document) as part of your application. Please combine materials to meet the 5-document limit. In addition, please arrange to have three letters of recommendation sent electronically to PsychApplications@psu.edu with the subject line: “Cognitive Psychology”. Questions regarding the application process can be emailed to PsychApplications@psu.edu, and questions regarding the position can be sent to the search chair: cogsearch@psu.edu. The Pennsylvania State University is committed to and accountable for advancing diversity, equity, and inclusion in all of its forms. We embrace individual uniqueness, foster a culture of inclusion that supports both broad and specific diversity initiatives, leverage the educational and institutional benefits of diversity, and engage all individuals to help them thrive. We value inclusion as a core strength and an essential element of our public service mission.
Penn State offers competitive benefits to full-time employees, including medical, dental, vision, and retirement plans, in addition to 75% tuition discounts (including for a spouse and dependent children up to the age of 26) and paid holidays.
Prof. Angela Yu
Prof. Angela Yu recently moved from UCSD to TU Darmstadt as the Alexander von Humboldt AI Professor, and has a number of PhD and postdoc positions available in her growing “Computational Modeling of Intelligent Systems” research group. Applications are solicited from highly motivated and qualified candidates who are interested in interdisciplinary research at the intersection of natural and artificial intelligence. Prof. Yu’s group uses mathematically rigorous and algorithmically diverse tools to understand the nature of the representations and computations that give rise to intelligent behavior. There is a fair amount of flexibility in the actual choice of project, as long as the project excites both the candidate and Prof. Yu. For example, Prof. Yu is currently interested in investigating scientific questions such as: How is socio-emotional intelligence similar to or different from cognitive intelligence? Is there a fundamental tradeoff, given the prevalence of autism among scientists and engineers? How can AI be taught socio-emotional intelligence? How are artificial intelligence (e.g. as demonstrated by large language models) and natural intelligence (e.g. as measured by IQ tests) similar or different in their underlying representations or computations? What roles do intrinsic motivations such as curiosity and computational efficiency play in intelligent systems? How can insights about artificial intelligence improve the understanding and augmentation of human intelligence? Are capacity limitations with respect to attention and working memory a feature or a bug in the brain? How can AI systems be enhanced by attention or working memory? More broadly, Prof. Yu’s group employs and develops diverse machine learning and mathematical tools, e.g. 
Bayesian statistical modeling, control theory, reinforcement learning, artificial neural networks, and information theory, to explain various aspects of cognition important for intelligence: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions. Applicants who have experience with two or more of the technical areas, and/or one or more of the application areas, are highly encouraged to apply. As part of the Centre for Cognitive Science at TU Darmstadt, the Hessian AI Center, as well as the Computer Science Department, Prof. Yu’s group members are encouraged and expected to collaborate extensively with preeminent researchers in cognitive science and AI, both nearby and internationally. All positions will be based at TU Darmstadt, Germany. Starting dates for the positions are flexible. Salaries are commensurate with experience and expertise, and highly competitive with respect to U.S. and European standards. The working language in the group and within the larger academic community is English. Fluency in German is not required; the university provides free German lessons for interested scientific staff.
Chris Eliasmith
The postdoctoral position will be hosted in the CNRG, with a principal focus on neural modeling to build the next version of the Spaun brain model, the world’s largest functional brain model. The project integrates spiking deep neural networks, motor control, probabilistic inference, navigation, perception and cognition to develop a state-of-the-art, large-scale, spiking, whole-brain model. Applicants should have a PhD, with demonstrated skills in at least one of those areas and a willingness to learn about the others. This project leverages the CNRG’s existing expertise in using neural networks for large-scale brain modeling, originally demonstrated in 2012 with the first version of Spaun. A subsequent version in 2018 significantly extended performance. The latest version currently being built by the CNRG will again break new barriers in the scale and sophistication of whole brain models. Unlike past models, it will be embedded in a sophisticated 3D environment, yet retain the ability to perform a wide variety of tasks, from simple perceptual and motor tasks to challenging intelligence tests. Overall, the long-term goal of the project is to advance the state-of-the-art in large-scale brain models.
N/A
The Department of Cognitive Neurobiology of Caspar Schwiedrzik at Ruhr-University Bochum is looking for a highly motivated PhD student interested in the neural basis of mental flexibility, with a focus on the neural mechanisms of high-dimensional visual category learning. The lab seeks to understand the cortical basis and computational principles of perception and experience-dependent plasticity in the brain. To this end, we use a multimodal approach including fMRI-guided electrophysiological recordings in rodents and non-human primates, and fMRI and ECoG in humans. The PhD student will play a key role in our research efforts in this area. The lab is located at Ruhr-University Bochum and the German Primate Center. At both locations, the lab is embedded in interdisciplinary research centers with international faculty and students pursuing cutting-edge research in cognitive and computational neuroscience. The PhD student will have access to a new imaging center with a dedicated 3T research scanner, electrophysiology, and behavioral setups. The project will be conducted in close collaboration with the labs of Fabian Sinz, Alexander Gail, and Igor Kagan. The project investigates the neural mechanisms of high-dimensional visual category learning, utilizing functional magnetic resonance imaging (fMRI) in combination with computational modelling and behavioral testing in humans. It is funded by an ERC Consolidator Grant (acronym DimLearn; “Flexible Dimensionality of Representational Spaces in Category Learning”). The PhD student’s project will focus on developing new category learning paradigms to investigate the neural basis of flexible multi-task learning in humans using fMRI. In addition, the PhD student will cooperate with other lab members on parallel computational investigations using artificial neural networks, as well as comparative research exploring the same questions in non-human primates.
Top-down control of neocortical threat memory
Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for processing experience-dependent top-down threat information during perception, one of the most fundamentally important forms of sensation.
Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism
Thalamic networks, at the core of thalamocortical and thalamosubcortical communications, underlie processes of perception, attention, memory, emotion, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and structural and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in these disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation for understanding the heterogeneous disruption of these networks in sleep disorders, attention deficits, and the cognitive and affective impairments of schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can thus be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze typing, robotic teleoperation, and human-robot interaction.
Developmental and evolutionary perspectives on thalamic function
Brain organization and function are complex topics. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss developmental and evolutionary (comparative anatomy) data on the thalamus and how it fits within forebrain networks. I will address questions including when the thalamus appeared in evolution, how the thalamus is organized across the vertebrate lineage, and how changes in the organization of forebrain networks can affect behavioral repertoires.
The Unconscious Eye: What Involuntary Eye Movements Reveal About Brain Processing
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Multisensory perception in the metaverse
The hippocampus, visual perception and visual memory
Reading Scenes
Multisensory computations underlying flavor perception and food choice
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain’s sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct brain responses compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion, frontal delta activity, which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
The representation of speech conversations in the human auditory cortex
Making Sense of Sounds: Cortical Mechanisms for Dynamic Auditory Perception
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Where are you Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics
Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to make sense of a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
Dynamics of braille letter perception in blind readers
Enhancing Real-World Event Memory
Memory is essential for shaping how we interpret the world, plan for the future, and understand ourselves, yet effective cognitive interventions for real-world episodic memory loss remain scarce. This talk introduces HippoCamera, a smartphone-based intervention inspired by how the brain supports memory, designed to enhance real-world episodic recollection by replaying high-fidelity autobiographical cues. It will showcase how our approach improves memory, mood, and hippocampal activity while uncovering links between memory distinctiveness, well-being, and the perception of time.
Guiding Visual Attention in Dynamic Scenes
Rethinking Attention: Dynamic Prioritization
Decades of research on the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly into our understanding of how attention is allocated in a top-down, bottom-up, or history-driven fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the “attentional platypuses,” so named because, within biological taxonomies, the platypus fits into neither the mammal nor the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of these “attentional platypuses” and then offer a new approach, Dynamically Weighted Prioritization, which stipulates that multiple factors impinge on the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which subsequently constrains and guides attention allocation. I propose that this new approach be considered a supplement to existing models of attention, especially those that emphasize categorical organizations.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Mind Perception and Behaviour: A Study of Quantitative and Qualitative Effects
Perceptual illusions we understand well, and illusions which aren’t really illusions
Imagining and seeing: two faces of prosopagnosia
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our findings shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy, that is, the witness’s ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and has been shown to maximise discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
Perception in Autism: Testing Recent Bayesian Inference Accounts
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Distinctive features of experiential time: Duration, speed and event density
William James’s notions of “time in passing” and the “stream of thoughts” may be two sides of the same coin that emerge from the brain segmenting the continuous flow of information into discrete events. Starting from that idea, we investigated how the content of a realistic scene impacts two distinct temporal experiences: felt duration and the speed of the passage of time. I will present the results of an online study in which we used a well-established experimental paradigm, the temporal bisection task, which we extended to passage-of-time judgments. 164 participants classified seconds-long videos of naturalistic scenes as short or long (duration), or slow or fast (passage of time). Videos contained a varying number and type of events. We found that a large number of events lengthened subjective duration and accelerated the felt passage of time. Surprisingly, participants were also faster at estimating their felt passage of time than duration. The perception of duration depended heavily on objective duration, whereas the felt passage of time scaled with the rate of change. Altogether, our results support a possible dissociation of the mechanisms underlying the two temporal experiences.
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film’s narration. In terms of time, editing techniques (e.g., elliptical editing, overlapping editing, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time-compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers’ time percepts. Here, we investigated how such time-manipulation techniques affect a scene’s perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimations of the presented scenes’ perceived durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated compared to that of compressed and real-time scenes, as was the duration of real-time scenes compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene’s perceived duration. We also found interactions between scene type and editing technique on time estimates, as a function of the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration.
Our findings are discussed in relation to current timing frameworks, as well as to attentional saliency algorithms that measure the visual saliency of the presented stimuli.
Ganzflicker: Using light-induced hallucinations to predict risk factors of psychosis
Rhythmic flashing light, or “Ganzflicker”, can elicit altered states of consciousness and hallucinations, bringing your mind’s eye out into the real world. What do you experience if you have a super mind’s eye, or none at all? In this talk, I will discuss how Ganzflicker has been used to simulate psychedelic experiences, how it can help us predict symptoms of psychosis, and even tap into the neural basis of hallucinations.
Are integrative, multidisciplinary, and pragmatic models possible? The #PsychMapping experience
This presentation delves into the necessity for simplified models in the field of psychological sciences to cater to a diverse audience of practitioners. We introduce the #PsychMapping model, evaluate its merits and limitations, and discuss its place in contemporary scientific culture. The #PsychMapping model is the product of an extensive literature review, initially within the realm of sport and exercise psychology and subsequently encompassing a broader spectrum of psychological sciences. This model synthesizes the progress made in psychological sciences by categorizing variables into a framework that distinguishes between traits (e.g., body structure and personality) and states (e.g., heart rate and emotions). Furthermore, it delineates internal traits and states from the externalized self, which encompasses behaviour and performance. All three components—traits, states, and the externalized self—are in a continuous interplay with external physical, social, and circumstantial factors. Two core processes elucidate the interactions among these four primary clusters: external perception, encompassing the mechanism through which external stimuli transition into internal events, and self-regulation, which empowers individuals to become autonomous agents capable of exerting control over themselves and their actions. While the model inherently oversimplifies intricate processes, the central question remains: does its pragmatic utility outweigh its limitations, and can it serve as a valuable tool for comprehending human behaviour?
Deepfake Detection in Super-Recognizers and Police Officers
Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from among the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test For Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments, we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP, and compared DDP between lab-identified SRs, non-SRs, and police officers whose face identity processing skills had been extensively tested using challenging material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP.
The Role of Spatial and Contextual Relations of real world objects in Interval Timing
In the real world, object arrangement follows a number of rules. Some rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to their contextual relations (i.e., semantic rules). Research has shown that violations of semantic rules influence interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether both semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments examining whether time dilation is obtained in the presence of any type of scene violation and what role attention plays in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts by utilizing a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, that their effect on timing is sensitive to attentional manipulations such as target contrast.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Recognizing Faces: Insights from Group and Individual Differences
Are integrative, multidisciplinary, and pragmatic models possible? The #PsychMapping experience
This presentation delves into the necessity for simplified models in the field of psychological sciences to cater to a diverse audience of practitioners. We introduce the #PsychMapping model, evaluate its merits and limitations, and discuss its place in contemporary scientific culture. The #PsychMapping model is the product of an extensive literature review, initially within the realm of sport and exercise psychology and subsequently encompassing a broader spectrum of psychological sciences. This model synthesizes the progress made in psychological sciences by categorizing variables into a framework that distinguishes between traits (e.g., body structure and personality) and states (e.g., heart rate and emotions). Furthermore, it delineates internal traits and states from the externalized self, which encompasses behaviour and performance. All three components—traits, states, and the externalized self—are in a continuous interplay with external physical, social, and circumstantial factors. Two core processes elucidate the interactions among these four primary clusters: external perception, encompassing the mechanism through which external stimuli transition into internal events, and self-regulation, which empowers individuals to become autonomous agents capable of exerting control over themselves and their actions. While the model inherently oversimplifies intricate processes, the central question remains: does its pragmatic utility outweigh its limitations, and can it serve as a valuable tool for comprehending human behaviour?
Characterising Representations of Goal Obstructiveness and Uncertainty Across Behavior, Physiology, and Brain Activity Through a Video Game Paradigm
The nature of emotions and their neural underpinnings remain debated. Appraisal theories such as the component process model propose that the perception and evaluation of events (appraisal) is the key to eliciting the range of emotions we experience. Here we study whether the framework of appraisal theories provides a clearer account for the differentiation of emotional episodes and their functional organisation in the brain. We developed a stealth game to manipulate appraisals in a systematic yet immersive way. The interactive nature of video games heightens self-relevance through the experience of goal-directed action or reaction, evoking strong emotions. We show that our manipulations led to changes in behaviour, physiology and brain activations.
Bayesian expectation in the perception of the timing of stimulus sequences
In the current virtual journal club Dr Di Luca will present findings from a series of psychophysical investigations where he measured sensitivity and bias in the perception of the timing of stimuli. He will present how improved detection with longer sequences and biases in reporting isochrony can be accounted for by optimal statistical predictions. Among his findings was also that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted to appear more regular. Such changes depend on whether the context in which these sequences are presented is also regular. Dr Di Luca will present a Bayesian model for the combination of dynamically updated expectations, in the form of a priori probability, with incoming sensory information. These findings contribute to the understanding of how the brain processes temporal information to shape perceptual experiences.
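As a rough illustration of the kind of Bayesian combination described above (a minimal sketch of a standard Gaussian precision-weighted update, not the model from the talk; all numerical values are hypothetical), a prior expectation about interval duration can be fused with a noisy sensory measurement, pulling a deviant interval toward the expected timing:

```python
# Precision-weighted fusion of a Gaussian prior with a Gaussian
# sensory measurement (standard Bayes update for conjugate Gaussians).
# Values are illustrative, not taken from the talk.

def fuse(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance after combining prior and observation."""
    w = prior_var / (prior_var + obs_var)          # weight on the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = (prior_var * obs_var) / (prior_var + obs_var)
    return post_mean, post_var

# A deviant 550 ms interval in an otherwise regular 500 ms sequence is
# perceptually pulled toward the expected timing, i.e. heard as more regular.
mean, var = fuse(prior_mean=500.0, prior_var=100.0, obs=550.0, obs_var=400.0)
print(mean, var)  # 510.0 80.0
```

With a tight prior (small `prior_var`), the percept is dominated by the expectation; with an unreliable prior, it follows the sensory measurement, which is the qualitative pattern such models are built to capture.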
Sensory Consequences of Visual Actions
We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection with the motor behavior that shapes its fundamental processes.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; no author of the paper will be joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
Multisensory perception, learning, and memory
Note the later start time!
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment.
In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
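As a minimal sketch of the core quantity the ERFA approach tracks (my illustration, not the authors' pipeline; the simulated signal and parameters are hypothetical), the instantaneous frequency of a narrow-band component can be estimated from the phase of its analytic signal:

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous frequency from the derivative of the analytic-signal
# phase. We simulate a 2 Hz oscillation that speeds up to 2.2 Hz
# halfway through, mimicking a frequency adjustment after a tempo change.
fs = 500.0                         # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
freq = np.where(t < 2, 2.0, 2.2)   # underlying tempo over time
phase = 2 * np.pi * np.cumsum(freq) / fs
x = np.sin(phase)

inst_phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)

# Away from the signal edges, the estimate recovers the underlying tempo.
print(inst_freq[200:800].mean())    # ≈ 2.0
print(inst_freq[1200:1800].mean())  # ≈ 2.2
```

The same logic, applied to a source-separated EEG component rather than a simulated sine, yields a time course of frequency adjustments that can be averaged around perturbation events.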
Varying the Effectiveness of Scene Context
Perceptions of responsiveness and rejection in romantic relationships. What are the implications for individuals and relationship functioning?
From birth, human beings need to be embedded in social ties to function best, because other individuals can provide us with a sense of belonging, which is a fundamental human need. One of the closest bonds we build throughout our life is with our intimate partners. When the relationship involves intimacy and both partners accept and support each other's needs and goals (perceived responsiveness), individuals experience an increase in relationship satisfaction as well as physical and mental well-being. However, feeling rejected by a partner may impair the feeling of connectedness and belonging, and affect emotional and behavioural responses. When we perceive our partner to be responsive to our needs or desires, we in turn naturally strive to respond positively and adequately to our partner's needs and desires. This implies that individuals are interdependent, and changes in one partner prompt changes in the other. Evidence suggests that partners regulate themselves and co-regulate each other in their emotional, psychological, and physiological responses. However, such processes may threaten the relationship when partners face stressful situations or interactions, like the transition to parenthood or rejection. Therefore, in this presentation, I will provide evidence for the role of perceptions of being accepted or rejected by a significant other on individual and relationship functioning, while considering the contextual settings. The three studies presented here explore romantic relationships, and how perceptions of rejection and responsiveness from the partner impact both individuals, their physiological and emotional responses, as well as their relationship dynamics.
Multisensory integration in peripersonal space (PPS) for action, perception and consciousness
Note the later time in the USA!
Predictive processing in older adults: How does it shape perception and sensorimotor control?
Visual-vestibular cue comparison for perception of environmental stationarity
Note the later time!
Vocal emotion perception at millisecond speed
The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independent of sight. However, the study of vocal emotion expression and perception is surprisingly far less developed than the study of emotion in faces. As a result, its neural and functional correlates remain elusive. As the voice represents a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, the dynamic neurocognitive operations that take place when we listen to vocal emotions will be specified, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.
Generating parallel representations of position and identity in the olfactory system
Using Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception
To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one's perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.
The contribution of mental face representations to individual face processing abilities
People largely differ with respect to how well they can learn, memorize, and perceive faces. In this talk, I address two potential sources of variation. One factor might be people’s ability to adapt their perception to the kind of faces they are currently exposed to. For instance, some studies report that those who show larger adaptation effects are also better at performing face learning and memory tasks. Another factor might be people’s sensitivity to perceive fine differences between similar-looking faces. In fact, one study shows that the brain of good performers in a face memory task shows larger neural differences between similar-looking faces. Capitalizing on this body of evidence, I present a behavioural study where I explore the relationship between people’s perceptual adaptability and sensitivity and their individual face processing performance.
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display which is linked with on-going brain activity and thus allows self-regulation of neural activity in specific brain regions associated with certain cognitive functions, and is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying the "double-blind" experimental design, where critically the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible? Or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or the absence of awareness, of certain information in human subjects. Fortunately, methods have been developed which employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, what was long held to be a wholly unconscious phenomenon, such as dreams or subliminal perception, has been overturned by more sensitive measures which show that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of experimental contingencies that can alter their cognition.
Vision for Real-Time Interactions with Objects and People
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer's facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one's own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly-sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.
The development of visual experience
Vision and visual cognition are experience-dependent, with likely multiple sensitive periods, but we know very little about the statistics of visual experience at the scale of everyday life and how they might change with development. By traditional assumptions, the world at the massive scale of daily life presents pretty much the same visual statistics to all perceivers. I will present an overview of our work on ego-centric vision showing that this is not the case. The momentary image received at the eye is spatially selective, dependent on the location, posture, and behavior of the perceiver. If a perceiver's location, possible postures, and/or preferences for looking at some kinds of scenes over others are constrained, then their sampling of images from the world, and thus the visual statistics at the scale of daily life, could be biased. I will present evidence on both low-level and higher-level visual statistics regarding developmental changes in the visual input over the first 18 months post-birth.
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. 
Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures. 
The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception with both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (i.e., fMRI, TMS, EEG, etc.) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
Bayesian inference and arousal modulation in spatial perception to mitigate stochasticity and volatility
Bernstein Conference 2024
Computational mechanisms of odor perception and representational drift in rodent olfactory systems
Bernstein Conference 2024
Dynamic perception in volatile environments: How relevant is the prior?
Bernstein Conference 2024
Feature-based letter perception – A neurocognitive plausible, transparent model approach
Bernstein Conference 2024
Modeling spatial and temporal attractive and repulsive biases in perception
Bernstein Conference 2024
Awake perception is associated with dedicated neuronal assemblies in cerebral cortex
COSYNE 2022
Causal inference can explain hierarchical motion perception and is reflected in neural responses in MT
COSYNE 2022
The interplay between prediction and integration processes in human perception
COSYNE 2022
Isolated correlates of somatosensory perception in the posterior mouse cortex
COSYNE 2022
Structure in motion: visual motion perception as online hierarchical inference
COSYNE 2022
Beyond perception: the sensory cortex as an associative engine during goal-directed learning
COSYNE 2023
Dissecting cortical and subcortical contributions to perception with white noise optogenetic inhibition
COSYNE 2023
Divisive normalization as a mechanism for hierarchical causal inference in motion perception
COSYNE 2023
Many perception tasks are highly redundant functions of their input data
COSYNE 2025
Mapping social perception to social behavior using artificial neural networks
COSYNE 2025
Active tool-use training in near and far distances does not change time perception in peripersonal or far space
FENS Forum 2024
Association of hallucinogen persisting perception disorder with trait neuroticism and mental health symptoms
FENS Forum 2024
Bayesian inference during implicit perceptual belief updating in dynamic auditory perception
FENS Forum 2024
Bayesian perceptual adaptation in auditory motion perception: A multimodal approach with EEG and pupillometry
FENS Forum 2024
Brainwide transformation of neural signals underlying perception
FENS Forum 2024
A circuit mechanism for hunger-state dependent shifts in perception in the pond snail Lymnaea stagnalis
FENS Forum 2024
Community-regulated ethics: Perception and resolution of ethical conflicts by online communities
FENS Forum 2024
Impact of musical experience on music perception in the elderly
FENS Forum 2024
Distinct effects of spatial summation and lateral inhibition in cold and warm perception
FENS Forum 2024
Does the perception of gravitational orientation, under variations in the subject's position, influence binocular fusion?
FENS Forum 2024
Dynamic perception in volatile environments: How relevant is the past when predicting the future?
FENS Forum 2024
Early cortical network deficits underlying abnormal stimulus perception in Shank3b+/- mice
FENS Forum 2024
The effects and interactions of top-down influences on speech perception
FENS Forum 2024
Electrophysiologic, transcriptomic, and morphologic plasticity of spinal inhibitory neurons to decipher atypical mechanosensory perception in Autism Spectrum Disorder
FENS Forum 2024
Fear-dependent brain state changes in perception and sensory representation in larval zebrafish
FENS Forum 2024
fMRI mapping of brain circuits during simple sound perception by awake rats
FENS Forum 2024
Impact of Alzheimer’s disease on non-visual light perception, suprachiasmatic nucleus connectivity, and sleep regulation
FENS Forum 2024
The impact of virtual reality on postoperative cognitive impairment and pain perception after surgery
FENS Forum 2024
Influence of expectations on pain perception: Evidence for predictive coding
FENS Forum 2024
An EEG investigation for individual differences in time perception: Unraveling neural dynamics through serial dependency
FENS Forum 2024
Modulation of neuropathic pain and tactile perception in spinal cord injury during an exoskeleton training program
FENS Forum 2024
Motor arrest by stimulation of the pedunculopontine nucleus disrupts perception of visual cue in a visuospatial cue task
FENS Forum 2024