Virtual Reality

Discover seminars, jobs, and research tagged with virtual reality across World Wide.
64 curated items: 32 seminars, 21 positions, 11 ePosters
Position

Professor Fiona Newell

Multisensory Cognition Lab, Institute of Neuroscience, Trinity College Dublin
Dublin, Ireland
Dec 5, 2025

Applications are invited for the role of Research Assistant at the Institute of Neuroscience in Trinity College (TCIN) to work in the Multisensory Cognition Lab headed by Prof. Fiona Newell. The Multisensory Cognition Lab is broadly interested in all aspects of human perception based on vision, hearing, and touch. The Research Assistant will join a project aimed at investigating object recognition in children and in adults. The research adopts a multidisciplinary approach involving cognitive neuroscience, statistical modelling, psychophysics, and computer science, particularly virtual reality. The candidate will participate in regular lab and collaborator meetings and will learn about diverse methodologies in perceptual science. The position is funded for one year, with the possibility of continuation for another year. Successful candidates are expected to take up the position as soon as possible, and ideally no later than March 2022.

The Research Assistant will join a research team of PhD students and postdoctoral researchers and will have the opportunity to collaborate with colleagues within the Institute of Neuroscience and with industrial partners. The group has a dedicated laboratory equipped with state-of-the-art facilities for behavioural testing, including eye tracking and VR technology (HTC Vive and Oculus). TCIN also houses a research-dedicated MRI scanner, accessible to all principal investigators and their groups.

The Research Assistant will be expected to support the administration and management of the project (e.g. ethical approval, project website, social media, recruitment of participants, setting up data storage protocols). They will also be required to help with the research, including stimulus creation (i.e. collating and building a database of visual, haptic and auditory stimuli for experiments on multisensory perception), participant testing, and data collection. The Research Assistant will also be involved in the initial stages of setting up and testing an eye tracker (Tobii or EyeLink) and VR/AR apparatus (Oculus or HTC Vive) with other team members and collaborators.

Position

Prof Jae-Hyun Jung

Harvard Medical School
Boston, MA
Dec 5, 2025

Postdoc Fellow in virtual reality and mobility studies - Jung Lab (Schepens Eye Research Institute, Harvard Medical School). Schepens Eye Research Institute/Mass. Eye and Ear, Harvard Medical School has an opening for one full-time postdoc fellow to work with Dr. Jae-Hyun Jung (https://scholar.harvard.edu/jaehyun_jung) in the Mobility Enhancement and Vision Rehabilitation Center of Excellence. The position is initially available for one year, with the possibility of extension for additional years. Applicants should hold a Ph.D. in any area related to visual perception (e.g., vision/neuroscience, computer science, electrical engineering, or optometry), including any of the following topics: motion perception, mobility simulation, stereoscopic depth perception, attention switching, or contrast/saliency modeling.

The successful candidate will make major contributions to a current NIH-funded project evaluating field expansion in mobility, as well as to other pilot projects related to AR/VR devices. Proficiency in programming for experiment design, experience with human subject studies, and good problem-solving skills are required. Experience with VR/AR devices, Unity/Unreal programming, or experience with people with vision impairment would be a plus. The position is open and available now. Salary will be according to the NIH scale for postdoctoral fellows. The start date is flexible, but ideally as soon as possible. Applications will be reviewed until the position is filled. Applications should include a CV, a letter of interest, and the expected date of availability in PDF. Please email applications to Jae-Hyun Jung (jaehyun_jung@meei.harvard.edu).

Schepens Eye Research Institute of Mass. Eye and Ear, Harvard Medical School is located in Boston, with a strong research community of faculty, postdoctoral fellows, and research assistants with interdisciplinary backgrounds. The position also provides the opportunity to participate in the Schepens postdoc/research training program covering scientific integrity and other general issues of interest to young scientists, and to develop additional collaborations with the research community at Schepens, which includes multiple Centers of Excellence in Harvard Medical School.

Position

Prof Virginie van Wassenhove

CEA, INSERM
Gif sur Yvette (near Paris), France
Dec 5, 2025

**Applications open until filled, ideally by the end of Feb. 2021.** Applications are invited for two full-time post-doctoral cognitive neuroscientists in the European consortium “Extended-personal reality: augmented recording and transmission of virtual senses through artificial-intelligence” (see abstract p.2). EXPERIENCE involves eight academic and industrial partners with complementary expertise in artificial intelligence, neuroscience, psychiatry, neuroimaging, MEG/EEG/physiological recording techniques, and virtual reality. The postdoctoral positions will be fully dedicated to the scientific foundation for the Extended-Personal Reality, a work package led by the CEA (Virginie van Wassenhove) in collaboration with the Universities of Pisa (Gaetano Valenza, Mateo Bianchi), Padova (Claudio Gentilli), and Roma Tor Vergata (Nicola Toschi), among others. Full information here: https://brainthemind.files.wordpress.com/2021/01/experience_postdoctoral_adds.pdf

Position

Dan Goodman

Imperial College London
London, UK
Dec 5, 2025

We have a research associate (postdoc) position to work on spatial audio processing and spatial hearing using methods from machine learning. The aim of the project is to design a method for interactively fitting individualised filters for spatial audio (HRTFs) to users in real time, based on their interactions with a VR/AR environment. We will use meta-learning algorithms to minimise the time required to individualise the filters, using simulated and real interactions with large databases of synthetic and measured filters. The project has the potential to become a very widely used tool in academia and industry, as existing methods for recording individualised filters are often expensive, slow, and not widely available to consumers. The role is initially available for up to 18 months, ideally starting on or soon after 1st January 2022 (although there is flexibility). The role is based in the Neural Reckoning group led by Dan Goodman in the Electrical and Electronic Engineering Department of Imperial College. You will work with other groups at Imperial, as well as with a wider consortium of universities and companies in the SONICOM project (€5.7m EU grant), led by Lorenzo Picinali at Imperial.
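
As a toy illustration of what interactive fitting can look like (not the SONICOM method itself, which uses meta-learning over large filter databases), one can iteratively prune a set of candidate HRTFs based on a listener's localization responses. Everything below (the database features, the error model, and the trial loop) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: each candidate HRTF is summarized by the
# localization cue it predicts for each source direction.
n_candidates, n_directions = 200, 72
database = rng.normal(size=(n_candidates, n_directions))

def localization_error(candidate, direction, response):
    # Hypothetical error model: squared mismatch between the listener's
    # response and the candidate filter's prediction for that direction.
    return (database[candidate, direction] - response) ** 2

# Interactive loop: on each trial, render a source from a random direction,
# record the listener's response, and periodically discard the worst-fitting
# half of the remaining candidates.
candidates = np.arange(n_candidates)
errors = np.zeros(n_candidates)
for trial in range(20):
    direction = rng.integers(n_directions)
    response = rng.normal()  # placeholder for the user's VR/AR interaction
    errors[candidates] += [localization_error(c, direction, response)
                           for c in candidates]
    if trial % 5 == 4 and len(candidates) > 4:
        order = np.argsort(errors[candidates])
        candidates = candidates[order[: len(candidates) // 2]]

print("selected HRTF candidate:", candidates[np.argmin(errors[candidates])])
```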

Position

Prof Georges Debrégeas

Sorbonne Université
Paris, France
Dec 5, 2025

Motile animals use sensory cues to navigate towards environments where they are more likely to obtain food, find mates, or avoid predators. Sensory-driven navigation relies on a closed-loop mechanism between motor action and motor-induced sensory inputs. At each instant, multiple sensory cues have to be integrated to bias the forthcoming motor command. The student will thoroughly and quantitatively characterize the behavioral algorithm underlying sensory-driven navigation in zebrafish larvae. The animals will be 5-10 days old, as this age is amenable to whole-brain functional imaging. The project will focus on both phototaxis (navigation towards a light source) and thermotaxis (navigation relative to a thermal gradient). Two experimental platforms will be set up. (1) Freely swimming larvae will be video-monitored and submitted to whole-field visual stimuli. The visual stimulation will be locked in real time to the animal’s orientation and/or position in space. In particular, this will make it possible to separately probe the effects of stereo (difference in illumination between the two eyes) and uniform (total illumination on both eyes) visual cues. For thermally driven navigation, the animal will be allowed to freely explore a large environment in which a constant thermal gradient is imposed. (2) The experiments will be reproduced in a virtual-reality setting. In this case, the animal is partially restrained in agarose with its tail free. Monitoring the tail movement will provide access to its virtual displacement, on which the visual and/or thermal stimuli will be locked. These behavioral experiments will be analysed in order to describe the animal’s navigation as a sensory-biased random walk. For more information see: https://www.smartnets-etn.eu/behavioral-characterization-of-sensory-driven-nagivation-in-zebrafish-larvae/
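
The closed-loop logic described in platform (1), with separable stereo and uniform components locked to the animal's orientation, reduces to a simple per-frame update rule. A minimal sketch; the stimulus law, the gains, and the tracking step are all hypothetical:

```python
import numpy as np

def illumination(heading_deg, source_deg, uniform_gain=1.0, stereo_gain=0.5):
    """Return (left, right) eye illumination for a virtual light source.

    Uniform component: total light depends on how directly the animal
    faces the source. Stereo component: a signed left/right imbalance
    depending on which side the source lies. Gains are hypothetical.
    """
    delta = np.deg2rad(heading_deg - source_deg)
    total = uniform_gain * (1 + np.cos(delta)) / 2   # in [0, 1]
    imbalance = stereo_gain * np.sin(delta) / 2      # signed L/R difference
    return total / 2 - imbalance, total / 2 + imbalance

# Closed-loop frames: re-read the tracked heading and update the
# whole-field stimulus (the tracking value here is a placeholder).
for frame in range(5):
    heading = 40.0 + 10.0 * frame  # placeholder for real-time tracking
    left, right = illumination(heading, source_deg=90.0)
    print(f"frame {frame}: L={left:.3f}, R={right:.3f}")
```

Setting stereo_gain or uniform_gain to zero isolates one cue, mirroring the separate probing of stereo and uniform components described above.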

Position · Computational Neuroscience

Prof Georges Debrégeas

Sorbonne Université
Paris, France
Dec 5, 2025

The zebrafish larva possesses a combination of assets – small dimensions, brain transparency, genetic tractability – which makes it a unique vertebrate model system to probe brain-scale neuronal dynamics. Using light-sheet microscopy, it is currently possible to monitor the activity of the entire brain at cellular resolution using functional calcium imaging, at about 1 full brain/second. The student will harness this unique opportunity to dissect the neural computation at play during sensory-driven navigation. 5-7 day-old larvae will be partially restrained in agarose, i.e. with their tail free. Real-time video monitoring of the tail beats will be used to infer virtual navigational parameters (displacement, reorientation); visual or thermal stimuli will be delivered to the larvae in a manner that simulates realistic navigation along light or thermal gradients. During this virtual sensory-driven navigation, brain activity will be monitored using two-photon light-sheet functional imaging. These experiments will provide rich datasets of whole-brain activity during a complex sensorimotor task. The network dynamics will be analysed in order to extract a finite number of brain states associated with various motor programs. Starting from spontaneous navigation phases (i.e. in the absence of varying sensory cues), the student will analyse how different sensory cues interfere with the network's endogenous dynamics to bias the probability of these different brain states and eventually favor movements along sensory gradients. For more information see: https://www.smartnets-etn.eu/whole-brain-network-dynamics-in-zebrafish-larvae-during-spontaneous-and-sensory-driven-virtual-navigation/

Position

Prof Iain Couzin

University of Konstanz
Konstanz, Germany
Dec 5, 2025

The application of Virtual Reality (VR) environments allows us to experimentally dissociate social input and responses, opening powerful avenues of inquiry into the dynamics of social influence and the physiological and neural mechanisms of collective behaviour. A key task for the nervous system is to make sense of complex streams of potentially informative sensory input, allowing appropriate, relatively low-dimensional motor actions to be taken, sometimes under considerable time constraints. The student will employ fully immersive ‘holographic’ VR to investigate the behavioural mechanisms by which freely-swimming zebrafish obtain both social and non-social sensory information from their surroundings, and how they use this to inform movement decisions. Immersive VR gives extremely precise control over the appearance, body postural changes, and motion of photorealistic virtual individuals, allowing them to interact dynamically with unrestrained real animals. Similar to a method that has transformed neuroscience — the dynamic patch clamp paradigm, in which inputs to neurons can be based on fast closed-loop measurements of their present behaviour — VR creates the possibility of a ‘dynamic social patch clamp’ paradigm in which we can develop, and interrogate, decision-making models by integrating virtual organisms into the same environment as real individuals. This tool will help us to infer the sensory basis of social influence, to establish the causality of influence in (small) social networks, to provide highly repeatable stimuli (allowing us to evaluate inter-individual and within-individual variation), and to interrogate the feedback loops inherent in social dynamics. For more information see: https://www.smartnets-etn.eu/using-immersive-virtual-reality-vr-to-determine-causal-relationships-in-animal-social-networks/

Position

Jens Peter Lindemann

Neurobiology Group of Bielefeld University
Bielefeld University
Dec 5, 2025

The PhD project is part of the DFG-funded project 'Cue integration by bumblebees during navigation in uncertain environments with multiple goal options: Behavioural analysis in virtual reality and computational modelling' in an international research team. Bumblebees can be trained, through appropriate rewards, to prefer certain places or objects in a virtual environment. In a close integration of two PhD projects, one focusing on VR behaviour experiments and the other on computational modelling and simulation, we are investigating the mechanisms underlying these learning and orientation abilities. The applicant is expected to design and implement models for the behavioural control of bumblebees, test them in computer simulations, contribute to VR experiments with bumblebees, and collaborate intensively with other project participants.

Position · Neuroscience

Burcu Ayşen Ürgen

Bilkent University
Ankara, Turkey
Dec 5, 2025

Bilkent University invites applications for multiple open-rank faculty positions in the Department of Neuroscience. The department plans to expand research activities in certain focus areas and accordingly seeks applications from promising or established scholars who have worked in the following or related fields: cellular/molecular/developmental neuroscience, with a strong emphasis on research involving animal models; and systems/cognitive/computational neuroscience, with a strong emphasis on research involving emerging data-driven approaches, including artificial intelligence, robotics, brain-machine interfaces, virtual reality, computational imaging, and theoretical modeling. Candidates working in those areas whose research has a neuroimaging component are particularly encouraged to apply. The Department’s interdisciplinary Graduate Program in Neuroscience, which offers Master's and PhD degrees, was established in 2014. The department is affiliated with Bilkent’s Aysel Sabuncu Brain Research Center (ASBAM) and the National Magnetic Resonance Research Center (UMRAM). Faculty affiliated with the department have access to state-of-the-art research facilities in these centers, including animal facilities, cellular/molecular laboratory infrastructure, psychophysics laboratories, eye-tracking laboratories, EEG laboratories, a human-robot interaction laboratory, and two MRI scanners (3T and 1.5T).

Position · Computer Science

N/A

University of Innsbruck
University of Innsbruck, Austria
Dec 5, 2025

The position is embedded in an attractive environment of existing activities in artificial intelligence, such as machine learning for robotics and computer vision, natural language processing, recommender systems, schedulers, virtual and augmented reality, and digital forensics. The candidate should engage in research and teaching in the general area of artificial intelligence. Examples of possible foci include: machine learning for pattern recognition, prediction, and decision making; data-driven, adaptive, learning, and self-optimizing systems; explainable and transparent AI; representation learning; generative models; neuro-symbolic AI; causality; distributed/decentralized learning; environmentally friendly, sustainable, data-efficient, and privacy-preserving AI; neuromorphic computing and hardware aspects; and knowledge representation, reasoning, and ontologies. Cooperation with research groups at the Department of Computer Science, the Research Areas, and in particular the Digital Science Center of the University, as well as with business, industry, and international research institutions, is expected. The candidate should reinforce or complement existing strengths of the Department of Computer Science.

Position

Brandon (Brad) Minnery

Kairos Research LLC
Dayton, Ohio
Dec 5, 2025

We currently have an opening for a full-time Senior Human-Computer Interaction Researcher whose work seeks to incorporate recent advances in generative large language models (LLMs). Specific research areas of interest include human-machine dialogue, human-AI alignment, trust (and over-trust) in AI, and the use of multimodal generative AI approaches in conjunction with other tools and techniques (e.g., virtual and/or augmented reality) to accelerate learning in real-world task environments. Additional related projects underway at Kairos involve the integration of generative AI into interactive dashboards for visualizing and interrogating social media narratives. The Human-Computer Interaction Researcher will play a significant role in supporting our growing body of work with DARPA, Special Operations Command, the Air Force Research Laboratory, and other federal sponsors.

Position

Dr. Roman Rosipal

Slovak Academy of Sciences
Slovak Academy of Sciences, Dubravska cesta 9, 841 04 Bratislava, Slovak Republic
Dec 5, 2025

We seek a PhD candidate to undertake research in the domain of Brain-Computer Interface and Virtual Reality for post-stroke rehabilitation, representing the exciting intersection of computational neuroscience, applied informatics, and artificial intelligence. The position is open within the European Doctoral Network for Neural Prostheses and Brain Research (DONUT) project at the Slovak Academy of Sciences in Bratislava, Slovakia, and supervised by Dr. Roman Rosipal.

Position

Louis Marti

Kairos Research
Dayton, Ohio
Dec 5, 2025

We currently have an opening for a full-time Senior Human-Computer Interaction Researcher whose work seeks to incorporate recent advances in generative large language models (LLMs). Specific research areas of interest include human-machine dialogue, human-AI alignment, trust (and over-trust) in AI, and the use of multimodal generative AI approaches in conjunction with other tools and techniques (e.g., virtual and/or augmented reality) to accelerate learning in real-world task environments. Additional related projects underway at Kairos involve the integration of generative AI into interactive dashboards for visualizing and interrogating social media narratives. The Human-Computer Interaction Researcher will play a significant role in supporting our growing body of work with DARPA, Special Operations Command, the Air Force Research Laboratory, and other federal sponsors.

Position

Prof. Thomas Wolbers

German Center for Neurodegenerative Diseases
Magdeburg, Germany
Dec 5, 2025

You will engage in a comprehensive investigation of the neural and cognitive processes underlying superior memory in aging. This research will involve:
- Designing and implementing behavioral experiments using advanced virtual reality (VR) technologies
- Conducting neuroimaging experiments using molecular imaging techniques and ultra-high field MRI (7T)
- Applying computational models to analyze data and generate predictive insights
This project is positioned at the intersection of aging research, advanced neuroimaging, and computational neuroscience, allowing you to contribute to an area of high societal relevance. For more details, please visit https://jobs.dzne.de/en/jobs/101384/phd-fmx-position-on-memory-and-spatial-coding-in-superagers-406620249

Position

Dominik R Bach

University of Bonn
Bonn, Germany
Dec 5, 2025

We are looking to hire a highly motivated and driven postdoctoral researcher to understand human cooperation & competition using virtual reality. This ambitious project combines concepts from behavioural game theory and theory of mind in an existing VR setup, and is supported by a dedicated VR developer. The goal of the position is to understand human cooperation in dangerous situations. The role includes conceptual design of classical game-theoretic dilemmata in naturalistic VR scenarios with experimentally controlled non-verbal information channels, conducting and analysing experiments using motion capture data and an established R package (https://github.com/bachlab/vrthreat), and publication of research and development results.

Position

Prof. Dominik R Bach

University of Bonn
Bonn, Germany
Dec 5, 2025

The Hertz Chair for Artificial Intelligence and Neuroscience at the University of Bonn is looking to recruit a postdoctoral fellow or PhD student to undertake high-quality research and produce high-impact publications in a collaborative research project investigating human escape using wearable magnetoencephalography with optically pumped magnetometers (OPM). The goal of the advertised position is to understand the neural control of human escape decisions in an immersive virtual reality (VR) environment using an OPM-compatible HMD, in collaboration with the Wellcome Platform for Naturalistic Neuroimaging, which is part of the FIL at the UCL Queen Square Institute of Neurology, London, UK. The role includes conceptual design of naturalistic VR scenarios that allow MEG recordings; planning, conducting, and analysing MEG experiments; building robust pipelines for MEG analysis in naturalistic settings; and publication of research and development results.

Seminar · Neuroscience · Recording

Multisensory perception in the metaverse

Polly Dalton
Royal Holloway, University of London
May 7, 2025

Seminar · Neuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 5, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
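
The state-machine analysis can be made concrete with an off-the-shelf hidden Markov model: fit a discrete latent-state model to the population activity and read off the transition structure. A minimal sketch using hmmlearn's GaussianHMM on a time-bins x neurons matrix (random data and a hypothetical number of states stand in for the recordings):

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
activity = rng.normal(size=(2000, 50))  # time bins x neurons (placeholder)

# Fit a Gaussian-emission HMM and segment the recording into discrete states.
model = hmm.GaussianHMM(n_components=6, covariance_type="diag", n_iter=50)
model.fit(activity)
states = model.predict(activity)  # most likely state per time bin

# The learned transition matrix plays the role of the "state machine":
# rows are states, entries are transition probabilities between them.
print(np.round(model.transmat_, 2))
```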

Seminar · Psychology

Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making

Izzy Wisher
Aarhus University
Feb 25, 2024

How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in examples of Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic cave art of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.

Seminar · Neuroscience · Recording

Visual-vestibular cue comparison for perception of environmental stationarity

Paul MacNeilage
University of Nevada, Reno
Oct 25, 2023


Seminar · Neuroscience

The Geometry of Decision-Making

Iain Couzin
University of Konstanz, Germany
May 23, 2023

Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts, and zebrafish (representing aerial, terrestrial, and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’) even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.

Seminar · Neuroscience · Recording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 21, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
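
The distance-dependent predictions are typically quantified with a population-vector analysis: correlate activity across pairs of positions and average by distance. A minimal analysis sketch, with random rate maps standing in for the imaging data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_cells = 100, 500
rate_maps = rng.random((n_positions, n_cells))  # placeholder tuning curves

# Correlate population vectors at every pair of track positions, then
# average by spatial separation: compression predicts high similarity
# nearby, pattern separation predicts decorrelation at intermediate
# distances.
corr = np.corrcoef(rate_maps)  # positions x positions similarity matrix
dist = np.abs(np.subtract.outer(np.arange(n_positions), np.arange(n_positions)))
curve = [corr[dist == d].mean() for d in range(1, n_positions)]
print(np.round(curve[:5], 3))
```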

Seminar · Neuroscience

A specialized role for entorhinal attractor dynamics in combining path integration and landmarks during navigation

Malcolm Campbell
Harvard
Mar 8, 2023

During navigation, animals estimate their position using path integration and landmarks. In a series of two studies, we used virtual reality and electrophysiology to dissect how these inputs combine to generate the brain’s spatial representations. In the first study (Campbell et al., 2018), we focused on the medial entorhinal cortex (MEC) and its set of navigationally-relevant cell types, including grid cells, border cells, and speed cells. We discovered that attractor dynamics could explain an array of initially puzzling MEC responses to virtual reality manipulations. This theoretical framework successfully predicted both MEC grid cell responses to additional virtual reality manipulations, as well as mouse behavior in a virtual path integration task. In the second study (Campbell*, Attinger* et al., 2021), we asked whether these principles generalize to other navigationally-relevant brain regions. We used Neuropixels probes to record thousands of neurons from MEC, primary visual cortex (V1), and retrosplenial cortex (RSC). In contrast to the prevailing view that “everything is everywhere all at once,” we identified a unique population of MEC neurons, overlapping with grid cells, that became active with striking spatial periodicity while head-fixed mice ran on a treadmill in darkness. These neurons exhibited unique cue-integration properties compared to other MEC, V1, or RSC neurons: they remapped more readily in response to conflicts between path integration and landmarks; they coded position prospectively as opposed to retrospectively; they upweighted path integration relative to landmarks in conditions of low visual contrast; and as a population, they exhibited a lower-dimensional activity structure. Based on these results, our current view is that MEC attractor dynamics play a privileged role in resolving conflicts between path integration and landmarks during navigation. Future work should include carefully designed causal manipulations to rigorously test this idea, and expand the theoretical framework to incorporate notions of uncertainty and optimality.

Seminar · Neuroscience · Recording

Does subjective time interact with the heart rate?

Saeedeh Sadegh
Cornell University, New York
Jan 24, 2023

Decades of research have investigated the relationship between the perception of time and heart rate, with often mixed results. In search of such a relationship, I will present my journey across two projects: from time perception in a realistic VR experience of crowded subway trips on the order of minutes (project 1), to the perceived duration of sub-second white-noise tones (project 2). Heart rate had multiple concurrent relationships with subjective temporal distortions for the sub-second tones, while the effects were weak or absent for the supra-minute subway trips. What does the heart have to do with sub-second time perception? We addressed this question with a cardiac drift-diffusion model, demonstrating the sensory accumulation of temporal evidence as a function of heart rate.
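
A cardiac drift-diffusion model can be sketched as a bounded accumulator whose drift is modulated by instantaneous heart rate. This is purely illustrative; the coupling term and all parameters are hypothetical, not the fitted model from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def cardiac_ddm(heart_rate_bpm, base_drift=1.0, coupling=0.01,
                noise=1.0, bound=1.0, dt=0.001):
    """Accumulate temporal evidence until a bound is reached.

    Hypothetical coupling: drift scales with the deviation of heart rate
    from 60 bpm, so a faster heart accumulates subjective time faster.
    """
    drift = base_drift + coupling * (heart_rate_bpm - 60.0)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t  # decision time in seconds

for hr in (50, 60, 80):
    times = [cardiac_ddm(hr) for _ in range(200)]
    print(f"{hr} bpm: mean decision time {np.mean(times):.3f} s")
```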

Seminar · Neuroscience · Recording

Neurocognitive mechanisms of enhanced implicit temporal processing in action video game players

Francois R. Foerster
Giersch Lab, INSERM U1114
Feb 22, 2022

Playing action video games involves both explicit (conscious) and implicit (non-conscious) expectations of timed events, such as the appearance of foes. While studies have revealed that explicit attention skills are improved in action video game players (VGPs), their implicit skills remained untested. To this end, we investigated explicit and implicit temporal processing in VGPs and non-VGPs (control participants). In our variable-foreperiod task, participants were immersed in virtual reality and instructed to respond to a visual target appearing at variable delays after a cue. I will present behavioral, oculomotor, and EEG data and discuss possible markers of the implicit passage of time and of explicit temporal attention processing. All evidence indicates that VGPs have enhanced implicit skills for tracking the passage of time, which does not require conscious attention. Thus, action video game play may improve a form of temporal processing found to be altered in psychopathologies such as schizophrenia. Could digital (game-based) interventions help remediate temporal processing deficits in psychiatric populations?

Seminar · Neuroscience · Recording

The effect of gravity on the perception of distance and self-motion: a multisensory perspective

Laurence Harris
Centre for Vision Research, York University, Toronto
Feb 9, 2022

Gravity is a constant in our lives. It provides an internalized reference to which all other perceptions are related. We can experimentally manipulate the relationship between physical gravity and other cues to the direction of “up” using virtual reality - with either HMDs or specially built tilting environments - to explore how gravity contributes to perceptual judgements. The effect of gravity can also be cancelled by running experiments on the International Space Station in low Earth orbit. Changing orientation relative to gravity - or even just perceived orientation - affects your perception of how far away things are (they appear closer when supine or prone). Cancelling gravity altogether has a similar effect. Changing orientation also affects how much visual motion is needed to perceive a particular travel distance (you need less when supine or prone). Adapting to zero gravity has the opposite effect (you need more). These results will be discussed in terms of their practical consequences and the multisensory processes involved, in particular the response to visual-vestibular conflict.

Seminar · Neuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 8, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
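
The canonical computation behind such results is reliability-weighted averaging: under Gaussian noise, the statistically optimal fused estimate weights each cue by its inverse variance, and the fused estimate is more reliable than either cue alone. A minimal sketch:

```python
def fuse(s_a, var_a, s_v, var_v):
    """Maximum-likelihood fusion of two noisy Gaussian cue estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # reliability weight
    s_hat = w_a * s_a + (1 - w_a) * s_v           # fused estimate
    var_hat = 1 / (1 / var_a + 1 / var_v)         # fused variance (lower)
    return s_hat, var_hat

# A reliable visual cue dominates a noisy auditory one (cf. ventriloquism).
print(fuse(s_a=10.0, var_a=4.0, s_v=2.0, var_v=1.0))  # -> (3.6, 0.8)
```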

Seminar · Neuroscience

Online "From Bench to Bedside" Neurosciences Symposium

Anissa Kempf (BZ), Prof. Urs Fischer (USB)
Feb 3, 2022

Two keynote lectures: “Homeostatic control of sleep in the fly” and “Management of Intracerebral Haemorrhage – where is the evidence?”, and two sessions: “Cortical top-down information processing” and “Virtual/augmented reality and its implications for the clinic”.

Seminar · Neuroscience · Recording

Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex

Alexander Attinger
Giocomo lab, Stanford University
Jan 11, 2022

During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.

Seminar · Neuroscience · Recording

Deforming the metric of cognitive maps distorts memory

Jacob Bellmund
Doeller lab, MPI CBS and the Kavli Institute
Jan 11, 2022

Environmental boundaries anchor the cognitive maps that support memory. However, trapezoidal boundary geometry distorts the regular firing patterns of entorhinal grid cells, which have been proposed to provide a metric for cognitive maps. Here, we test the impact of trapezoidal boundary geometry on human spatial memory using immersive virtual reality. Consistent with the reduced regularity of grid patterns in rodents and with a grid-cell model based on the eigenvectors of the successor representation, human positional memory was degraded in a trapezoid compared to a square environment; an effect particularly pronounced in the trapezoid’s narrow part. Congruent with spatial frequency changes of eigenvector grid patterns, distance estimates between remembered positions were persistently biased, revealing distorted memory maps that explained behavior better than the objective maps. Our findings demonstrate that environmental geometry affects human spatial memory similarly to rodent grid cell activity — thus strengthening the putative link between grid cells and behavior, along with their cognitive functions beyond navigation.
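
The grid-cell model referenced here derives grid-like patterns from the eigenvectors of the successor representation (SR): for a random-walk transition matrix T and discount gamma, the SR is M = (I - gamma*T)^-1, and its leading eigenvectors are spatially periodic patterns shaped by the boundary. A minimal sketch on a small rectangular arena (arena size and gamma are arbitrary choices; a trapezoid would be obtained by masking out cells, which reshapes these modes):

```python
import numpy as np

# Random-walk transition matrix on a small rectangular arena.
w, h = 20, 10
n = w * h
T = np.zeros((n, n))
for x in range(w):
    for y in range(h):
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h]
        for nx, ny in nbrs:
            T[y * w + x, ny * w + nx] = 1.0 / len(nbrs)

# Successor representation and its eigenvectors: the leading non-constant
# modes are spatially periodic, grid-like patterns whose spatial frequency
# depends on the arena's boundary geometry.
gamma = 0.99
M = np.linalg.inv(np.eye(n) - gamma * T)
evals, evecs = np.linalg.eig(M)
order = np.argsort(-evals.real)
pattern = evecs[:, order[3]].real.reshape(h, w)  # one low-frequency mode
print(np.round(pattern[:3, :6], 3))
```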

Seminar · Neuroscience

Body Representation in Virtual Reality

Mel Slater
Universitat de Barcelona
Jan 11, 2022

How the brain represents the body is a fundamental question in cognitive neuroscience. Experimental studies are difficult because ‘the body is always there’ (William James). In recent years, immersive virtual reality techniques have been introduced that deliver apparent changes to the body, extending earlier techniques such as the rubber hand illusion, or that substitute the whole body with a virtual one visually collocated with the real body and seen from a normal first-person perspective. This talk will introduce these techniques and concentrate on how changing the body can change the mind and behaviour, especially in the context of combatting aggression based on gender or race.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain

Guangyao Qi
Institute of Neuroscience, Chinese Academy of Sciences
Dec 2, 2021

Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
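
The inference attributed to the monkeys has a standard Bayesian formulation (one shared source versus two independent sources): the posterior probability of a common cause falls off with the spatial disparity between the cues. A minimal sketch of that posterior under Gaussian assumptions; this is the generic textbook model, not the authors' fitted one, and all variances are hypothetical:

```python
import numpy as np

def gauss(x, var):
    return np.exp(-0.5 * x ** 2 / var) / np.sqrt(2 * np.pi * var)

def p_common(x_v, x_p, var_v=1.0, var_p=4.0, var_0=25.0, prior_c=0.5):
    """Posterior probability that visual and proprioceptive cues share a source.

    Likelihoods marginalize over the unknown source position, which has a
    zero-mean Gaussian prior with variance var_0.
    """
    denom = var_v * var_p + var_v * var_0 + var_p * var_0
    like_c1 = (np.exp(-0.5 * ((x_v - x_p) ** 2 * var_0 + x_v ** 2 * var_p
                              + x_p ** 2 * var_v) / denom)
               / (2 * np.pi * np.sqrt(denom)))                    # common source
    like_c2 = gauss(x_v, var_v + var_0) * gauss(x_p, var_p + var_0)  # separate
    return like_c1 * prior_c / (like_c1 * prior_c + like_c2 * (1 - prior_c))

# Larger visuo-proprioceptive disparity -> lower probability of integration.
for disparity in (0.0, 2.0, 6.0):
    print(disparity, round(p_common(0.0, disparity), 3))
```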

Seminar · Neuroscience · Recording

NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1

Dori Grijseels (they/them)
University of Sussex
Dec 1, 2021

Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the hippocampus of mice, during a virtual reality object location recognition task. During locomotion, the cells show a similar level of activity as control cells, but their activity increases during rest, when this population of cells shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location, even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation of activity between the low-frequency oscillation (LFO) and the hippocampal cells in the 0.1-0.4 Hz band during rest than during locomotion. However, the relationship between the LFO and calcium signals varied between electrodes, suggesting a localized effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to further explore this. These data revealed localised low-frequency oscillations in CA1 and DG during rest. Overall, we show a novel population of hippocampal cells, and a novel oscillatory band of activity in the hippocampus during rest.

Seminar · Neuroscience · Recording

The Geometry of Decision-Making

Iain Couzin
Max Planck Institute of Animal Behavior & University of Konstanz
Oct 7, 2021

Choosing among spatially distributed options is a central challenge for animals, from deciding among alternative potential food sources or refuges, to choosing with whom to associate. Here, using an integrated theoretical and experimental approach (employing immersive Virtual Reality), with both invertebrate and vertebrate models—the fruit fly, desert locust and zebrafish—we consider the recursive interplay between movement and collective vectorial integration in the brain during decision-making regarding options (potential ‘targets’) in space. We reveal that the brain repeatedly breaks multi-choice decisions into a series of abrupt (critical) binary decisions in space-time where organisms switch, spontaneously, from averaging vectorial information among, to suddenly excluding one of, the remaining options. This bifurcation process repeats until only one option—the one ultimately selected—remains. Close to each bifurcation the ‘susceptibility’ of the system exhibits a sharp increase, inevitably causing small differences among the remaining options to become amplified; a property that both comes ‘for free’ and is highly desirable for decision-making. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.

Seminar · Open Source · Recording

Creating and controlling visual environments using BonVision

Aman Saleem
University College London
Sep 14, 2021

Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to a few laboratories worldwide. We developed BonVision as easy-to-use open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. As the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.

Seminar · Open Source · Recording

PiVR: An affordable and versatile closed-loop platform to study unrestrained sensorimotor behavior

David Tadres and Matthieu Louis
University of California, Santa Barbara
Sep 2, 2021

PiVR is a system that allows experimenters to immerse small animals into virtual realities. The system tracks the position of the animal and presents light stimulation according to predefined rules, thus creating a virtual landscape in which the animal can behave. By using optogenetics, we have used PiVR to present fruit fly larvae with virtual olfactory realities, adult fruit flies with a virtual gustatory reality and zebrafish larvae with a virtual light gradient. PiVR operates at high temporal resolution (70Hz) with low latencies (<30 milliseconds) while being affordable (<US$500) and easy to build (<6 hours). Through extensive documentation (www.PiVR.org), this tool was designed to be accessible to a wide public, from high school students to professional researchers studying systems neuroscience in academia.
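
The rule-based closed loop described above has a simple skeleton: at a fixed frame rate, read the tracked position, evaluate a stimulus rule, and update the light. A minimal sketch of such a loop; the camera and LED calls are placeholders, not PiVR's actual API (see www.PiVR.org for that):

```python
import time

FRAME_RATE = 70            # Hz, matching the rate quoted above
FRAME_DT = 1.0 / FRAME_RATE

def get_position():        # placeholder for camera-based animal tracking
    return (0.0, 0.0)

def set_light(intensity):  # placeholder for LED intensity control
    pass

def virtual_gradient(x, y, peak=(10.0, 0.0), width=5.0):
    """Stimulus rule: light falls off with distance from a virtual source,
    creating a gradient the animal can climb or descend."""
    d2 = (x - peak[0]) ** 2 + (y - peak[1]) ** 2
    return max(0.0, 1.0 - d2 / width ** 2)

t_next = time.monotonic()
for _ in range(FRAME_RATE):          # one second of closed-loop frames
    x, y = get_position()
    set_light(virtual_gradient(x, y))
    t_next += FRAME_DT
    time.sleep(max(0.0, t_next - time.monotonic()))
```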

Seminar · Neuroscience

Neural circuits that support robust and flexible navigation in dynamic naturalistic environments

Hannah Haberkern
HHMI Janelia Research Campus
Aug 15, 2021

Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.

Seminar · Neuroscience

From real problems to beast machines: the somatic basis of selfhood

Anil Seth
University of Sussex
Jun 29, 2021

At the foundation of human conscious experience lie basic embodied experiences of selfhood – experiences of simply ‘being alive’. In this talk, I will make the case that this central feature of human existence is underpinned by predictive regulation of the interior of the body, using the framework of predictive processing, or active inference. I start by showing how conscious experiences of the world around us can be understood in terms of perceptual predictions, drawing on examples from psychophysics and virtual reality. Then, turning the lens inwards, we will see how the experience of being an ‘embodied self’ rests on control-oriented predictive (allostatic) regulation of the body’s physiological condition. This approach implies a deep connection between mind and life, and provides a new way to understand the subjective nature of consciousness as emerging from systems that care intrinsically about their own existence. Contrary to the old doctrine of Descartes, we are conscious because we are beast machines.

Seminar · Neuroscience

The effect of gravity on the perception of distance and self-motion

Laurence Harris
Centre for Vision Research, York University, Toronto, Canada
Apr 18, 2021

Gravity is a constant in our lives. It provides an internalized reference to which all other perceptions are related. We can experimentally manipulate the relationship between physical gravity and other cues to the direction of “up” using virtual reality - with either HMDs or specially built tilting environments - to explore how gravity contributes to perceptual judgements. The effect of gravity can also be cancelled by running experiments on the International Space Station in low Earth orbit. Changing orientation relative to gravity - or even just perceived orientation - affects your perception of how far away things are (they appear closer when supine or prone). Cancelling gravity altogether has a similar effect. Changing orientation also affects how much visual motion is needed to perceive a particular travel distance (you need less when supine or prone). Adapting to zero gravity has the opposite effect (you need more). These results will be discussed in terms of their practical consequences and the multisensory processes involved, in particular the response to visual-vestibular conflict.

SeminarNeuroscienceRecording

Cortical networks for flexible decisions during spatial navigation

Christopher Harvey
Harvard University
Feb 18, 2021

My lab seeks to understand how the mammalian brain performs the computations that underlie cognitive functions, including decision-making, short-term memory, and spatial navigation, at the level of the nervous system’s building blocks: cell types and neural populations organized into circuits. We have developed methods to measure, manipulate, and analyze neural circuits across various spatial and temporal scales, including technology for virtual reality, optical imaging, optogenetics, intracellular electrophysiology, molecular sensors, and computational modeling. I will present recent work that uses large-scale calcium imaging to reveal the functional organization of the mouse posterior cortex for flexible decision-making during spatial navigation in virtual reality. I will also discuss work that uses optogenetics and calcium imaging during a variety of decision-making tasks to highlight how cognitive experience and context greatly alter the cortical circuits necessary for navigation decisions.

SeminarNeuroscience

From oscillations to laminar responses - characterising the neural circuitry of autobiographical memories

Eleanor Maguire
Wellcome Centre for Human Neuroimaging at UCL
Nov 30, 2020

Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These often decades-old personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age. I will end by describing our progress on leveraging an exciting new technology – optically pumped MEG (OP-MEG) – which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.

SeminarNeuroscience

Experience dependent changes of sensory representation in the olfactory cortex

Antonia Marin Burgin
Biomedicine Research Institute of Buenos Aires
Nov 17, 2020

Sensory representations are typically thought of as neuronal activity patterns that encode physical attributes of the outside world. However, increasing evidence shows that as animals learn the association between a sensory stimulus and its behavioral relevance, stimulus representations in sensory cortical areas can change. In this seminar I will present recent experiments from our lab showing that activity in the olfactory piriform cortex (PC) of mice encodes not only odor information, but also non-olfactory variables associated with the behavioral task. By developing an associative olfactory learning task, in which animals learn to associate a particular context with an odor and a reward, we were able to record the activity of multiple neurons as the animal runs in a virtual reality corridor. Analyzing the population activity dynamics using Principal Component Analysis, we find population trajectories evolving through time that discriminate between different trial types. Using Generalized Linear Models, we further dissect the contribution of different sensory and non-sensory variables to the modulation of PC activity. Interestingly, the experiments show that variables related to both sensory and non-sensory aspects of the task (e.g., odor, context, reward, licking, sniffing rate and running speed) differentially modulate PC activity, suggesting that the PC adapts odor processing depending on experience and behavior.
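The abstract names two standard analyses: PCA of population activity to extract trial-type trajectories, and per-neuron GLMs relating activity to task variables. The sketch below shows that generic pipeline with scikit-learn on synthetic stand-in data; the array shapes, variable names, and regularization choice are all assumptions for illustration, not details from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import PoissonRegressor

# Hypothetical data: piriform population activity, shaped (time bins,
# neurons), plus per-bin task variables as the design matrix columns
# (e.g., odor, context, reward, licking, sniff rate, running speed).
n_bins, n_neurons = 200, 80
rng = np.random.default_rng(0)
activity = rng.poisson(2.0, size=(n_bins, n_neurons)).astype(float)
X = rng.normal(size=(n_bins, 6))

# 1) Population trajectories: project activity onto the leading principal
#    components; trajectories can then be compared across trial types.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(activity)   # (time bins, 3)

# 2) GLM per neuron: fit a Poisson regression to estimate how strongly
#    each task variable modulates that neuron's activity.
weights = np.zeros((n_neurons, X.shape[1]))
for i in range(n_neurons):
    glm = PoissonRegressor(alpha=1e-3).fit(X, activity[:, i])
    weights[i] = glm.coef_
```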

ePoster

Analysis of gaze control neuronal circuits combining behavioural experiments with a novel virtual reality platform

Carmen Núñez-González, Marta Barandela, Cecilia Jiménez-López, Abraham Segade, Juan Pérez-Fernández

FENS Forum 2024

ePoster

Comparison of acetylcholine release in the mouse cerebral cortex in response to standard visual stimuli vs dynamic virtual reality environment

Julie Azrak, Hossein Sedighi, Jose Daniel Tirado Ramirez, Yulong Li, Elvire Vaucher

FENS Forum 2024

ePoster

Effectiveness of action observation treatment integrated with virtual reality in the motor rehabilitation of stroke patients: A randomized controlled clinical trial

Antonino Errante, Donatella Saviola, Matteo Cantoni, Katia Iannuzzelli, Settimio Ziccarelli, Fabrizio Togni, Marcello Simonini, Carolina Malchiodi, Debora Bertoni, Maria Grazia Inzaghi, Francesca Bozzetti, Annamaria Quarenghi, Paola Quarenghi, Daniele Bosone, Leonardo Fogassi, Giovanni Pietro Salvi, Antonio De Tanti

FENS Forum 2024

ePoster

Feasibility and compatibility of combining virtual reality and transcranial magnetic stimulation

Franka Arden, Phil Henneken, Andreas Vlachos

FENS Forum 2024

ePoster

Hippocampal place field formation by sparse, local learning of visual features in virtual reality

Olivier Ulrich, Lorenzo Posani, Attila Losonczy, Stefano Fusi, James Priestley

FENS Forum 2024

ePoster

The impact of virtual reality on postoperative cognitive impairment and pain perception after surgery

Sebastian Isac, Andrada-Georgiana Badea, Ana-Maria Zagrean, Elisabeta Nita, Diana Irene Mihai, Damiana Ojog, Pavel Bogdan, Teodora Isac, Gabriela Droc

FENS Forum 2024

ePoster

Modulation of brain activity by environmental design: A study using EEG and virtual reality

Jesus S. Garcia Salinas, Anna Wroblewska, Katarzyna Zielonko-Jung, Michał Kucewicz

FENS Forum 2024

ePoster

Multimodal activity of mouse auditory cortex during audio-visual-motor virtual reality

Alessandro La Chioma, David Schneider

FENS Forum 2024

ePoster

Virtual reality empowered deep learning analysis of brain cells

Doris Kaltenecker, Rami Al-Maskari, Moritz Negwer, Luciano Hoeher, Florian Kofler, Shan Zhao, Mihail Todorov, Zhouyi Rong, Johannes Christian Paetzold, Benedikt Wiestler, Marie Piraud, Daniel Rueckert, Julia Geppert, Pauline Morigny, Maria Rohm, Bjoern H. Menze, Stephan Herzig, Mauricio Berriel Diaz, Ali Ertürk

FENS Forum 2024

ePoster

Visual feedback manipulation in virtual reality alters movement-evoked pain perception in chronic low back pain

Jaime Jordán López, María D. Arguisuelas, Julio Doménech, María L. Peñalver-Barrios, Marta Miragall, Rocío Herrero, Rosa M. Baños, Juan J. Amer-Cuenca, Juan F. Lisón

FENS Forum 2024

ePoster

‘What a Mistake!’: Prediction error modulates explicit and visuomotor predictions in virtual reality

Yonatan Stern

Neuromatch 5