
Augmented Reality

Discover seminars, jobs, and research tagged with augmented reality across World Wide.
10 curated items · 5 Positions · 5 Seminars
Position

Professor Fiona Newell

Multisensory Cognition Lab, Institute of Neuroscience, Trinity College Dublin
Dublin, Ireland
Dec 5, 2025

Applications are invited for the role of Research Assistant at the Institute of Neuroscience in Trinity College (TCIN) to work in the Multisensory Cognition Lab headed by Prof. Fiona Newell. The Multisensory Cognition Lab is broadly interested in all aspects of human perception based on vision, hearing and touch. The Research Assistant will join a project investigating object recognition in children and in adults. The research adopts a multidisciplinary approach involving cognitive neuroscience, statistical modelling, psychophysics and computer science, particularly Virtual Reality. The candidate will participate in regular lab and collaborator meetings and learn about diverse methodologies in perceptual science. The position is funded for 1 year, with the possibility of continuation for another year. Successful candidates are expected to take up the position immediately, and ideally no later than March 2022.

The Research Assistant will join a research team of PhD students and postdoctoral researchers, and will have the opportunity to collaborate with colleagues within the Institute of Neuroscience and with industrial partners. The group has a dedicated laboratory equipped with state-of-the-art facilities for behavioural testing, including eye tracking and VR technology (HTC Vive and Oculus). TCIN also houses a research-dedicated MRI scanner, accessible to all principal investigators and their groups.

The Research Assistant will be expected to support the administration and management of the project (e.g. ethical approval, project website, social media, recruitment of participants, setting up data storage protocols, etc.). They will also be required to help with the research, including stimulus creation (i.e. collating and building a database of visual, haptic and auditory stimuli for experiments on multisensory perception), participant testing and data collection. The Research Assistant will also be involved in the initial stages of setting up and testing an eye tracker (Tobii or EyeLink) and VR/AR apparatus (Oculus or HTC Vive) with other team members and collaborators.

Position

Dan Goodman

Imperial College London
London, UK
Dec 5, 2025

We have a research associate (postdoc) position to work on spatial audio processing and spatial hearing using methods from machine learning. The aim of the project is to design a method for interactively fitting individualised filters for spatial audio (HRTFs) to users in real time, based on their interactions with a VR/AR environment. We will use meta-learning algorithms to minimise the time required to individualise the filters, using simulated and real interactions with large databases of synthetic and measured filters. The project has the potential to become a very widely used tool in academia and industry, as existing methods for recording individualised filters are often expensive, slow, and not widely available to consumers. The role is initially available for up to 18 months, ideally starting on or soon after 1st January 2022 (although there is flexibility). The role is based in the Neural Reckoning group led by Dan Goodman in the Electrical and Electronic Engineering Department of Imperial College. You will work with other groups at Imperial, as well as with a wider consortium of universities and companies in the SONICOM project (€5.7m EU grant), led by Lorenzo Picinali at Imperial.
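The interactive-fitting idea described above can be sketched, under heavy simplification, as a search over a database of candidate filters driven by user feedback. Everything below (the function names, the toy impulse responses, and the rating callback standing in for the VR/AR interaction) is hypothetical illustration, not the SONICOM method; the project's actual contribution is meta-learning a policy that chooses which candidate to present next, rather than exhaustively rating every filter as this baseline does.

```python
def fit_hrtf(candidates, rate_fn):
    """Exhaustive baseline: rate every candidate HRTF and keep the best.

    candidates: dict mapping filter IDs to (toy) impulse responses.
    rate_fn:    stand-in for the user's VR/AR interaction; returns a
                higher score when localisation with that filter works well.
    """
    scores = {fid: rate_fn(ir) for fid, ir in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def make_rater(true_ir):
    # Hypothetical proxy for localisation accuracy: negative squared
    # distance between the candidate and the listener's "true" filter.
    return lambda ir: -sum((a - b) ** 2 for a, b in zip(ir, true_ir))

# Tiny illustrative database of three candidate filters.
db = {"a": [1.0, 0.2, 0.0], "b": [0.9, 0.5, 0.1], "c": [0.1, 0.9, 0.4]}
best_id, score = fit_hrtf(db, make_rater([0.85, 0.45, 0.1]))
```

A meta-learned version would replace the exhaustive loop with a policy that proposes the most informative candidate at each step, which is what makes real-time individualisation feasible with large measured databases.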

Position · Computer Science

N/A

University of Innsbruck
Innsbruck, Austria
Dec 5, 2025

The position is embedded in an attractive environment of existing activities in artificial intelligence, such as machine learning for robotics and computer vision, natural language processing, recommender systems, schedulers, virtual and augmented reality, and digital forensics. The candidate should engage in research and teaching in the general area of artificial intelligence. Examples of possible foci include: machine learning for pattern recognition, prediction and decision making; data-driven, adaptive, learning and self-optimizing systems; explainable and transparent AI; representation learning; generative models; neuro-symbolic AI; causality; distributed/decentralized learning; environmentally friendly, sustainable, data-efficient and privacy-preserving AI; neuromorphic computing and hardware aspects; knowledge representation, reasoning and ontologies. Cooperation with research groups at the Department of Computer Science, the Research Areas and in particular the Digital Science Center of the University, as well as with business, industry and international research institutions, is expected. The candidate should reinforce or complement existing strengths of the Department of Computer Science.

Position

Brandon (Brad) Minnery

Kairos Research LLC
Dayton, Ohio
Dec 5, 2025

We currently have an opening for a full-time Senior Human-Computer Interaction Researcher whose work seeks to incorporate recent advances in generative large language models (LLMs). Specific research areas of interest include human-machine dialogue, human-AI alignment, trust (and over-trust) in AI, and the use of multimodal generative AI approaches in conjunction with other tools and techniques (e.g., virtual and/or augmented reality) to accelerate learning in real-world task environments. Additional related projects underway at Kairos involve the integration of generative AI into interactive dashboards for visualizing and interrogating social media narratives. The Human-Computer Interaction Researcher will play a significant role in supporting our growing body of work with DARPA, Special Operations Command, the Air Force Research Laboratory, and other federal sponsors.

Position

Louis Marti

Kairos Research
Dayton, Ohio
Dec 5, 2025

We currently have an opening for a full-time Senior Human-Computer Interaction Researcher whose work seeks to incorporate recent advances in generative large language models (LLMs). Specific research areas of interest include human-machine dialogue, human-AI alignment, trust (and over-trust) in AI, and the use of multimodal generative AI approaches in conjunction with other tools and techniques (e.g., virtual and/or augmented reality) to accelerate learning in real-world task environments. Additional related projects underway at Kairos involve the integration of generative AI into interactive dashboards for visualizing and interrogating social media narratives. The Human-Computer Interaction Researcher will play a significant role in supporting our growing body of work with DARPA, Special Operations Command, the Air Force Research Laboratory, and other federal sponsors.

Seminar · Neuroscience · Recording

Multisensory perception in the metaverse

Polly Dalton
Royal Holloway, University of London
May 7, 2025

Seminar · Neuroscience · Recording

Seeing with technology: Exchanging the senses with sensory substitution and augmentation

Michael Proulx
University of Bath
Sep 29, 2021

What is perception? Our sensory modalities transduce information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed at any given moment, and the mechanisms of attention focus the limited resources of the mind on some information at the expense of the rest. My research has progressed from first examining visual perception and attention to now examining how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses, both on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.
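The kind of sensory substitution described here, "seeing" with the ears, can be illustrated with a minimal image-to-sound sweep in the style of classic vision-to-audition devices: image columns are scanned left to right through time, vertical position maps to pitch (top = high), and pixel brightness maps to loudness. This is a generic sketch of the principle, not code from the speaker's research; the parameter choices below are arbitrary.

```python
def image_to_soundscape(image, f_min=200.0, f_max=8000.0, sweep_s=1.0):
    """Map a grayscale image (rows x cols, values 0-1) to a list of
    (onset_time_s, frequency_hz, amplitude) tones.

    Columns are swept left to right across `sweep_s` seconds; row
    height sets pitch on a log scale (roughly matching pitch
    perception); brightness sets loudness. Silent pixels are skipped.
    """
    n_rows, n_cols = len(image), len(image[0])
    dt = sweep_s / n_cols  # time allotted to each column
    tones = []
    for col in range(n_cols):
        for row in range(n_rows):
            amp = image[row][col]
            if amp == 0:
                continue
            # top row -> frac 1.0 -> f_max; bottom row -> frac 0.0 -> f_min
            frac = 1.0 - row / max(n_rows - 1, 1)
            freq = f_min * (f_max / f_min) ** frac
            tones.append((col * dt, freq, amp))
    return tones
```

Feeding the tone list to any synthesizer yields a one-second "soundscape" per image; with training, users of such devices learn to decode shape and location from these sweeps.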

Seminar · Open Source · Recording

Creating and controlling visual environments using BonVision

Aman Saleem
University College London
Sep 14, 2021

Real-time rendering of closed-loop visual environments is important for next-generation studies of brain function and behaviour, but it is often prohibitively difficult for non-experts to implement and is limited to a few laboratories worldwide. We developed BonVision as easy-to-use open-source software for displaying virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and can support new experimental designs in other animal models of vision. Because the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks and communication with devices for behavioural and physiological measurement and manipulation.
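BonVision workflows are built in the Bonsai visual programming language, so a one-to-one code sample is not practical here. As a language-agnostic sketch, the closed-loop principle it supports (a behavioural signal is read, the virtual scene is updated, and the result is rendered on the very next frame) can be reduced to a hypothetical update loop like the following; the corridor geometry, gain, and tick rate are invented for illustration.

```python
def closed_loop_step(position, wheel_speed, gain=1.0, dt=0.01, corridor_len=100.0):
    """One tick of a closed-loop virtual corridor: the animal's wheel
    speed updates its virtual position, which in turn determines what
    the renderer draws on the next frame (here reduced to a wrapped
    1-D coordinate along the corridor)."""
    return (position + gain * wheel_speed * dt) % corridor_len

def run_session(speeds, gain=1.0):
    """Simulate a session from a sequence of wheel-speed samples,
    returning the virtual position after each frame."""
    pos, trace = 0.0, []
    for v in speeds:
        pos = closed_loop_step(pos, v, gain=gain)
        trace.append(pos)
    return trace
```

In a real BonVision workflow the `wheel_speed` input would arrive from hardware via Bonsai's device nodes, and the position would drive the rendered scene rather than a bare coordinate; changing `gain` is the classic manipulation for dissociating self-motion from visual feedback.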