Mental Imagery
Two Postdoctoral Research Associates in Neurorobotics are required for a period of 48 months to work on the Horizon/InnovateUK project “PRIMI: Performance in Robots Interaction via Mental Imagery”. This is a collaborative project of the University of Manchester’s Cognitive Robotics Lab with various academic and industry partners in the UK and Europe. PRIMI will synergistically combine research and development in neurophysiology, psychology, machine intelligence, cognitive mechatronics, neuromorphic engineering, and humanoid robotics to build developmental models of higher-cognition abilities – mental imagery, abstract reasoning, and theory of mind – boosted by energy-efficient event-driven computing and sensing. You will carry out research on robot neuro-cognitive architectures, using a combination of machine learning and robotics methodologies. You will work collaboratively as part of the Cognitive Robotics Lab at the Department of Computer Science, University of Manchester, under the supervision of Professor Angelo Cangelosi.
Angelo Cangelosi
A Postdoctoral Research Associate in Neuromorphic Systems and/or Computational Neuroscience for robotics is required for a period of 3.5 years to work on the Horizon/InnovateUK project “PRIMI: Performance in Robots Interaction via Mental Imagery”. This is a collaborative project of the University of Manchester’s Cognitive Robotics Lab with various academic and industry partners in the UK and Europe. PRIMI will synergistically combine research and development in neurophysiology, psychology, machine intelligence, cognitive mechatronics, neuromorphic engineering, and humanoid robotics to build developmental models of higher-cognition abilities – mental imagery, abstract reasoning, and theory of mind – boosted by energy-efficient event-driven computing and sensing. You will carry out research on the design of neuromorphic system models for robotics, working collaboratively with the other postdocs and PhD students in the PRIMI project. This post requires expertise in computational neuroscience (e.g. spiking neural networks) for robotics and/or neuromorphic systems.
Dr. Amir Aly
This project will develop datasets and software to deliver Functional Imagery Training (FIT) via mobile devices and, ultimately, via embodied agents (robots). As a person-centred intervention, FIT presents an interesting challenge: the practitioner or AI must tailor responses to the content of what the 'client' says, and guide mental imagery exercises based on that personal content. By building an interdisciplinary team of psychologists and machine learning experts, this project will deliver real-world impact with broad implications for mental healthcare.
Neuroscience of socioeconomic status and poverty: Is it actionable?
SES neuroscience, using imaging and other methods, has revealed generalizations of interest for population neuroscience and the study of individual differences. But beyond its scientific interest, SES is a topic of societal importance. Does neuroscience offer any useful insights for promoting socioeconomic justice and reducing the harms of poverty? In this talk I will use research from my own lab and others’ to argue that SES neuroscience has the potential to contribute to policy in this area, although its application is premature at present. I will also attempt to forecast the ways in which practical solutions to the problems of poverty may emerge from SES neuroscience. Bio: Martha Farah has conducted groundbreaking research on face and object recognition, visual attention, mental imagery, and semantic memory and - in more recent times - has been at the forefront of interdisciplinary research into neuroscience and society. This deals with topics such as using fMRI for lie detection, ethics of cognitive enhancement, and effects of social deprivation on brain development.
Visualization and manipulation of our perception and imagery by BCI
We have been developing Brain-Computer Interfaces (BCIs) for clinical applications using electrocorticography (ECoG) [1], which is recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied to severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool that modulates pathological neural activity to treat neuropsychiatric disorders. Although these techniques have been developed for clinical application, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity in part of the motor cortex to generate movements of an external device. Although our motor system is a complex system comprising the motor cortex, basal ganglia, cerebellum, spinal cord, and muscles, a BCI allows us to simplify it into a system with exactly known inputs, outputs, and the relation between them. We can then investigate the motor system by manipulating the parameters of the BCI. Recently, we have been developing BCIs to visualize and manipulate our perception and mental imagery. Although these BCIs have been developed for clinical application, they will also be useful for understanding how our neural system generates perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is controlled by an MEG-BCI, and the development of a communication BCI using ECoG [4] that enables subjects to visualize the contents of their mental imagery. I will also discuss how much we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool to visualize and manipulate perception and imagery, and to understand our consciousness.

References
1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361.
2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209.
3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426.
4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.
NMC4 Short Talk: Sensory intermixing of mental imagery and perception
Several lines of research have demonstrated that internally generated sensory experience - such as during memory, dreaming and mental imagery - activates neural representations similar to those evoked by externally triggered perception. This overlap raises a fundamental challenge: how is the brain able to keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings are best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then easily be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a 'reality threshold'. Our model suggests that, in contrast to self-generated sensory changes during movement, the brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
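The additive signal detection account described in this abstract can be sketched in a few lines of simulation. The parameter values below (signal strengths, the reality threshold, unit Gaussian noise) are illustrative assumptions for this sketch, not fitted values from the study; the point is only that summing an imagery signal with a perceptual signal before applying a single threshold necessarily raises the rate of "presence" responses under congruent imagery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the study),
# expressed in units of the noise standard deviation.
PERCEPT_STRENGTH = 1.0    # externally presented stimulus
IMAGERY_STRENGTH = 0.6    # simultaneously imagined content
REALITY_THRESHOLD = 1.2   # criterion for reporting "present"
N_TRIALS = 100_000

def presence_rate(percept, imagery):
    """Fraction of trials on which the summed signal plus unit
    Gaussian noise exceeds the reality threshold."""
    evidence = percept + imagery + rng.standard_normal(N_TRIALS)
    return float(np.mean(evidence > REALITY_THRESHOLD))

# Congruent imagery adds to the perceptual signal, so presence
# responses increase relative to perception alone.
alone = presence_rate(PERCEPT_STRENGTH, 0.0)
with_imagery = presence_rate(PERCEPT_STRENGTH, IMAGERY_STRENGTH)
print(f"presence rate, perception alone:     {alone:.3f}")
print(f"presence rate, with congruent image: {with_imagery:.3f}")
```

Because imagined and perceived signals are pooled before the threshold is applied, the model has no mechanism for discounting the self-generated component, which is exactly the contrast the abstract draws with sensory prediction during movement.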