Imagery
Imagining and seeing: two faces of prosopagnosia
Neuroscience of socioeconomic status and poverty: Is it actionable?
SES neuroscience, using imaging and other methods, has revealed generalizations of interest for population neuroscience and the study of individual differences. But beyond its scientific interest, SES is a topic of societal importance. Does neuroscience offer any useful insights for promoting socioeconomic justice and reducing the harms of poverty? In this talk I will use research from my own lab and others' to argue that SES neuroscience has the potential to contribute to policy in this area, although its application is premature at present. I will also attempt to forecast the ways in which practical solutions to the problems of poverty may emerge from SES neuroscience.
Bio: Martha Farah has conducted groundbreaking research on face and object recognition, visual attention, mental imagery, and semantic memory and, more recently, has been at the forefront of interdisciplinary research into neuroscience and society. This work addresses topics such as the use of fMRI for lie detection, the ethics of cognitive enhancement, and the effects of social deprivation on brain development.
Visualization and manipulation of our perception and imagery by BCI
We have been developing brain-computer interfaces (BCIs) for clinical application using electrocorticography (ECoG) [1], recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied in severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool to modulate pathological neural activity in the treatment of neuropsychiatric disorders. Although these techniques were developed for clinical application, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity in part of the motor cortex to generate movements of an external device. Although the motor system is a complex system comprising the motor cortex, basal ganglia, cerebellum, spinal cord, and muscles, a BCI allows us to simplify it into a system with exactly known inputs, outputs, and the relation between them; we can then investigate the motor system by manipulating the parameters of the BCI. Recently, we have been developing BCIs to visualize and manipulate perception and mental imagery. Although these BCIs were developed for clinical application, they will also be useful for understanding how the nervous system generates perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is controlled by an MEG-based BCI, and the development of a communication BCI using ECoG [4], which enables subjects to visualize the contents of their mental imagery. I will also discuss how much we can control the cortical activity that represents our perception and mental imagery. These examples demonstrate that BCI is a promising tool for visualizing and manipulating perception and imagery, and for understanding consciousness.
References
1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361.
2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209.
3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426.
4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.
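The abstract's point that a BCI reduces the motor system to exactly known inputs and outputs can be illustrated with a minimal sketch. This is not the speakers' actual pipeline: the data are simulated, and the linear ridge decoder mapping band-power-like cortical features to a one-dimensional effector output is a common but hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for ECoG features (e.g. per-channel high-gamma power).
n_trials, n_channels = 200, 16
X = rng.standard_normal((n_trials, n_channels))
w_true = rng.standard_normal(n_channels)
y = X @ w_true + 0.1 * rng.standard_normal(n_trials)  # e.g. cursor velocity

# Closed-form ridge regression: decoder weights mapping features to output.
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

# Because inputs, outputs, and their relation (w_hat) are all known, the
# simplified "motor system" can be probed by perturbing these parameters.
r = np.corrcoef(X @ w_hat, y)[0, 1]
print(round(r, 2))
```

Here the decoder itself is the fully specified input-output relation, which is what makes a BCI useful as an experimental model of the motor system.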
Fantastic windows of sensitivity and where to find them
NMC4 Short Talk: Sensory intermixing of mental imagery and perception
Several lines of research have demonstrated that internally generated sensory experience, such as during memory, dreaming, and mental imagery, activates neural representations similar to those of externally triggered perception. This overlap raises a fundamental challenge: how does the brain keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings are best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a 'reality threshold'. Our model suggests that, in contrast to self-generated sensory changes during movement, the brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
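The additive model described in the abstract can be sketched in a few lines, assuming standard equal-variance Gaussian signal detection theory. All parameter values below (stimulus strength, imagery strength, reality threshold) are hypothetical, chosen only to illustrate the qualitative prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # simulated trials

# Hypothetical parameters: a weak external stimulus plus congruent imagery.
stim_strength, imagery_strength, reality_threshold = 0.5, 0.4, 1.0

noise = rng.standard_normal(n)
perception_only = stim_strength + noise
with_imagery = stim_strength + imagery_strength + noise  # signals are summed

# "Present" responses occur when the combined signal crosses the
# reality threshold, so congruent imagery inflates presence reports.
p_present_alone = np.mean(perception_only > reality_threshold)
p_present_imagery = np.mean(with_imagery > reality_threshold)
print(p_present_alone, p_present_imagery)
```

Because the imagined signal simply adds to the perceived one, any nonzero imagery strength shifts the signal distribution toward the threshold, reproducing the reported increase in presence responses under congruent imagery.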
NMC4 Keynote:
The brain represents the external world through the bottleneck of the sensory organs. The network of hierarchically organized neurons is thought to recover the causes of sensory inputs and thereby reconstruct reality in the brain, in idiosyncratic ways that depend on the individual and their internal state. How can we understand the world model represented in an individual's brain, or the neuroverse? My lab has been working on brain decoding of visual perception and of subjective experiences such as imagery and dreaming, using machine learning and deep neural network representations. In this talk, I will outline the progress of brain decoding methods and show how subjective experiences can be externalized as images and how they could be shared across individuals via neural code conversion. I will also discuss the prospects of these approaches for basic science and neurotechnology.
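One common way to frame neural code conversion is as a regression problem: when two subjects view the same stimuli, a mapping can be learned from one subject's activity patterns to the other's. The sketch below is a hedged illustration only, not the speaker's actual method; the dimensions, the simulated data, and the linear ridge converter are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two simulated subjects viewing the same stimuli: their voxel patterns are
# different linear readouts of a shared (hidden) stimulus code, plus noise.
n_train, n_test, vox_a, vox_b, latent_dim = 250, 50, 40, 50, 10
latent = rng.standard_normal((n_train + n_test, latent_dim))
A = latent @ rng.standard_normal((latent_dim, vox_a))
B = latent @ rng.standard_normal((latent_dim, vox_b))
A += 0.1 * rng.standard_normal(A.shape)  # measurement noise
B += 0.1 * rng.standard_normal(B.shape)

# Ridge-regularized converter M such that B ~= A @ M, fit on shared stimuli.
lam = 1.0
A_tr, B_tr = A[:n_train], B[:n_train]
M = np.linalg.solve(A_tr.T @ A_tr + lam * np.eye(vox_a), A_tr.T @ B_tr)

# Held-out check: mean per-voxel correlation between converted patterns
# and subject B's actual patterns on unseen stimuli.
pred, actual = A[n_train:] @ M, B[n_train:]
score = np.mean([np.corrcoef(pred[:, v], actual[:, v])[0, 1]
                 for v in range(vox_b)])
print(round(score, 2))
```

In this toy setting the conversion works because both subjects' responses are driven by the same underlying stimulus code, which is the intuition behind sharing decoded experiences across individuals.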
Cellular mechanisms of conscious processing
Recent breakthroughs in neurobiology indicate that the time is ripe to understand the cellular-level mechanisms of conscious experience. Accordingly, we have recently proposed that conscious processing depends on the integration of top-down and bottom-up information streams, and that a specific cellular mechanism gates this integration. I will first describe this cellular mechanism and demonstrate how it controls signal propagation within the thalamocortical system. I will then show how this cellular-level mechanism provides a natural explanation for why conscious experience is modulated by top-down processing. Besides shedding new light on the neural basis of consciousness, this perspective unravels the mechanisms of internally generated perception, such as dreams, imagery, and hallucinations.
Conflict in Multisensory Perception
Multisensory perception is often studied through the effects of inter-sensory conflict, such as in the McGurk effect, the Ventriloquist illusion, and the Rubber Hand Illusion. Moreover, Bayesian approaches to cue fusion and causal inference overwhelmingly draw on cross-modal conflict to measure and to model multisensory perception. Given the prevalence of conflict, it is remarkable that accounts of multisensory perception have so far neglected the theory of conflict monitoring and cognitive control, established about twenty years ago. I hope to make a case for the role of conflict monitoring and resolution during multisensory perception. To this end, I will present EEG and fMRI data showing that cross-modal conflict in speech, resulting in either integration or segregation, triggers neural mechanisms of conflict detection and resolution. I will also present data supporting a role of these mechanisms during perceptual conflict in general, using Binocular Rivalry, surrealistic imagery, and cinema. Based on this preliminary evidence, I will argue that it is worth considering the potential role of conflict in multisensory perception and its incorporation in a causal inference framework. Finally, I will raise some potential problems associated with this proposal.
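The Bayesian causal inference framework mentioned in the abstract can be sketched concretely. The code below follows the standard two-cause formulation (a common source versus independent sources for visual and auditory cues); the closed-form likelihoods are textbook results, but all parameter values are hypothetical and this is not the speaker's specific model.

```python
import numpy as np

def p_common(x_v, x_a, sigma_v=1.0, sigma_a=2.0, sigma_p=5.0, prior_c=0.5):
    """Posterior probability that visual cue x_v and auditory cue x_a
    share a common cause, under Gaussian likelihoods and a Gaussian
    prior over source location. Parameter values are illustrative."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2

    # Likelihood under a common cause (source location integrated out).
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * var_p
                             + x_v**2 * var_a
                             + x_a**2 * var_v) / denom) \
              / (2 * np.pi * np.sqrt(denom))

    # Likelihood under independent causes: two separate marginals.
    like_c2 = (np.exp(-0.5 * x_v**2 / (var_v + var_p))
               / np.sqrt(2 * np.pi * (var_v + var_p))) \
            * (np.exp(-0.5 * x_a**2 / (var_a + var_p))
               / np.sqrt(2 * np.pi * (var_a + var_p)))

    return prior_c * like_c1 / (prior_c * like_c1 + (1 - prior_c) * like_c2)

# Small cross-modal conflict favors integration (one cause);
# large conflict favors segregation (two causes).
print(p_common(0.0, 0.5), p_common(0.0, 8.0))
```

The posterior over a common cause is exactly the quantity a conflict-monitoring account would care about: large cross-modal discrepancy drives it down, tipping the system from integration toward segregation.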
Measuring relevant features of the social and physical environment with imagery
The efficacy of images for creating quantitative measures of urban perception has been explored in psychology, social science, urban planning, and architecture over the last 50 years. The ability to scale these measurements has become possible only in the last decade, due to increased urban surveillance in the form of street-view and satellite imagery, and the accessibility of such data. This talk will present a series of projects that use imagery and convolutional neural networks (CNNs) to predict, measure, and interpret the social and physical environments of our cities.
Synaesthesia as a Model System for Understanding Variation in the Human Mind and Brain
During this talk, I will seek to reposition synaesthesia as a model system for understanding variation in the construction of the human mind and brain. People with synaesthesia inhabit a remarkable mental world in which numbers can be coloured, words can have tastes, and music is a visual spectacle. Synaesthesia has been documented for over two hundred years, but key questions remain unanswered about why it exists and what such conditions might mean for theories of the human mind. I will argue that we need to rethink synaesthesia not just as representing exceptional experiences, but as the product of an unusual neurodevelopmental cascade from genes to brain to cognition, of which synaesthesia is only one outcome. Rather than being a kind of 'dangling qualia' (atypical experiences attached to a typical mind/brain), synaesthesia should be thought of as unusual experiences that accompany an unusual mind/brain. Specifically, differences in the brains of synaesthetes support a distinctive way of thinking (enhanced memory, imagery, etc.) and may also predispose towards particular clinical vulnerabilities. It is this neurodiverse phenotype that is an important object of study in its own right and may explain any adaptive value of having synaesthesia.
Blurring the line between imagination and reality: Motor imagery influences performance of linked movements
FENS Forum 2024
Cortical effects of motor and tactile imagery assessed with TMS-EEG
FENS Forum 2024
Motor imagery among aphantasics and controls: A preliminary fNIRS study
FENS Forum 2024
The similarities and the differences between tactile imagery and tactile attention: Insights from high-density EEG data
FENS Forum 2024
Classifying Motor Imagery ECoG Signal With Optimal Selection Of Minimum Electrodes
Neuromatch 5
Spatio-temporal Graph Neural Networks for Motor Imagery EEG Classification
Neuromatch 5