Subjective Experiences
Why Some Intelligent Agents are Conscious
In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. First, the agent is stipulated to be capable of (meta)representing subjective ‘qualitative’ sensory information, in the sense that it can easily assess exactly how similar a sensory signal is to all other possible sensory signals. This information is subjective in the sense that it concerns how the different stimuli can be distinguished by the agent itself, rather than how physically similar they are. For this to happen, sensory coding needs to satisfy sparsity and smoothness constraints, which are known to facilitate metacognition and generalization. Second, this qualitative information can, under some specific circumstances, acquire an ‘assertoric force’. This happens when a certain self-monitoring mechanism decides that the qualitative information reliably tracks the current state of the world and informs a general symbolic reasoning system of this fact. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring an assertoric status within one’s belief system. When this happens, the perceptual content presents itself as reflecting the state of the world right now, in ways that seem undeniably rational to the agent. At the same time, without effort, the agent also knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this architecture, of which consciousness might have arisen as a byproduct.
NMC4 Keynote:
The brain represents the external world through the bottleneck of the sensory organs. A network of hierarchically organized neurons is thought to recover the causes of sensory inputs, reconstructing reality in the brain in idiosyncratic ways that depend on the individual and their internal state. How can we understand the world model represented in an individual’s brain, or the neuroverse? My lab has been working on brain decoding of visual perception and of subjective experiences such as imagery and dreaming, using machine learning and deep neural network representations. In this talk, I will outline the progress of brain decoding methods and present how subjective experiences can be externalized as images and how they could be shared across individuals via neural code conversion. The prospects of these approaches for basic science and neurotechnology will be discussed.
Towards a Translational Neuroscience of Consciousness
The cognitive neuroscience of conscious perception has seen considerable growth over the past few decades. Confirming an influential hypothesis driven by earlier studies of neuropsychological patients, we have found that the lateral and polar prefrontal cortices play important causal roles in the generation of subjective experiences. However, this basic empirical finding has been hotly contested by researchers with different theoretical commitments, and the differences are at times difficult to resolve. To address the controversies, I suggest that one alternative avenue may be to look for clinical applications derived from current theories. I outline an example in which we used closed-loop fMRI combined with machine learning to nonconsciously manipulate physiological responses to threatening stimuli, such as spiders or snakes. A clinical trial involving patients with phobias is currently taking place. I also outline how this theoretical framework may be extended to other disorders. Ultimately, a truly meaningful understanding of the fundamental nature of our mental existence should yield useful insights for our colleagues on the clinical front lines. If we use this as a yardstick, then whichever side loses the esoteric theoretical debates, both science and the patients will always win.