Theoretical Computer Science
A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens
We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing machine: it is not the input-output map that gives the CTM its feeling of consciousness, but what is under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback, and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature.

Reference: L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
An Algorithmic Barrier to Neural Circuit Understanding
Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand precisely how neural circuit computations mechanistically cause behavior. Establishing this type of causal understanding will require multiple perturbational (e.g., optogenetic) experiments. It has been unclear exactly how many such experiments are needed and how this number scales with the size of the nervous system in question. Here, using techniques from theoretical computer science, we prove that, in many cases, establishing the most extensive notions of understanding requires exponentially many experiments in the number of neurons, unless a widely posited hypothesis about computation is false (i.e., unless P = NP). Furthermore, using data and estimates, we demonstrate that the feasible experimental regime is typically one where the number of performable experiments scales sub-linearly in the number of neurons in the nervous system. This remarkable gulf between the worst case and the feasible suggests an algorithmic barrier to such an understanding. Determining which notions of understanding are algorithmically tractable to establish, and in what contexts, thus becomes an important new direction for investigation. TL;DR: The non-existence of tractable algorithms for neural circuit interrogation could pose a barrier to comprehensively understanding how neural circuits cause behavior. Preprint: https://biorxiv.org/content/10.1101/639724v1/…
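The gulf the abstract describes can be illustrated numerically. The sketch below (not from the paper) compares the worst-case bound, exponential in the number of neurons, against a hypothetical sub-linear experimental budget; the square-root budget is an assumption chosen purely for illustration.

```python
# Illustrative sketch of the scaling gulf described in the abstract.
# The exponential worst case (2**n) is the abstract's stated bound;
# the sqrt(n) feasible budget is a hypothetical sub-linear example.
import math


def worst_case_experiments(n_neurons: int) -> int:
    # Worst-case number of perturbational experiments: exponential in n.
    return 2 ** n_neurons


def feasible_experiments(n_neurons: int) -> int:
    # Hypothetical feasible budget: sub-linear in n (assumed sqrt for illustration).
    return int(math.isqrt(n_neurons))


if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(f"n={n}: worst case {worst_case_experiments(n)}, "
              f"feasible ~{feasible_experiments(n)}")
```

Even at n = 100 neurons, the worst-case count (2^100 ≈ 10^30) dwarfs any sub-linear budget, which is the "remarkable gulf" the abstract refers to.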