explanatory gap
Neurosurgery & Consciousness: Bridging Science and Philosophy in the Age of AI
Overview of the neurosurgical specialty and its interplay with neurology and psychiatry. Discussion of the benefits and disadvantages of such classifications. Presentation of sub-specialties: trauma, oncology, functional, pediatric, vascular and spine. What does an ordinary day of a neurosurgeon look like: outpatient clinic, emergencies, pre-, intra- and post-operative patient care. A walk-through of an ordinary operation. Myth-busting and practical insights from everyday practice. Hints for research on clinical problems still to be solved. The coming ethical frontiers of neuroprosthetics. In part two we will explore the explanatory gap and its significance. We will review the more than 200 theories addressing the hard problem of consciousness, from the prevailing to the unconventional. Finally, we will reflect on AI advancements and the claims of LLMs becoming conscious.
A multi-level account of hippocampal function in concept learning from behavior to neurons
A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we lack cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule that coordinates units to collectively form higher-level mental constructs ("virtual clusters") and neural representations (concept, place and grid cell-like assemblies), paralleling recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.
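To make the flocking idea concrete, here is a minimal sketch of how a flocking-inspired coordination rule over a pool of neuron-like units might look. This is not the SUSTAIN-d learning rule from the paper; the winner selection, the two update terms, and all parameters (`lr_stim`, `lr_flock`, `top_k`) are illustrative assumptions. The intent is only to show how many units, each nudged toward both the stimulus and their responsive neighbors, can coordinate into something that behaves like a single "virtual cluster".

```python
# Illustrative sketch only: a flocking-style coordination rule for neuron-like units.
# NOT the authors' SUSTAIN-d rule; parameters and update terms are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_units, n_features = 1000, 4                 # pool of neuron-like units in feature space
units = rng.normal(0.0, 1.0, size=(n_units, n_features))

lr_stim, lr_flock, top_k = 0.10, 0.05, 50     # hypothetical learning rates / assembly size

def step(units, stimulus):
    """One learning step: the most responsive units move toward the stimulus
    (cluster-like attraction) and toward each other (flocking cohesion), so the
    winning sub-population acts like a single higher-level 'virtual cluster'."""
    # Response modeled simply as inverse distance to the stimulus.
    dists = np.linalg.norm(units - stimulus, axis=1)
    winners = np.argsort(dists)[:top_k]       # most responsive units

    flock_center = units[winners].mean(axis=0)  # local consensus of the winners

    # Cohesion: winners drift toward their own mean, coordinating their tuning.
    units[winners] += lr_flock * (flock_center - units[winners])
    # Attraction: winners also drift toward the stimulus, as in cluster updating.
    units[winners] += lr_stim * (stimulus - units[winners])
    return units

# Toy usage: two well-separated stimulus "concepts" recruit two coordinated assemblies.
stimuli = np.vstack([rng.normal(+3.0, 0.3, size=(200, n_features)),
                     rng.normal(-3.0, 0.3, size=(200, n_features))])
for s in rng.permutation(stimuli):
    units = step(units, s)
```

Under these assumptions, the cohesion term is what distinguishes the scheme from independent per-unit learning: it is the coordination step that lets a large population reproduce the behavior of a small number of abstract clusters.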