Information Processing
Prof. Dr. Dominik Endres
We are looking for a scientist who will strengthen the research focus of the Department of Psychology by setting up their own research group and who will actively participate in collaborations and research initiatives at the Department of Psychology and Philipps University. The professorship will develop computational methods for modeling and evaluating human behavior, efficient information processing, and adaptation to and interaction with the environment. The professorship builds a bridge to computer science and thus supports the AI initiative of the Philipps University of Marburg.
What it’s like is all there is: The value of Consciousness
Over the past thirty years or so, cognitive neuroscience has made spectacular progress in understanding the biological mechanisms of consciousness. Consciousness science, as this field is now sometimes called, was not only nonexistent thirty years ago; its very name seemed like an oxymoron: how can there be a science of consciousness? And yet, despite this scepticism, we are now equipped with a rich set of sophisticated behavioural paradigms, with an impressive array of techniques making it possible to see the brain in action, and with an ever-growing collection of theories and speculations about the putative biological mechanisms through which information processing becomes conscious. This is all well and good, even promising, but we also seem to have thrown the baby out with the bathwater, or at least to have forgotten it in the crib: consciousness is not just mechanisms, it is what it feels like. In other words, while we now have thousands of informative studies of access-consciousness, we have little in the way of work on phenomenal consciousness. But that — what it feels like — is truly what “consciousness” is about. Understanding why it feels like something to be me and nothing (panpsychists notwithstanding) for a stone to be a stone is what the field has always been after. However, while it is relatively easy to study access-consciousness through the contrastive approach applied to reports, it is much less clear how to study phenomenology, its structure and its function. Here, I first review work on what consciousness does (the "how"). Next, I ask what difference feeling things makes and what function phenomenology might play. I argue that subjective experience has intrinsic value and plays a functional role in everything that we do.
Diffuse coupling in the brain - A temperature dial for computation
The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
Consciousness in the age of mechanical minds
We are now clearly entering a new age in our relationship with machines. The power of AI natural language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become — or perhaps already are — sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings, know about their own existence, and about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy processing system is more likely to yield answers to these fundamental questions and help us understand how and when machines might become minds.
The balanced brain: two-photon microscopy of inhibitory synapse formation
Coordination between excitatory and inhibitory synapses (providing positive and negative signals, respectively) is required to ensure proper information processing in the brain. Many brain disorders, especially neurodevelopmental disorders, are rooted in a specific disturbance of this coordination. In my research group we use a combination of two-photon microscopy and electrophysiology to examine how inhibitory synapses are formed and how this formation is coordinated with nearby excitatory synapses.
Quasicriticality and the quest for a framework of neuronal dynamics
Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input. But biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions, and I will describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
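As a toy illustration of the problem the talk addresses (not material from the talk itself; the process, function names and parameters below are invented), the sketch simulates a driven branching process and estimates the branching parameter with a naive regression. Under external drive, a subcritical process can masquerade as nearly critical, which is the situation quasicriticality is designed to describe.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_branching_ratio(a):
    """Regression-through-origin of A(t+1) on A(t) over nonzero bins."""
    prev, nxt = a[:-1], a[1:]
    mask = prev > 0
    return np.sum(prev[mask] * nxt[mask]) / np.sum(prev[mask] ** 2)

def branching_process(m, h, steps=50_000, a0=100):
    """Toy cortex: each active unit triggers Poisson(m) units in the next
    time bin, plus Poisson(h) externally driven units."""
    a = np.zeros(steps, dtype=int)
    a[0] = a0
    for t in range(1, steps):
        a[t] = rng.poisson(m * a[t - 1]) + rng.poisson(h)
    return a

for m, h in [(0.9, 0.0), (0.9, 5.0)]:
    est = naive_branching_ratio(branching_process(m, h))
    print(f"true m = {m}, drive h = {h}: naive estimate = {est:.3f}")
# With drive (h = 5), the same subcritical process (m = 0.9) yields a naive
# estimate close to 1: input pushes the apparent dynamics toward criticality.
```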
Signatures of criticality in efficient coding networks
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibits signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears either super-critical or sub-critical, respectively. This result suggests that two influential, and previously disparate, theories of neural processing optimization—efficient coding and criticality—may be intimately related.
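For readers unfamiliar with the signatures in question, here is a minimal, hypothetical sketch of the standard avalanche analysis (the toy Poisson data is a stand-in for the LIF network activity of the study; the estimator is the continuous-approximation MLE popularized by Clauset et al., 2009):

```python
import numpy as np

def avalanche_sizes(binned):
    """An avalanche is a contiguous run of nonzero time bins; its size is
    the total spike count within the run."""
    sizes, current = [], 0
    for a in binned:
        if a > 0:
            current += a
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def ml_exponent(sizes, s_min=1):
    """Continuous-approximation MLE for a power-law tail P(s) ~ s^-alpha."""
    s = sizes[sizes >= s_min].astype(float)
    return 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))

rng = np.random.default_rng(1)
binned = rng.poisson(0.7, size=50_000)  # stand-in for binned population spikes
sizes = avalanche_sizes(binned)
print(f"{len(sizes)} avalanches, fitted exponent alpha = {ml_exponent(sizes):.2f}")
# Near criticality one expects alpha around 1.5 with a straight line on
# log-log axes; sub- and supercritical dynamics bend the distribution away.
```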
Precise spatio-temporal spike patterns in cortex and model
The cell assembly hypothesis postulates that groups of coordinated neurons form the basis of information processing. Here, we test this hypothesis by analyzing massively parallel spiking activity recorded in monkey motor cortex during a reach-to-grasp experiment for the presence of significant ms-precise spatio-temporal spike patterns (STPs). For this purpose, the parallel spike trains were analyzed for STPs by the SPADE method (Stella et al, 2019, Biosystems), which detects, counts and evaluates spike patterns for their significance by the use of surrogates (Stella et al, 2022, eNeuro). As a result, we find STPs in 19/20 data sets (each of 15 min) from two monkeys, but only a small fraction of the recorded neurons are involved in STPs. To account for the different behavioral states during the task, we analyzed the data in a quasi time-resolved manner by dividing the data into behaviorally relevant time epochs. The STPs that occur in the various epochs are specific to the behavioral context, in terms of the neurons involved and the temporal lags between the spikes of the STP. Furthermore, we find that the STPs often share individual neurons across epochs. Since we interpret the occurrence of a particular STP as the signature of a particular active cell assembly, we conclude that the neurons multiplex their cell assembly membership. In a related study, we model these findings by networks with embedded synfire chains (Kleinjohann et al, 2022, bioRxiv 2022.08.02.502431).
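SPADE itself ships with the open-source Elephant toolbox; the sketch below is not SPADE but a stripped-down illustration of its surrogate logic on invented toy spike trains: an observed coincidence count is compared against the distribution obtained after dithering spike times, which preserves firing rates but destroys ms-precise coordination.

```python
import numpy as np

rng = np.random.default_rng(2)

def dither(spike_times, dt=0.02):
    """Uniform-dithering surrogate: jitter each spike within +/- dt seconds."""
    return np.sort(spike_times + rng.uniform(-dt, dt, size=len(spike_times)))

def coincidences(a, b, precision=0.003):
    """Count spikes in train a with a partner in train b within `precision` s."""
    idx = np.clip(np.searchsorted(b, a), 1, len(b) - 1)
    nearest = np.minimum(np.abs(b[idx] - a), np.abs(b[idx - 1] - a))
    return int(np.sum(nearest <= precision))

# Two toy spike trains sharing an embedded 1-ms-precise pattern.
base = np.sort(rng.uniform(0, 10, 100))
train_a = np.sort(np.concatenate([rng.uniform(0, 10, 50), base[:20]]))
train_b = np.sort(np.concatenate([rng.uniform(0, 10, 50), base[:20] + 0.001]))

observed = coincidences(train_a, train_b)
surrogates = np.array([coincidences(dither(train_a), train_b) for _ in range(200)])
print(f"observed = {observed}, surrogate mean = {surrogates.mean():.1f}, "
      f"p ~ {np.mean(surrogates >= observed):.3f}")
```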
Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings
Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. To observe the effect of these processes, we introduce the spatially-embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a 3D Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs, similar to primate cerebral cortices, naturally converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically-efficient mixed-selective code. As all these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs can serve as model systems to bridge between structural and functional research communities to move neuroscientific understanding forward.
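A minimal sketch of the kind of spatial constraint involved; the published seRNN regularizer also factors in communicability, which this simplified version omits, and all sizes and coefficients here are invented. The idea: penalize recurrent weights by the Euclidean length of the connections they represent, so training favors sparse, local wiring.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# Fix each hidden unit at a coordinate in a 4x4x4 grid (3D Euclidean space).
coords = np.array([(i, j, k) for i in range(4) for j in range(4) for k in range(4)],
                  dtype=float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

W = rng.normal(0.0, 0.1, (n, n))  # recurrent weights (stand-in for trained ones)

def spatial_l1(W, dist, lam=1e-3):
    """Wiring-cost penalty: long connections cost more than short ones."""
    return lam * np.sum(np.abs(W) * dist)

print("spatial wiring cost:", spatial_l1(W, dist))
# Added to the task loss during training, this term exerts the pressure
# toward the modular, small-world solutions described in the abstract.
```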
Circuit solutions for programming actions
The hippocampus is one of the few regions in the adult mammalian brain endowed with life-long neurogenesis. Despite intense investigation, it remains unclear how newly generated neurons retain unique functions that contribute to modulating hippocampal information processing and cognition. In this talk, I will present some recent findings revealing how enhanced forms of plasticity in adult-born neurons underlie the way they become incorporated into pre-existing networks in response to experience.
Real-world scene perception and search from foveal to peripheral vision
A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively withhold information from the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation, and computation by avalanche-supporting networks, are of interest to the fields of physics, computer science (computation theory as well as statistical or machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), carried out by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank-ordering by function probability, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience relating to optimal information processing.
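To make the setup concrete, here is a small, hypothetical sketch of the objects under study: random threshold networks with a tunable inhibitory fraction, each evaluated on all four two-bit inputs to read off which Boolean function its cascade computes. All parameters are invented for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

def cascade_output(W, x, steps=5, theta=0.5):
    """Propagate a clamped binary input through a threshold network."""
    s = np.zeros(W.shape[0])
    s[:len(x)] = x
    for _ in range(steps):
        s = (W @ s > theta).astype(float)
        s[:len(x)] = x            # keep input units clamped
    return int(s[-1])             # last unit serves as the readout

def truth_table(W):
    return tuple(cascade_output(W, np.array(x)) for x in product([0, 1], repeat=2))

n, p_inhib = 8, 0.3               # network size and inhibitory fraction
counts = {}
for _ in range(2000):
    W = (rng.random((n, n)) < 0.4) * np.where(rng.random((n, n)) < p_inhib, -1.0, 1.0)
    tbl = truth_table(W)
    counts[tbl] = counts.get(tbl, 0) + 1
for tbl, c in sorted(counts.items(), key=lambda kv: -kv[1])[:5]:
    print(tbl, c)
# XOR-like tables such as (0, 1, 1, 0) should appear far less often than
# constant or single-input tables: the inverse complexity-probability
# relationship described above.
```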
The functional architecture of the human entorhinal-hippocampal circuitry
Cognitive functions like episodic memory require the formation of cohesive representations. Critical for that process is the entorhinal-hippocampal circuitry’s interaction with cortical information streams and the circuitry’s inner communication. With ultra-high field functional imaging we investigated the functional architecture of the human entorhinal-hippocampal circuitry. We identified an organization that is consistent with convergence of information in anterior and lateral entorhinal subregions and the subiculum/CA1 border, while keeping a second route specific for scene processing in a posterior-medial entorhinal subregion and the distal subiculum. Our findings are consistent with information flow along processing routes that functionally split the entorhinal-hippocampal circuitry along its transversal axis. My talk will demonstrate how ultra-high field imaging in humans can bridge the gap between anatomical and electrophysiological findings in rodents and our understanding of human cognition. Moreover, I will point out the implications that basic research on functional architecture has for cognitive and clinical research perspectives.
The role of top-down mechanisms in gaze perception
Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, in the recent literature, there is evidence to suggest that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus averted gaze. On the basis of several recent lines of evidence, I propose that low task relevance of eye features will prevent differences in processing between gaze directions because the encoding of the eyes will be superficial. Differential processing of direct and averted gaze will only occur when the eyes are relevant to the task. To assess the implication of task relevance for the temporality of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of the eye features is crucial to the direct gaze effect and propose to revisit previous studies that had questioned the existence of the direct gaze effect. This claim will be illustrated with several past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception in order to understand the impact of context on gaze perception and, at a larger scope, on social cognition.
Molecular Logic of Synapse Organization and Plasticity
Connections between nerve cells, called synapses, are the fundamental units of communication and information processing in the brain. The accurate wiring of neurons through synapses into neural networks or circuits is essential for brain organization. Neuronal networks are sculpted and refined throughout life by constant adjustment of the strength of synaptic communication by neuronal activity, a process known as synaptic plasticity. Deficits in the development or plasticity of synapses underlie various neuropsychiatric disorders, including autism, schizophrenia and intellectual disability. The Siddiqui lab research program comprises three major themes. One, to assess how biochemical switches control the activity of synapse organizing proteins, how these switches act through their binding partners and how these processes are regulated to correct impaired synaptic function in disease. Two, to investigate how synapse organizers regulate the specificity of neuronal circuit development and how defined circuits contribute to cognition and behaviour. Three, to address how synapses are formed in the developing brain and maintained in the mature brain and how microcircuits formed by synapses are refined to fine-tune information processing in the brain. Together, these studies have generated fundamental new knowledge about neuronal circuit development and plasticity and enabled us to identify targets for therapeutic intervention.
Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics
Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky-integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
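For orientation, the sketch below implements the classic static-synapse tempotron rule (Gütig & Sompolinsky, 2006), the baseline that the dynamic-synapse models described here generalize; all constants and the toy data are invented. On an erroneous trial, each synapse is nudged in proportion to its PSP contribution at the time of the voltage maximum.

```python
import numpy as np

rng = np.random.default_rng(5)
tau_m, tau_s = 0.015, 0.004           # membrane and synaptic time constants (s)

def psp(t):
    """Double-exponential postsynaptic potential kernel (causal)."""
    t = np.maximum(t, 0.0)
    return (np.exp(-t / tau_m) - np.exp(-t / tau_s)) * (t > 0)

def voltage(weights, spikes, t_grid):
    """V(t) = sum_i w_i sum_{t_i} K(t - t_i), summed over afferents i."""
    v = np.zeros_like(t_grid)
    for w, times in zip(weights, spikes):
        for ti in times:
            v += w * psp(t_grid - ti)
    return v

def tempotron_step(weights, spikes, label, theta=1.0, lr=0.1,
                   t_grid=np.linspace(0.0, 0.5, 500)):
    """If the decision (did V cross theta?) is wrong, move each weight by
    its PSP contribution at the time of the voltage maximum."""
    v = voltage(weights, spikes, t_grid)
    if bool(v.max() >= theta) == bool(label):
        return weights
    t_max = t_grid[np.argmax(v)]
    grad = np.array([sum(psp(t_max - ti) for ti in times) for times in spikes])
    return weights + lr * (1.0 if label else -1.0) * grad

# Toy usage: 10 afferents, one spike each, random binary labels.
weights = rng.normal(0.0, 0.1, 10)
patterns = [([[rng.uniform(0.0, 0.5)] for _ in range(10)], int(rng.integers(0, 2)))
            for _ in range(20)]
for _ in range(50):
    for spikes, label in patterns:
        weights = tempotron_step(weights, spikes, label)
```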
Cognitive Maps
Ample evidence suggests that the brain generates internal simulations of the outside world to guide our thoughts and actions. These mental representations, or cognitive maps, are thought to be essential for our very comprehension of reality. I will discuss what is known about the informational structure of cognitive maps, their neural underpinnings, and how they relate to behavior, evolution, disease, and the current revolution in artificial intelligence.
Cross-modality imaging of the neural systems that support executive functions
Executive functions refer to a collection of mental processes, such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and specifically in the context of patients with brain tumours there is a need to preserve them in order to enable good quality of life. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the areas around it, so as to preserve function. In many cases, functional mapping is conducted during awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition of the importance of these functions for patient well-being in recent years, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.
How does the metabolically-expensive mammalian brain adapt to food scarcity?
Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. I addressed this in the visual cortex of awake mice using whole-cell recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. I found that food restriction reduced synaptic ATP usage by 29% through a decrease in AMPA receptor conductance. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at similar rates as controls, but spent less ATP on underlying excitatory currents. This energy-saving strategy had a cost since it amplified the variability of visually-evoked subthreshold responses, leading to a 32% broadening in orientation tuning and impaired fine visual discrimination. This reduction in coding precision was associated with reduced levels of the fat mass-regulated hormone leptin and was restored by exogenous leptin supplementation. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.
From natural scene statistics to multisensory integration: experiments, models and applications
To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
Online "From Bench to Bedside" Neurosciences Symposium
2 Keynote lectures: “Homeostatic control of sleep in the fly” and “Management of Intracerebral Haemorrhage – where is the evidence?”, and 2 sessions: “Cortical top-down information processing” and “Virtual/augmented reality and its implications for the clinic”
Input and target-selective plasticity in sensory neocortex during learning
Behavioral experience shapes neural circuits, adding and subtracting connections between neurons that will ultimately control sensation and perception. We are using natural sensory experience to uncover basic principles of information processing in the cerebral cortex, with a focus on how sensory learning can selectively alter synaptic strength. I will discuss recent findings that differentiate reinforcement learning from sensory experience, showing rapid and selective plasticity of thalamic and inhibitory synapses within primary sensory cortex.
A Flash of Darkness within Dusk: Crossover inhibition in the mouse retina
To survive in the wild, small rodents evolved specialized retinas. To escape predators, looming shadows need to be detected with speed and precision. To evade starvation, small seeds, grass, nuts and insects also need to be detected quickly. Some of these succulent seeds and insects may be camouflaged, offering only low-contrast targets. Moreover, these challenging tasks need to be accomplished continuously at dusk, night, dawn and daytime. Crossover inhibition is thought to be involved in enhancing contrast detection in the microcircuits of the inner plexiform layer of the mammalian retina. The AII amacrine cells are narrow-field cells that play a key role in crossover inhibition. Our lab studies the synaptic physiology that regulates glycine release from AII amacrine cells in mouse retina. These interneurons receive excitation from rod and cone bipolar cells and transmit excitation to ON-type bipolar cell terminals via gap junctions. They also transmit inhibition via multiple glycinergic synapses onto OFF bipolar cell terminals. AII amacrine cells are thus a central hub of synaptic information processing that cross-links the ON and OFF pathways. What are the functions of crossover inhibition? How does it enhance contrast detection at different ambient light levels? How are the dynamic range, frequency response and synaptic gain of glycine release modulated by luminance levels and circadian rhythms? How is synaptic gain changed by different extracellular neuromodulators, like dopamine, and by intracellular messengers like cAMP, phosphate and Ca2+ ions from Ca2+ channels and Ca2+ stores? My talk will try to answer some of these questions and will pose additional ones. It will end with further hypotheses and speculations on the multiple roles of crossover inhibition.
Suboptimal human inference inverts the bias-variance trade-off for decisions with asymmetric evidence
Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong) that is minimized with complex inference strategies and variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to the frequently suboptimal inference strategies used by humans. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference, and, more generally, about the information-processing trade-offs that are sensitive to not just the complexity, but also the optimality of the inference process.
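For reference, the trade-off in question is the classic decomposition of an estimator's expected squared error:

```latex
\mathbb{E}\big[(\hat{\theta}-\theta)^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{\theta}]-\theta\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{\theta}-\mathbb{E}[\hat{\theta}])^2\big]}_{\text{variance}}
```

The inversion reported here is that subjects with complex, Bayesian-like strategies landed on the high-bias, low-variance side, while subjects with simple heuristics landed on the low-bias, high-variance side, the opposite of the ordering an optimal complexity trade-off would predict.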
Migraine: a disorder of excitatory-inhibitory balance in multiple brain networks? Insights from genetic mouse models of the disease
Migraine is much more than an episodic headache. It is a complex brain disorder, characterized by a global dysfunction in multisensory information processing and integration. In a third of patients, the headache is preceded by transient sensory disturbances (aura), whose neurophysiological correlate is cortical spreading depression (CSD). The molecular, cellular and circuit mechanisms of the primary brain dysfunctions that underlie migraine onset, susceptibility to CSD and altered sensory processing remain largely unknown and are major open issues in the neurobiology of migraine. Genetic mouse models of a rare monogenic form of migraine with aura provide a unique experimental system to tackle these key unanswered questions. I will describe the functional alterations we have uncovered in the cerebral cortex of genetic mouse models and discuss the insights into the cellular and circuit mechanisms of migraine obtained from these findings.
Demystifying the richness of visual perception
Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes with quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. This scheme gains sophisticated image features at the expense of precise spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision. In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
Tuning dumb neurons to task processing - via homeostasis
Homeostatic plasticity plays a key role in stabilizing neural network activity. But what is its role in neural information processing? We showed analytically how homeostasis changes collective dynamics, and consequently information flow, depending on the input to the network. We then studied how input and homeostasis in a recurrent network of LIF neurons impact information flow and task performance. We showed how we can tune the working point of the network, and found that, contrary to previous assumptions, there is not one optimal working point for a family of tasks; rather, each task may require its own working point.
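As a cartoon of tuning a working point (a rate model with invented parameters, not the LIF network of the study): each unit's threshold adapts slowly so that its rate tracks a target, and the same target rate is reached under different inputs via different thresholds, i.e. different working points.

```python
import numpy as np

rng = np.random.default_rng(6)
n, target_rate, eta = 100, 5.0, 1e-3     # units, target rate, homeostatic rate

W = rng.normal(0.0, 0.5 / np.sqrt(n), (n, n))
theta = np.ones(n)                        # per-unit thresholds (homeostatic variable)
h_ext = 2.0                               # external input; try 0.5 or 5.0

rate = np.zeros(n)
for _ in range(20_000):
    # Rectified-linear rate dynamics driven by recurrence plus input.
    rate = np.maximum(W @ rate + h_ext - theta, 0.0)
    # Homeostasis: raise the threshold when a unit fires above target, lower it below.
    theta += eta * (rate - target_rate)

print(f"mean rate {rate.mean():.2f} (target {target_rate}), "
      f"mean threshold {theta.mean():.2f}")
# Different inputs h_ext end at the same target rate but at different
# thresholds, i.e. the network settles at different working points.
```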
Context-Dependent Relationships between Locus Coeruleus Firing Patterns and Coordinated Neural Activity in the Anterior Cingulate Cortex
Ascending neuromodulatory projections from the locus coeruleus (LC) affect cortical neural networks via the release of norepinephrine (NE). However, the exact nature of these neuromodulatory effects on neural activity patterns in vivo is not well understood. Here we show that in awake monkeys, LC activation is associated with changes in coordinated activity patterns in the anterior cingulate cortex (ACC). These relationships, which are largely independent of changes in firing rates of individual ACC neurons, depend on the type of LC activation: ACC pairwise correlations tend to be reduced when tonic (baseline) LC activity increases but are enhanced when external events drive phasic LC responses. Both relationships covary with pupil changes that reflect LC activation and arousal. These results suggest that modulations of information processing that reflect changes in coordinated activity patterns in cortical networks can result partly from ongoing, context-dependent, arousal-related changes in activation of the LC-NE system.
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Using a recurrent neural network (RNN) model together with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that, like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
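The normative intuition being tested is compactly stated with Bayes' rule; the sketch below is a hypothetical two-choice ideal observer, not the RNN itself: posterior log-odds add the prior log-odds to the evidence log-likelihood ratio, so the prior dominates exactly when the sensory evidence is weak.

```python
import numpy as np

def posterior(p_prior, signal, sigma):
    """Posterior that stimulus A was shown, evaluated at an observation equal
    to the stimulus mean: x ~ N(+signal, sigma) for A, N(-signal, sigma) for B."""
    log_prior_odds = np.log(p_prior / (1.0 - p_prior))
    llr = 2.0 * signal * signal / sigma**2   # log-likelihood ratio at x = +signal
    return 1.0 / (1.0 + np.exp(-(log_prior_odds + llr)))

for s in [0.1, 0.5, 2.0]:
    print(f"evidence strength {s}: posterior = {posterior(0.7, s, 1.0):.3f}")
# Weak evidence (0.1): the 70% prior dominates the decision.
# Strong evidence (2.0): the prior's contribution washes out.
```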
Seeing with technology: Exchanging the senses with sensory substitution and augmentation
What is perception? Our sensory modalities transduce information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed in any given moment, and the mechanisms of attention focus the limited resources of the mind on some information at the expense of the rest. My research has advanced from first examining visual perception and attention to now examining how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.
Neocortex saves energy by reducing coding precision during food scarcity
Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. We addressed this in the visual cortex of awake mice using whole-cell patch clamp recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. We found that food restriction resulted in energy savings through a decrease in AMPA receptor conductance, reducing synaptic ATP usage by 29%. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at similar rates as controls, but spent less ATP on underlying excitatory currents. This energy-saving strategy had a cost since it amplified the variability of visually-evoked subthreshold responses, leading to a 32% broadening in orientation tuning and impaired fine visual discrimination. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.
Information Dynamics in the Hippocampus and Cortex and their alterations in epilepsy
Neurological disorders share common high-level alterations, such as cognitive deficits, anxiety, and depression. This raises the possibility of fundamental alterations in the way information conveyed by neural firing is maintained and dispatched in the diseased brain. Using experimental epilepsy as a model of neurological disorder, we tested the hypothesis of altered information processing, analyzing how neurons in the hippocampus and the entorhinal cortex store and exchange information during slow and theta oscillations. We equate the storage and sharing of information to low-level, or primitive, information processing at the algorithmic level, the theoretical intermediate level between structure and function. We find that these low-level processes are organized into substates during brain states marked by theta and slow oscillations. Their internal composition and organization through time are disrupted in epilepsy, losing brain-state specificity and shifting towards a regime of disorder in a brain-region-dependent manner. We propose that the alteration of information processing at an algorithmic level may be a mechanism behind the emergent and widespread co-morbidities associated with epilepsy, and perhaps other disorders.
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular- and circuit-level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN’s model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is being constructed with workflows to study some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power ‘events’. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory of the circuit origin of such beta events and suggested that beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. In total, HNN provides an unprecedented, biophysically principled tool to link mechanism to meaning in human E/MEG signals.
Integration of "environmental" information in the neuronal epigenome
The inhibitory actions of the heterogeneous collection of GABAergic interneurons tremendously influence cortical information processing, which is reflected by diseases like autism, epilepsy and schizophrenia that involve defects in cortical inhibition. Apart from the regulation of physiological processes like synaptic transmission, proper interneuron function also relies on their correct development. Hence, decrypting the regulatory networks that direct proper cortical interneuron development as well as adult functionality is of great interest, as this helps to identify critical events implicated in the etiology of the aforementioned diseases. Extrinsic factors modulate these processes and act on cell- and stage-specific transcriptional programs. Herein, epigenetic mechanisms of gene regulation, like DNA methylation executed by DNA methyltransferases (DNMTs), histone modifications and non-coding RNAs, attract increasing attention as integrators of "environmental information" into our genome, sculpting physiological processes in the brain relevant for human mental health. Several studies have associated altered expression levels and function of DNA methyltransferase 1 (DNMT1) in subsets of embryonic and adult cortical interneurons with schizophrenia. Although accumulating evidence supports the relevance of epigenetic signatures for instructing cell type-specific development, only very little is known about their functional implications in discrete developmental processes and in the subtype-specific maturation of cortical interneurons. Similarly, little is known about the role of DNMT1 in regulating adult interneuron functionality. This talk will provide an overview of newly identified roles of DNMT1 in orchestrating cortical interneuron development and adult function. Further, the talk will report on the implications of lncRNAs in mediating site-specific DNA methylation in response to discrete external stimuli.
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can store in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, the items to which labels are applied, as well as their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Differential working memory functioning
The integrated conflict monitoring theory of Botvinick introduced cognitive demand into conflict monitoring research. We investigated the effects of individual differences in cognitive demand, and in another determinant of conflict monitoring termed reinforcement sensitivity, on conflict monitoring. We showed evidence of differential variability of conflict monitoring intensity using electroencephalography (EEG), functional magnetic resonance imaging (fMRI) and behavioral data. Our data suggest that individual differences in anxiety and reasoning ability are differentially related to the recruitment of proactive and reactive cognitive control (cf. Braver). Based on previous findings, the team of the Leue-Lab has collected new psychometric data on conflict monitoring and proactive-reactive cognitive control. Moreover, data from the Leue-Lab suggest the relevance of individual differences in conflict monitoring for the context of deception. In this respect, we plan new studies highlighting individual differences in the functioning of the anterior cingulate cortex (ACC). Disentangling the role of individual differences in working memory-related cognitive demand, mental effort, and reinforcement-related processes opens new insights for cognitive-motivational approaches to information processing.
Multi-scale synaptic analysis for psychiatric/emotional disorders
Dysregulation of emotional processing and its integration with cognitive functions are central features of many mental/emotional disorders, associated both with externalizing problems (aggressive, antisocial behaviors) and internalizing problems (anxiety, depression). As Dr. Joseph LeDoux, our invited speaker of this program, wrote in his famous book “Synaptic Self: How Our Brains Become Who We Are”, the brain’s synapses are the channels through which we think, act, imagine, feel, and remember. Synapses encode the essence of personality, enabling each of us to function as a distinctive, integrated individual from moment to moment. Thus, exploring the functioning of synapses leads to an understanding of the mechanisms of (patho)physiological brain function. In this context, we have investigated the pathophysiology of psychiatric disorders, with particular emphasis on synaptic function in mouse models of various psychiatric disorders such as schizophrenia, autism, depression, and PTSD. Our current interest is how synaptic inputs are integrated to generate the action potential. The spatiotemporal organization of neuronal firing is crucial for information processing, but how the thousands of inputs to dendritic spines drive firing remains a central question in neuroscience. We identified a distinct pattern of synaptic integration in the disease-related models, in which extra-large (XL) spines generate NMDA spikes that are sufficient to drive neuronal firing. We observed, experimentally and theoretically, that XL spines correlated negatively with working memory. Our work offers a whole new concept of dendritic computation and network dynamics, and may prompt a substantial rethinking of psychiatric research. The second half of my talk concerns the development of a novel synaptic tool. No matter how beautifully we can illuminate spine morphology and how accurately we can quantify synaptic integration, the links between synapse and brain function remain correlational. To probe the causal relationship between synapse and brain function, we established AS-PaRac1, which is unique in that it can specifically label and manipulate recently potentiated dendritic spines (Hayashi-Takagi et al, 2015, Nature). Using AS-PaRac1, we developed activity-dependent simultaneous labeling of presynaptic boutons and potentiated spines to establish “functional connectomics” at synaptic resolution. Applying this new imaging method to PTSD model mice, we identified a completely new functional neural circuit, brain region A→B→C, with a very strong S/N in the PTSD model mice. This novel tool for “functional connectomics” and its photo-manipulation could open up new areas of emotional/psychiatric research and, by extension, shed light on the neural networks that determine who we are.
Making memories in mice
Understanding how the brain uses information is a fundamental goal of neuroscience. Several human disorders (ranging from autism spectrum disorder to PTSD to Alzheimer’s disease) may stem from disrupted information processing. Therefore, this basic knowledge is not only critical for understanding normal brain function, but also vital for the development of new treatment strategies for these disorders. Memory may be defined as the retention over time of internal representations gained through experience, and the capacity to reconstruct these representations at later times. Long-lasting physical brain changes (‘engrams’) are thought to encode these internal representations. The concept of a physical memory trace likely originated in ancient Greece, although it wasn’t until 1904 that Richard Semon first coined the term ‘engram’. Despite its long history, finding a specific engram has been challenging, likely because an engram is encoded at multiple levels (epigenetic, synaptic, cell assembly). My lab is interested in understanding how specific neurons are recruited or allocated to an engram, and how neuronal membership in an engram may change over time or with new experience. Here I will describe both older and new unpublished data in our efforts to understand memories in mice.
An in-silico framework to study the cholinergic modulation of the neocortex
Neuromodulators control information processing in cortical microcircuits by regulating the cellular and synaptic physiology of neurons. Computational models and detailed simulations of neocortical microcircuitry offer a unifying framework to analyze the role of neuromodulators in network activity. In the present study, to gain deeper insight into the organization of the cortical neuropil for modeling purposes, we quantify the fiber length per cortical volume and the density of varicosities for the catecholaminergic, serotonergic and cholinergic systems using immunocytochemical staining and stereological techniques. The data obtained are integrated into a biologically detailed digital reconstruction of the rodent neocortex (Markram et al, 2015) in order to model the influence of modulatory systems on the activity of a somatosensory neocortical column. Simulations of ascending modulation of network activity in our model predict the effects of increasing levels of neuromodulators on diverse neuron types and synapses and reveal a spectrum of activity states. Low levels of neuromodulation drive microcircuit activity into slow oscillations and network synchrony, whereas high neuromodulator concentrations produce fast oscillations and network asynchrony. The models and simulations thus provide a unifying in silico framework to study the role of neuromodulators in reconfiguring network activity.
Combining two mechanisms to produce neural firing rate homeostasis
The typical goal of homeostatic mechanisms is to ensure a system operates at or in the vicinity of a stable set point, where a particular measure is relatively constant and stable. Neural firing rate homeostasis is unusual in that a set point of fixed firing rate is at odds with the goal of a neuron to convey information, or produce timed motor responses, which require temporal variations in firing rate. Therefore, for a neuron, a range of firing rates is required for optimal function, which could, for example, be set by a dual system that controls both mean and variance of firing rate. We explore, both via simulations and analysis, how two experimentally measured mechanisms for firing rate homeostasis can cooperate to improve information processing and avoid the pitfall of pulling in different directions when their set points do not appear to match.
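A toy sketch of the dual-control idea, with both mechanisms and all constants invented for illustration (the talk concerns two experimentally measured mechanisms): one slow controller moves a threshold to track the mean rate while a second moves a gain to track the rate variance, together maintaining a functional range of rates rather than a single fixed value.

```python
import numpy as np

rng = np.random.default_rng(7)
target_mean, target_var = 5.0, 4.0
eta_theta, eta_g = 1e-3, 1e-4

theta, g = 0.0, 1.0                     # threshold and gain: the two mechanisms
mean_est, var_est = 0.0, 1.0            # slow running estimates of rate statistics

for _ in range(200_000):
    drive = rng.normal(3.0, 2.0)        # fluctuating synaptic input
    r = max(g * (drive - theta), 0.0)   # rectified rate response
    mean_est += 1e-3 * (r - mean_est)
    var_est += 1e-3 * ((r - mean_est) ** 2 - var_est)
    theta += eta_theta * (mean_est - target_mean)       # mechanism 1: tracks the mean
    g = max(g + eta_g * (target_var - var_est), 0.01)   # mechanism 2: tracks the variance

print(f"rate mean {mean_est:.2f} (target {target_mean}), "
      f"rate variance {var_est:.2f} (target {target_var})")
```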
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allow for the prediction of various behavioral features, using models whose parameters remain fixed over long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
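A minimal sketch of how neural modes and latent dynamics are typically identified in this literature, here with PCA on invented toy data: the leading covariance eigenvectors define the low-dimensional manifold, and projecting population activity onto them yields the latent dynamics.

```python
import numpy as np

rng = np.random.default_rng(8)
T, n_neurons, n_modes = 1000, 80, 3

# Toy population activity generated from 3 smooth latent signals plus noise.
latents = np.cumsum(rng.normal(0.0, 0.1, (T, n_modes)), axis=0)
mixing = rng.normal(0.0, 1.0, (n_modes, n_neurons))
rates = latents @ mixing + rng.normal(0.0, 0.5, (T, n_neurons))

X = rates - rates.mean(axis=0)                 # center, then PCA via SVD
U, S, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = (S**2 / np.sum(S**2))[:n_modes].sum()
latent_dynamics = X @ Vt[:n_modes].T           # time courses on the manifold

print(f"top {n_modes} modes capture {100 * var_explained:.1f}% of the variance")
```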
Learning to perceive with new sensory signals
I will begin by describing recent research taking a new, model-based approach to perceptual development. This approach uncovers fundamental changes in information processing underlying the protracted development of perception, action, and decision-making in childhood. For example, integration of multiple sensory estimates via reliability-weighted averaging – widely used by adults to improve perception – is often not seen until surprisingly late into childhood, as assessed by both behaviour and neural representations. This approach forms the basis for a newer question: the scope for the nervous system to deploy useful computations (e.g. reliability-weighted averaging) to optimise perception and action using newly-learned sensory signals provided by technology. Our initial model system is augmenting visual depth perception with devices translating distance into auditory or vibro-tactile signals. This problem has immediate applications to people with partial vision loss, but the broader question concerns our scope to use technology to tune in to any signal not available to our native biological receptors. I will describe initial progress on this problem, and our approach to operationalising what it might mean to adopt a new signal comparably to a native sense. This will include testing for its integration (weighted averaging) alongside the native senses, assessing the level at which this integration happens in the brain, and measuring the degree of ‘automaticity’ with which new signals are used, compared with native perception.
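The computation behind 'reliability-weighted averaging' is compact enough to state exactly; the sketch below is the generic ideal-observer fusion rule with invented numbers, not the authors' analysis code. Each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone.

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted averaging of two Gaussian cues."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)   # below both input variances
    return mu, var

# E.g. vision estimates a distance of 10.0 (variance 1.0) and a hypothetical
# vibro-tactile depth signal estimates 12.0 (variance 4.0):
print(fuse(10.0, 1.0, 12.0, 4.0))   # -> (10.4, 0.8)
```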
Memory, learning to learn, and control of cognitive representations
Biological neural networks can represent information in the collective action potential discharge of neurons, and store that information in the synaptic connections between the neurons that both comprise the network and govern its function. The strength and organization of synaptic connections adjust during learning, but many cognitive neural systems are multifunctional, making it unclear how continuous activity alternates between transient, discrete cognitive functions, like encoding current information and recollecting past information, without changing the connections among the neurons. This lecture will first summarize our investigations of the molecular and biochemical mechanisms that change synaptic function to persistently store spatial memory in the rodent hippocampus. I will then report on how entorhinal cortex-hippocampus circuit function changes during cognitive training that creates memory, as well as during learning to learn in mice. I will then describe how the hippocampus system operates like a competitive winner-take-all network that, based on the dominance of its current inputs, self-organizes into either the encoding or the recollection information processing mode. We find no evidence that distinct cells are dedicated to those two functions; rather, activation of the hippocampus information processing mode is controlled by a subset of dentate spike events within the network of learning-modified, entorhinal-hippocampal excitatory and inhibitory synapses.
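A competitive winner-take-all switch of this kind can be caricatured with two mutually inhibiting rate units, one driven by external (sensory) input and one by recurrent (stored) input; the dominant input, not a dedicated cell class, determines the mode. A minimal sketch with hypothetical dynamics and parameters:

```python
# Minimal sketch: winner-take-all mode selection between two rate units.
import numpy as np

def wta_mode(external_drive, recurrent_drive, steps=500, dt=0.01):
    """Two mutually inhibiting units; the unit with the stronger input wins."""
    r = np.array([0.1, 0.1])                    # [encoding, recollection] rates
    drive = np.array([external_drive, recurrent_drive])
    w_inh = 2.0                                 # mutual inhibition strength
    for _ in range(steps):
        net = drive - w_inh * r[::-1]           # each unit inhibits the other
        r = r + dt * (-r + np.maximum(net, 0.0))
    return "encoding" if r[0] > r[1] else "recollection"

print(wta_mode(external_drive=1.5, recurrent_drive=0.8))  # novel input dominates
print(wta_mode(external_drive=0.6, recurrent_drive=1.2))  # stored pattern dominates
```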
The collective behavior of the clonal raider ant: computations, patterns, and naturalistic behavior
Colonies of ants and other eusocial insects are superorganisms that perform sophisticated cognitive-like functions at the level of the group. In my talk I will review our efforts to establish the clonal raider ant Ooceraea biroi as a lab model system for the systematic study of the principles underlying collective information processing in ant colonies. I will use results from two separate projects to demonstrate the potential of this model system. In the first, we analyze the foraging behavior of the species, known as group raiding: a swift offensive response of a colony to the detection of potential prey by a scout. Using automated behavioral tracking and detailed analysis, we show that this behavior is closely related to the army ant mass raid, an iconic collective behavior in which hundreds of thousands of ants spontaneously leave the nest to go hunting, and that the evolutionary transition between the two can be explained by a change in colony size alone. In the second project, we study the emergence of a collective sensory response threshold in a colony. The sensory threshold is a fundamental computational primitive, observed across many biological systems. By carefully controlling the sensory environment and the social structure of the colonies, we were able to show that it also appears in a collective context, and that it emerges out of a balance between excitatory and inhibitory interactions between ants. Furthermore, using a mathematical model, we predict that these two interactions can be mapped onto known mechanisms of communication in ants. Finally, I will discuss the opportunities for understanding collective behavior opened up by the development of methods for neuroimaging and neurocontrol of our ants.
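The logic of a threshold emerging from an excitation/inhibition balance can be captured in a mean-field caricature (the functional form and parameters below are assumptions for illustration, not the authors' model): each ant responds probabilistically to the stimulus plus social excitation from responders and inhibition from non-responders, and the colony-level response jumps sharply at a stimulus value where no individual response curve does.

```python
# Minimal sketch: a collective threshold from excitatory/inhibitory interactions.
import numpy as np

def collective_response(stimulus, w_exc=6.0, w_inh=3.0, steps=200):
    f = 0.01                                   # fraction of colony responding
    for _ in range(steps):
        drive = stimulus + w_exc * f - w_inh * (1 - f)
        p = 1.0 / (1.0 + np.exp(-drive))       # individual response probability
        f += 0.1 * (p - f)                     # relax toward self-consistency
    return f

for s in [-4, -3, -2, -1, 0]:                  # slowly increasing stimulus
    print(f"stimulus {s:+d}: responding fraction {collective_response(s):.2f}")
```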
Inhibitory neural circuit mechanisms underlying neural coding of sensory information in the neocortex
Neural codes, such as temporal codes (precisely timed spikes) and rate codes (instantaneous spike firing rates), are believed to be used in encoding sensory information into spike trains of cortical neurons. Temporal and rate codes co-exist in the spike train, and such multiplexed code-carrying spike trains have been shown to be spatially synchronized across multiple neurons in different cortical layers during sensory information processing. Inhibition is suggested to promote such synchronization, but it is unclear whether distinct subtypes of interneurons make different contributions to the synchronization of multiplexed neural codes. To test this, in vivo single-unit recordings from barrel cortex were combined with optogenetic manipulations to determine the contributions of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to the synchronization of precisely timed spike sequences. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons contributing preferentially to feedforward and feedback inhibition, respectively. Overall, these results show that PV and SST interneurons have distinct frequency (rate code)-selective roles in dynamically gating the synchronization of spike times (temporal code) through the preferential recruitment of feedforward and feedback inhibitory circuit motifs. The inhibitory neural circuit mechanisms uncovered here may play critical roles in regulating neural code-based somatosensory information processing in the neocortex.
Correlations, chaos, and criticality in neural networks
The remarkable information-processing properties of biological and artificial neuronal networks alike arise from the interaction of large numbers of neurons. A central quest is thus to characterize their collective states. Moreover, the directed coupling between pairs of neurons and their continuous dissipation of energy drive the dynamics of neuronal networks out of thermodynamic equilibrium. Tools from non-equilibrium statistical mechanics and field theory are thus instrumental to obtain a quantitative understanding. We here present progress with this recent approach [1]. On the experimental side, we show how correlations between pairs of neurons are informative about the dynamics of cortical networks: they are poised near a transition to chaos [2]. Close to this transition, we find prolonged sequential memory for past signals [3]. In the chaotic regime, networks offer representations of information whose dimensionality expands with time. We show how this mechanism aids classification performance [4]. Together these works illustrate the fruitful interplay between theoretical physics, neuronal networks, and neural information processing.
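The transition to chaos can be demonstrated in the textbook random rate network dx/dt = -x + g J tanh(x) with Gaussian coupling J, which becomes chaotic as the gain g crosses 1. This standard model is used here purely for illustration; it is not necessarily the model of refs. [1-4]. A crude perturbation-growth estimate:

```python
# Minimal sketch: edge of chaos in a random rate network (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 200, 0.05, 4000
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random coupling matrix

def perturbation_growth(g, eps=1e-6):
    """Crude Lyapunov-style estimate: >0 suggests chaos, <0 stability."""
    x = rng.normal(0.0, 0.5, N)
    y = x + eps * rng.normal(0.0, 1.0, N) / np.sqrt(N)   # tiny perturbation
    for _ in range(steps):
        x = x + dt * (-x + g * (J @ np.tanh(x)))
        y = y + dt * (-y + g * (J @ np.tanh(y)))
    return np.log(np.linalg.norm(x - y) / eps) / (steps * dt)

for g in [0.5, 1.0, 1.5, 2.0]:
    print(f"g = {g:.1f}: growth exponent ~ {perturbation_growth(g):+.3f}")
```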
The interaction of sensory and motor information to shape neuronal representations in mouse cortical networks
The neurons in our brain never function in isolation; they are organized into complex circuits which perform highly specialized information processing tasks and transfer information through large neuronal networks. The aim of Janelle Pakan's research group is to better understand how neural circuits function during the transformation of information from sensory perception to behavioural output. Importantly, they also aim to further understand the cell-type specific processes that interrupt the flow of information through neural circuits in neurodegenerative disorders with dementia. The Pakan group utilizes innovative neuroanatomical tracing techniques, advanced in vivo two-photon imaging, and genetically targeted manipulations of neuronal activity to investigate the cell-type specific microcircuitry of the cerebral cortex, the macrocircuitry of cortical output to subcortical structures, and the functional circuitry underlying processes of sensory perception and motor behaviour.
Leveraging olfaction to understand how the brain and the body generate social behavior
Courtship is an innate behavior that serves as a model for many types of brain computation, including sensory detection, learning and memory, and internal state modulation. Despite the robustness of the behavior, we have little understanding of the underlying neural circuits and mechanisms. The Stowers lab is leveraging the ability of specialized olfactory cues, pheromones, to specifically activate, and therefore identify and enable study of, courtship circuits in the mouse. We are interested in identifying general circuit principles (specific brain nodes and information flow) that are common to all individuals, in order to additionally study how experience, gender, age, and internal state modulate and personalize behavior. We are solving two parallel sensory-to-motor courtship circuits, which promote social vocal calling and scent marking, to study information processing of behavior as a complete unit instead of restricting focus to a single brain region. We expect that comparing and contrasting the coding logic of these two courtship motor behaviors will begin to shed light on general principles of how the brain senses context, weighs experience, and responds to internal state to ultimately decide on the appropriate action.
Human cognitive biases and the role of dopamine
Cognitive bias is a "subjective reality" that is uniquely created in the brain and affects many of our behaviors. It can lead to what behavioral economics calls irrationality, such as inaccurate judgment and illogical interpretation, but it also has an adaptive aspect in terms of mental hygiene. When such cognitive bias is regarded as a product of information processing in the brain, clarifying its neural mechanisms will play a part in establishing direct relations between the brain and the mind. In my talk, I will introduce our studies investigating the neural and molecular bases of cognitive biases, focusing especially on the role of dopamine.
Contextual modulation of cortical processing by a higher-order thalamic input
Higher-order thalamic nuclei have extensive connections with various cortical areas, yet their functional roles remain poorly understood. In our recent studies, using optogenetic and chemogenetic tools, we manipulated the activity of a higher-order thalamic nucleus, the lateral posterior nucleus (LP, analogous to the primate pulvinar), and its projections, and examined the effects on sensory discrimination and on information processing in the cortex. We found an overall suppressive effect on layer 2/3 pyramidal neurons in the cortex, resulting in enhanced sensory feature selectivity. These mechanisms operate in the contextual modulation of cortical processing, as well as in the cross-modal modulation of sensory processing.
Influence of cortical and neuromodulatory loops on sensory information processing and perception in the mouse olfactory system
Dynamic regulation of information processing in thalamus
Following the energy in cellular information processing
MidsummerBrains - computational neuroscience from my point of view
Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations in both research and application. The basis for successful collaborations is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view on computational neuroscience in theory and application, and share experiences from interdisciplinary projects. This webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.
Information and Decision-Making
In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information, and ultimately its concentration and enaction. It is the consequences of this interplay that will occupy us in this talk. This perception-action loop picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop perspective. On the one hand, information can be treated, to a significant extent, as a resource that is sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; in the last two decades it has been discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information theory and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can come up not only with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about what behaviours, brain structures and dynamics, and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.
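The non-additivity of information has a textbook illustration: for Y = XOR(X1, X2) with independent fair bits, each input alone carries zero information about Y, yet the pair determines Y completely. A minimal sketch estimating the mutual informations empirically (a standard example, not tied to the speaker's specific framework):

```python
# Minimal sketch: Shannon information is not additive (XOR synergy).
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
x1 = rng.integers(0, 2, 100_000)
x2 = rng.integers(0, 2, 100_000)
y = x1 ^ x2                                   # output determined by both inputs

def mutual_information(a, b):
    """I(A;B) in bits, estimated from empirical joint frequencies."""
    joint = Counter(zip(a.tolist(), b.tolist()))
    n = sum(joint.values())
    pa, pb = Counter(a.tolist()), Counter(b.tolist())
    return sum(c / n * np.log2((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in joint.items())

print(f"I(X1;Y)    = {mutual_information(x1, y):.3f} bits")    # ~0
print(f"I(X2;Y)    = {mutual_information(x2, y):.3f} bits")    # ~0
pair = x1 * 2 + x2                                             # joint variable
print(f"I(X1,X2;Y) = {mutual_information(pair, y):.3f} bits")  # ~1
```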
Synaptic, cellular, and circuit mechanisms for learning: insights from electric fish
Understanding learning in neural circuits requires answering a number of difficult questions: (1) What is the computation being performed, and what is its behavioral significance? (2) What are the inputs required for the computation, and how are they represented at the level of spikes? (3) What are the sites and rules governing plasticity, i.e. how do pre- and postsynaptic activity patterns produce persistent changes in synaptic strength? (4) How do network connectivity and dynamics shape the computation being performed? I will discuss joint experimental and theoretical work addressing these questions in the context of the electrosensory lobe (ELL) of weakly electric mormyrid fish.
Networks thinking themselves
Human learners acquire not only disconnected bits of information, but complex interconnected networks of relational knowledge. The capacity for such learning naturally depends on the architecture of the knowledge network itself, and also on the architecture of the computational unit – the brain – that encodes and processes the information. Here, I will discuss emerging work assessing network constraints on the learnability of relational knowledge, and the neural correlates of that learning.
A neural model for hierarchical and counterfactual information processing inspired by human behavior
COSYNE 2023
Dissociation between sensory and goal-directed information processing in prefrontal, visual, and parietal cortices in non-human primates
FENS Forum 2024
Estimating the effect of NMDA receptors on network-level oscillations and information processing
FENS Forum 2024
Layers, Folds, and Semi-Neuronal Information Processing
Neuromatch 5