Neural Processing
Predictive processing: a circuit approach to psychosis
Predictive processing is a computational framework that aims to explain how the brain processes sensory information by making predictions about the environment and minimizing prediction errors. It can also be used to explain some of the key symptoms of psychotic disorders such as schizophrenia. In my talk, I will provide an overview of our progress in this endeavor.
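The core loop of the framework can be illustrated with a minimal sketch (not the circuit model presented in the talk; all variable names and parameter values below are hypothetical): an internal estimate is updated in proportion to the prediction error between incoming sensory input and the current prediction.

```python
import numpy as np

# Minimal sketch of prediction-error minimization (illustrative only).
# A latent estimate mu is nudged toward the hidden cause of noisy sensory
# input by a fraction of the prediction error on each time step.
rng = np.random.default_rng(0)
true_stimulus = 1.5          # hidden cause of the sensory input (hypothetical)
mu = 0.0                     # the brain's current estimate
learning_rate = 0.1          # gain applied to the prediction error

for t in range(200):
    sensory_input = true_stimulus + rng.normal(scale=0.5)  # noisy observation
    prediction_error = sensory_input - mu                  # surprise signal
    mu += learning_rate * prediction_error                 # reduce the error

print(f"final estimate: {mu:.2f} (true value {true_stimulus})")
```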
Signatures of criticality in efficient coding networks
The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibit signatures of criticality: the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears super-critical or sub-critical, respectively. This result suggests that two influential and previously disparate theories of neural processing optimization, efficient coding and criticality, may be intimately related.
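A minimal sketch of the avalanche analysis mentioned above, assuming the standard binning convention (an avalanche is a run of consecutive non-empty time bins). The raster here is surrogate Poisson data rather than the efficient-coding network's spikes, so it illustrates the pipeline only and will not produce a power law.

```python
import numpy as np

# Sketch of a common criticality diagnostic: collect neuronal "avalanches"
# (maximal runs of time bins containing at least one spike) and inspect the
# heaviness of the tail of their size distribution.
rng = np.random.default_rng(1)
n_neurons, n_bins, rate = 100, 20_000, 0.02           # hypothetical parameters
spikes = rng.random((n_neurons, n_bins)) < rate       # surrogate boolean raster
counts = spikes.sum(axis=0)                           # spikes per time bin

sizes, current = [], 0
for c in counts:
    if c > 0:
        current += c                  # extend the ongoing avalanche
    elif current > 0:
        sizes.append(current)         # an empty bin ends the avalanche
        current = 0

sizes = np.array(sizes)
# Crude power-law check: slope of the log-log size histogram. Real analyses
# would use maximum-likelihood fits and surrogate comparisons instead.
hist, edges = np.histogram(sizes, bins=np.arange(1, sizes.max() + 2))
nonzero = hist > 0
slope = np.polyfit(np.log(edges[:-1][nonzero]), np.log(hist[nonzero]), 1)[0]
print(f"{len(sizes)} avalanches, log-log slope ~= {slope:.2f}")
```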
Context-dependent motion processing in the retina
A critical function of sensory systems is to reliably extract ethologically relevant features from the complex natural environment. A classic model to study feature detection is the direction-selective circuit of the mammalian retina. In this talk, I will discuss our recent work on how visual contexts dynamically influence the neural processing of motion signals in the direction-selective circuit in the mouse retina.
Dynamic spatial processing in insect vision
How does the visual system of insects function in vastly different light intensities, process separate parts of the visual field in parallel, and cope with eye sizes that differ between individuals? This talk will give you the answers we receive from our unique(ly adorable) model system: hawkmoths.
Opponent processing in the expanded retinal mosaic of Nymphalid butterflies
In many butterflies, the ancestral trichromatic insect colour vision, based on UV-, blue- and green-sensitive photoreceptors, is extended with red-sensitive cells. Physiological evidence for red receptors has been missing in nymphalid butterflies, although some species can discriminate red hues well. In eight species from the genera Archaeoprepona, Argynnis, Charaxes, Danaus, Melitaea, Morpho, Heliconius and Speyeria, we found a novel class of green-sensitive photoreceptors that have hyperpolarizing responses to stimulation with red light. These green-positive, red-negative (G+R–) cells are allocated to positions R1/2, normally occupied by UV- and blue-sensitive cells. Spectral sensitivity, polarization sensitivity and temporal dynamics suggest that the red opponent units (R–) are the basal photoreceptors R9, interacting with R1/2 in the same ommatidia via direct inhibitory synapses. We found the G+R– cells exclusively in butterflies with red-shining ommatidia, which contain longitudinal screening pigments. The implementation of the red colour channel with R9 differs from that in pierid and papilionid butterflies, where cells R5–8 are the red receptors. The nymphalid red-green opponent channel, and the potential for tetrachromacy, seem to have been switched on several times during evolution, balancing the cost of neural processing against the value of extended colour information.
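A toy sketch of the G+R– opponent computation described above: excitation from a green-sensitive receptor minus inhibition from a red-sensitive one (the proposed R9 to R1/2 synapse). The Gaussian spectral sensitivities, peak wavelengths and inhibitory weight are chosen purely for illustration, not fitted to the recordings.

```python
import numpy as np

# Model a green-positive, red-negative opponent unit as a weighted difference
# of two photoreceptor spectral sensitivities (illustrative parameters only).
wavelengths = np.arange(300, 701)                      # nm

def sensitivity(peak, width=40.0):
    """Gaussian stand-in for a photoreceptor spectral sensitivity curve."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

green = sensitivity(540)      # R1/2-like green receptor (hypothetical peak)
red = sensitivity(620)        # R9-like red receptor (hypothetical peak)
opponent = green - 0.8 * red  # inhibitory weight chosen for illustration

for wl in (450, 540, 620):
    idx = wl - 300
    print(f"{wl} nm: opponent response {opponent[idx]:+.2f}")
```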
NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks
Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
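A hedged sketch of the kind of training setup described above (not the authors' code): a small recurrent network learns to predict the next element of a predictable input stream while an activity penalty stands in for energy consumption. The sequence, network size and penalty weight are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

# Train a small RNN to predict its next input while penalising unit activity,
# a simple proxy for metabolic cost (all hyperparameters are illustrative).
torch.manual_seed(0)
seq_len, n_in, n_hidden = 50, 1, 32
t = torch.linspace(0, 4 * math.pi, seq_len)
x = torch.sin(t).reshape(1, seq_len, 1)        # predictable sensory stream

rnn = nn.RNN(n_in, n_hidden, batch_first=True)
readout = nn.Linear(n_hidden, n_in)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)
energy_weight = 1e-3                           # strength of the energy penalty

for step in range(500):
    h, _ = rnn(x[:, :-1])                      # hidden activity over time
    pred = readout(h)                          # prediction of the next input
    prediction_loss = ((pred - x[:, 1:]) ** 2).mean()
    energy = h.abs().mean()                    # proxy for energy consumption
    loss = prediction_loss + energy_weight * energy
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"prediction MSE {prediction_loss.item():.4f}, mean |activity| {energy.item():.3f}")
```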
Spiking Neural networks as Universal Function Approximators - SNUFA 2021
Like last year, this online workshop brings together researchers in the field to present their work and discuss ways of translating their findings into a better understanding of neural circuits. Topics include artificial and biologically plausible learning algorithms and the dissection of trained spiking circuits toward understanding neural processing. We have a manageable number of talks with ample time for discussions. This year’s executive committee comprises Chiara Bartolozzi, Sander Bohté, Dan Goodman, and Friedemann Zenke.
Molecular, receptor, and neural bases for chemosensory-mediated sexual and social behavior in mice
For many animals, the sense of olfaction plays a major role in controlling sexual behaviors. Olfaction helps animals to detect mates, discriminate their status, and, ultimately, decide on behavioral outputs such as courtship or aggression. Specific pheromone cues and receptors have provided a useful model to study how sensory inputs are converted into particular behavioral outputs. With the aid of recent advances in tools to record and manipulate genetically defined neurons, our understanding of the neural basis of sexual and social behavior has expanded substantially. I will discuss the current understanding of the neural processing of sex pheromones and the neural circuitry that controls sexual and social behaviors and, ultimately, reproduction, focusing on rodent studies, mainly in mice, and the vomeronasal sensory system.
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
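A minimal sketch of the pattern-classification logic described above, using synthetic data in place of the electrophysiological recordings: a classifier is trained on activity patterns labelled by memory category during learning and then applied to offline-period data to look for spontaneous reactivation. Dimensions, labels and classifier choice are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a decoder on labelled "wake" brain-activity patterns, then score
# "offline" data for evidence of category reactivation (synthetic data only).
rng = np.random.default_rng(2)
n_trials, n_channels = 200, 64                       # hypothetical dimensions
labels = rng.integers(0, 2, n_trials)                # two memory categories
signal = np.outer(labels - 0.5, rng.normal(size=n_channels))
wake_patterns = signal + rng.normal(scale=1.0, size=(n_trials, n_channels))

clf = LogisticRegression(max_iter=1000).fit(wake_patterns, labels)

# "Offline" data here is pure noise, so the evidence should hover at chance.
offline = rng.normal(size=(50, n_channels))
reactivation_evidence = clf.predict_proba(offline)[:, 1]
print(f"mean evidence for category 1 during offline periods: "
      f"{reactivation_evidence.mean():.2f} (chance = 0.50)")
```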
Decoding the neural processing of speech
Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained through applying machine learning to neuroimaging data of humans listening to speech in different types of background noise. In particular, we develop statistical models to relate characteristic features of speech such as pitch, amplitude fluctuations and linguistic surprisal to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to the pitch, as well as at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures allow us to diagnose disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in the diagnosis of brain disorders.
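A minimal sketch of this kind of encoding model, assuming a lagged ridge regression from a single speech feature (an amplitude envelope) to a neural signal. The data are synthetic, and the lag range, regularization strength and noise level are illustrative assumptions rather than the study's parameters.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Relate a time-lagged speech feature to a neural recording with ridge
# regression, the standard encoding-model setup (synthetic data only).
rng = np.random.default_rng(3)
n_samples, n_lags = 5000, 20
envelope = rng.normal(size=n_samples)                 # stand-in speech feature

# Design matrix of time-lagged copies of the feature (np.roll wraps around,
# which is acceptable for this sketch).
X = np.column_stack([np.roll(envelope, lag) for lag in range(n_lags)])
true_trf = np.exp(-np.arange(n_lags) / 5.0)           # assumed response shape
neural = X @ true_trf + rng.normal(scale=2.0, size=n_samples)

model = Ridge(alpha=10.0).fit(X, neural)
predicted = model.predict(X)
corr = np.corrcoef(predicted, neural)[0, 1]
print(f"encoding-model correlation with the neural signal: {corr:.2f}")
```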