Neural Computation
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
Eugenio Piasini
Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. This year, one of the PhD scholarships is set aside for joint PhD projects across PhD programs within the Neuroscience department (https://www.sissa.it/research/neuroscience). The selection procedure is now open. The application deadline is 28 March 2024. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure . Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
University of Bristol
The role: The School of Engineering Mathematics and Technology at the University of Bristol is seeking to appoint a Senior Lecturer / Associate Professor whose research encompasses neural computation, machine learning and AI. If you are earlier in your career, the post is also available at Lecturer level. The University of Bristol is an exciting centre for research into the nature of computation and inference in humans, animals and machines. Our computational neuroscience group has made important contributions in, for example, Bayesian approaches to data and inference, biomimetic deep learning, anatomically-constrained neural networks and the theory of neural networks. The University has a long tradition of cross-disciplinary research and Computational Neuroscience is part of both the Bristol Neuroscience Network and the Intelligent Systems Group; we are recognised for our central role in the local neuroscience and machine learning/AI communities. You would be joining the University at an exciting time as we embark on a £500M investment in our new campus and create a home for the UK’s AI Research Resource with the UK’s most powerful supercomputer. We are committed to an inclusive and diverse environment where everyone can thrive. We welcome applicants from all backgrounds, especially those from under-represented communities. We offer flexible working arrangements to help balance professional and personal commitments. What will you be doing? You will conduct research at the interface between computational neuroscience and machine learning and contribute to the associated teaching on our degree programmes and to academic administration. You will take part in our lively research community and join our internationally renowned researchers in producing high-quality research with the potential to secure research funding.
Lukas Groschner
The Groschner lab studies signal processing in the brain using the fruit fly as a model. Our current research focuses on temporal patterns of neural activity that unfold over hundreds of milliseconds up to minutes. Under the umbrella of temporal signal processing, the successful applicant will address one of the following three questions: 1) What ion channel make-up and what circuit motifs allow neurons to delay signals by hundreds of milliseconds? 2) How does visual information accumulate over time to inform behavioural choice? 3) How does a brain construct a memory that is stable during times of immobility, but exquisitely malleable—sensitive to every step—during locomotion? The projects rely on a common set of experimental and computational approaches, which include behavioural assays, recordings and manipulations of neural activity in vivo, transcriptomic profiling of neuronal populations, and biophysically realistic modelling of neurons and circuits. The Groschner lab strives to foster an environment that welcomes, includes, and values people with diverse backgrounds and experiences. We provide all Postdoctoral Fellows with the support, space, and resources they need to pursue their goals and place an emphasis on furthering their careers. They will lead their own projects, contribute to other projects on a collaborative basis (both in the lab and with external collaborators) and may guide PhD students in their research. The ability to work in a team is essential. Responsibilities of the Postdoctoral Fellow include the following: 1) Undertake academic research and develop projects in a timely manner 2) Contribute ideas to the research programme 3) Adapt existing and develop new scientific techniques and experimental protocols 4) Use specialist scientific equipment in a laboratory environment 5) Acquire, analyse, and review scientific data to test and refine working hypotheses 6) Provide guidance and training to less experienced members of the research group 7) Develop ideas for generating research income, gather preliminary data, and present proposals to senior researchers 8) Contribute to the preparation of scientific reports and journal articles 9) Collaborate with colleagues in partner institutions and research groups 10) Attend and participate in academic activities such as lab meetings, journal clubs, wider network meetings, and retreats. These duties are a guide to the work that the post holder will be required to undertake and may change with scientific developments.
Cognitive Neuroscience PhD group @ SISSA
SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience Department (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. The Department is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 28 August 2023. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure . Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
Eugenio Piasini
SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. The Cognitive Neuroscience Department hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. The Department is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models.
N/A
The Department of Biology at Washington University in St. Louis seeks a neuroscientist for a tenure-track position at the Assistant or Associate Professor level. The successful candidate will establish a research program focused on cutting-edge questions in developmental, cellular or systems neuroscience with particular interest in neuroethology, biologically-inspired artificial intelligence, evolution, or neural computation. The successful candidate will: join a vibrant neuroscience community; contribute to advising, mentoring, and teaching; and develop an externally funded and internationally recognized research program.
Mathew Diamond
Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the current admission round are those funded by the 'Fondo Sociale Europeo Plus', accessible through the first link above.
Eugenio Piasini
Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2025. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, neuronal foundations of perceptual experience and learning in various sensory modalities, motivation and intelligence, language, and neural computation. Our research is highly interdisciplinary; our approaches include behavioral, psychophysics, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline for the spring admission round is 20 March 2025 at 1pm CET. Please apply here, and see the admission procedure page for more information. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
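As a concrete illustration of how such methodological choices enter in practice, here is a minimal Python sketch (my own toy construction with synthetic data and assumed measures, not the authors' framework) that scores one model representation against "neural" data in two common ways, linear predictivity and RDM correlation; as the abstract argues, such measures need not agree on which model is better.

# Illustrative sketch (not the authors' framework): two common ways one might
# score a model representation against neural data on the same stimuli.
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_neurons, n_units = 100, 50, 80

neural = rng.standard_normal((n_stim, n_neurons))           # recorded responses (synthetic)
model = neural @ rng.standard_normal((n_neurons, n_units))  # a model representation
model += 0.5 * rng.standard_normal(model.shape)             # plus model-specific variability

def linear_predictivity(X, Y):
    # Fraction of variance in Y explained by a least-squares linear map from X.
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return 1 - resid.var(axis=0).sum() / Y.var(axis=0).sum()

def rdm_correlation(X, Y):
    # Correlation between the two representations' dissimilarity matrices (RSA-style).
    def rdm(Z):
        d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
        return d[np.triu_indices(len(Z), k=1)]
    return np.corrcoef(rdm(X), rdm(Y))[0, 1]

print("linear predictivity:", linear_predictivity(model, neural))
print("RDM correlation    :", rdm_correlation(model, neural))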
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
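For readers unfamiliar with the underlying computation, here is a minimal Python sketch of the classic Hassenstein–Reichardt correlator, the textbook model of this comparison-over-time operation (an illustration only; how and whether T4/T5 circuits implement such a scheme is precisely what this research addresses).

# Classic Hassenstein-Reichardt correlator: compare one photoreceptor signal with
# a delayed copy of its neighbour's signal, then subtract the mirror-symmetric arm
# to obtain a direction-selective response.
import numpy as np

dt, tau = 0.001, 0.05
t = np.arange(0, 2, dt)

def lowpass(sig):
    # First-order low-pass filter, acting as the temporal delay.
    out, y = np.zeros_like(sig), 0.0
    for i, s in enumerate(sig):
        y += dt / tau * (s - y)
        out[i] = y
    return out

def correlator(left, right):
    return lowpass(left) * right - left * lowpass(right)

# A grating drifting to the right: the right photoreceptor sees the signal later.
phase = 2 * np.pi * 5 * t
rightward = correlator(np.sin(phase), np.sin(phase - 0.5))
leftward = correlator(np.sin(phase), np.sin(phase + 0.5))
print("mean response, rightward motion:", rightward.mean())
print("mean response, leftward motion :", leftward.mean())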
The role of sub-population structure in computations through neural dynamics
Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.
From spikes to factors: understanding large-scale neural computations
It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanisms, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above that of spikes, neurons, cell-types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
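A minimal Python sketch of the general idea (my own toy construction: a masked two-layer "dendritic" network in which each branch sees only a small patch of the input and inputs repeat across branches, trained on a synthetic binary task standing in for MNIST; this is not the models from the study).

# Toy "dendritic tree as a sparse multi-layer network" sketch with assumed constraints.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_in, n_branch, patch = 64, 32, 8

# Each branch connects to one random patch of inputs; inputs naturally repeat across branches.
mask = torch.zeros(n_branch, n_in)
for b in range(n_branch):
    idx = torch.randint(0, n_in, (patch,))
    mask[b, idx] = 1.0

class DendriticUnit(nn.Module):
    def __init__(self):
        super().__init__()
        self.w_branch = nn.Parameter(0.1 * torch.randn(n_branch, n_in))
        self.soma = nn.Linear(n_branch, 1)
    def forward(self, x):
        z = torch.relu(x @ (self.w_branch * mask).T)   # sparse branch nonlinearities
        return self.soma(z).squeeze(-1)                # somatic readout

model = DendriticUnit()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(512, n_in)
y = (x[:, :4].sum(dim=1) > 0).float()                  # toy binary task
for _ in range(200):
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", float(loss))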
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Our results obtained in the fruit fly Drosophila demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
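As a toy illustration of why noise geometry matters for across-subject comparisons (my own construction, not the new framework described here), the following Python sketch builds two "subjects" with identical trial-averaged tuning but different noise correlations; a comparison based only on mean responses treats them as essentially identical, whereas their trial-to-trial noise structure clearly differs.

# Two synthetic "subjects": same mean tuning, different noise correlation structure.
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_trials, n_neurons = 10, 200, 40
mean_resp = rng.standard_normal((n_stim, n_neurons))       # shared tuning

def simulate(noise_cov):
    L = np.linalg.cholesky(noise_cov)
    noise = rng.standard_normal((n_stim, n_trials, n_neurons)) @ L.T
    return mean_resp[:, None, :] + noise

iso = np.eye(n_neurons)                                                    # subject A: independent noise
shared = 0.2 * np.ones((n_neurons, n_neurons)) + 0.8 * np.eye(n_neurons)  # subject B: correlated noise
A, B = simulate(iso), simulate(shared)

def rdm(trials):
    m = trials.mean(axis=1)                                # trial-averaged responses
    d = np.linalg.norm(m[:, None] - m[None, :], axis=-1)
    return d[np.triu_indices(n_stim, k=1)]

def noise_corr(trials):
    resid = trials - trials.mean(axis=1, keepdims=True)
    return np.corrcoef(resid.reshape(-1, n_neurons), rowvar=False)

print("RDM correlation between subjects (means only):", np.corrcoef(rdm(A), rdm(B))[0, 1])
off = ~np.eye(n_neurons, dtype=bool)
print("mean |noise correlation|, subject A vs B:",
      np.abs(noise_corr(A)[off]).mean(), np.abs(noise_corr(B)[off]).mean())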
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
Setting network states via the dynamics of action potential generation
To understand neural computation and the dynamics in the brain, we usually focus on the connectivity among neurons. In contrast, the properties of single neurons are often thought to be negligible, at least as far as the activity of networks is concerned. In this talk, I will contradict this notion and demonstrate how the biophysics of action-potential generation can have a decisive impact on network behaviour. Our recent theoretical work shows that, among regularly firing neurons, the somewhat unattended homoclinic type (characterized by a spike onset via a saddle homoclinic orbit bifurcation) particularly stands out: First, spikes of this type foster specific network states - synchronization in inhibitory and splayed-out/frustrated states in excitatory networks. Second, homoclinic spikes can easily be induced by changes in a variety of physiological parameters (like temperature, extracellular potassium, or dendritic morphology). As a consequence, such parameter changes can even induce switches in network states, solely based on a modification of cellular voltage dynamics. I will provide first experimental evidence and discuss functional consequences of homoclinic spikes for the design of efficient pattern-generating motor circuits in insects as well as for mammalian pathologies like febrile seizures. Our analysis predicts an interesting role for homoclinic action potentials as an integral part of brain dynamics in both health and disease.
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features -- event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
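A heavily simplified Python sketch of the event-based, activity-sparse idea (an assumption-laden toy of my own, not the EGRU of the talk): a GRU-like cell whose units only contribute to the recurrent update when their state crosses a threshold.

# Toy event-based recurrent cell: units communicate only via threshold-crossing "events".
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, T = 16, 64, 100
Wx = 0.1 * rng.standard_normal((n_hid, n_in))
Wh = 0.1 * rng.standard_normal((n_hid, n_hid))
theta = 0.5                                    # event threshold

h = np.zeros(n_hid)
events_per_step = []
for t in range(T):
    x = rng.standard_normal(n_in)
    s = (h > theta).astype(float)              # event vector: which units "fire"
    u = 1 / (1 + np.exp(-(Wx @ x)))            # update gate (input-driven, for brevity)
    c = np.tanh(Wx @ x + Wh @ (s * h))         # candidate state sees only event-emitting units
    h = (1 - u) * h + u * c
    h = np.where(s > 0, h - theta, h)          # soft reset after an event
    events_per_step.append(s.mean())

print("mean fraction of active units per step:", np.mean(events_per_step))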
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
Seeing the world through moving photoreceptors - binocular photomechanical microsaccades give fruit fly hyperacute 3D-vision
To move efficiently, animals must continuously work out their x,y,z positions with respect to real-world objects, and many animals have a pair of eyes to achieve this. How photoreceptors actively sample the eyes’ optical image disparity is not understood because this fundamental information-limiting step has not been investigated in vivo over the eyes’ whole sampling matrix. This integrative multiscale study will advance our current understanding of stereopsis from static image disparity comparison to a morphodynamic active sampling theory. It shows how photomechanical photoreceptor microsaccades enable Drosophila superresolution three-dimensional vision and proposes neural computations for accurately predicting these flies’ depth-perception dynamics, limits, and visual behaviors.
Open-source neurotechnologies for imaging cortex-wide neural activity in behaving animals
Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbation of neural activity from multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures such as large bilateral craniotomies across the cortex and skull thinning in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (>300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular and subcellular resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm2 and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions and transitions from wakefulness to sleep.
Interdisciplinary College
The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme "Flexibility" covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.
Synergy of color and motion vision for detecting approaching objects in Drosophila
I am working on color vision in Drosophila, identifying behaviors that involve color vision and understanding the neural circuits supporting them (Longden 2016). I have a long-term interest in understanding how neural computations operate reliably under changing circumstances, be they external changes in the sensory context, or internal changes of state such as hunger and locomotion. On internal state-modulation of sensory processing, I have shown how hunger alters visual motion processing in blowflies (Longden et al. 2014), and identified a role for octopamine in modulating motion vision during locomotion (Longden and Krapp 2009, 2010). On responses to external cues, I have shown how one kind of uncertainty in the motion of the visual scene is resolved by the fly (Saleem, Longden et al. 2012), and I have identified novel cells for processing translation-induced optic flow (Longden et al. 2017). I like working with colleagues who use different model systems, to get at principles of neural operation that might apply in many species (Ding et al. 2016, Dyakova et al. 2015). I like work motivated by computational principles - my background is computational neuroscience, with a PhD on models of memory formation in the hippocampus (Longden and Willshaw, 2007).
The organization of neural representations for control
Cognitive control allows us to think and behave flexibly based on our context and goals. Most theories of cognitive control propose a control representation that enables the same input to produce different outputs contingent on contextual factors. In this talk, I will focus on an important property of the control representation's neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. This tradeoff has important implications for cognitive control. In this talk, I will present initial evidence from fMRI and EEG showing that task representations in the human brain leverage both ends of this tradeoff during flexible behavior.
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli
Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target to study the assumption of linear spatial integration, yet due to their location, buried in the middle of the retina, it is challenging to measure their neural activity. Here, I present bipolar cell data where I ask whether the assumption of spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells. Overall, I demonstrate that simple linear assumptions do not always apply and more complex models are needed to describe what the retina “tells” the brain.
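As background on how spatial (non)linearity is typically probed, here is a small Python sketch of the standard contrast-reversing-grating test (a textbook illustration with an assumed Gaussian receptive field and rectified subunits, not the recordings or protocols of the talk): a linearly summing cell has a spatial null phase and responds at the stimulus frequency, whereas a cell pooling rectified subunits shows frequency doubling at every phase.

# Frequency-doubling test for spatial nonlinearity with contrast-reversing gratings.
import numpy as np

x = np.linspace(0, 1, 200)                     # space across the receptive field
t = np.linspace(0, 1, 400)                     # one second of stimulation
f_t, f_x = 2.0, 1.0                            # temporal (Hz) and spatial frequency
rf = np.exp(-0.5 * ((x - 0.5) / 0.15) ** 2)    # Gaussian receptive field weighting
rf /= rf.sum()

def response(spatial_phase, nonlinear):
    grating = np.sin(2 * np.pi * (f_x * x + spatial_phase))[None, :] \
              * np.sin(2 * np.pi * f_t * t)[:, None]        # contrast reversal in time
    if nonlinear:
        return (np.maximum(grating, 0) * rf).sum(axis=1)    # pool rectified subunits
    return (grating * rf).sum(axis=1)                       # linear spatial summation

def power_at(r, freq):
    spec = np.abs(np.fft.rfft(r - r.mean()))
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    return spec[np.argmin(np.abs(freqs - freq))]

for phase in (0.0, 0.25):                      # 0.0 is the linear cell's null phase here
    for nl in (False, True):
        r = response(phase, nl)
        print(f"phase={phase:.2f} nonlinear={nl}: "
              f"F1={power_at(r, f_t):.2f}  F2={power_at(r, 2 * f_t):.2f}")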
Inhibitory connectivity and computations in olfaction
We use the olfactory system and forebrain of (adult) zebrafish as a model to analyze how relevant information is extracted from sensory inputs, how information is stored in memory circuits, and how sensory inputs inform behavior. A series of recent findings provides evidence that inhibition has not only homeostatic functions in neuronal circuits but makes highly specific, instructive contributions to behaviorally relevant computations in different brain regions. These observations imply that the connectivity among excitatory and inhibitory neurons exhibits essential higher-order structure that cannot be determined without dense network reconstructions. To analyze such connectivity we developed an approach referred to as “dynamical connectomics” that combines 2-photon calcium imaging of neuronal population activity with EM-based dense neuronal circuit reconstruction. In the olfactory bulb, this approach identified specific connectivity among co-tuned cohorts of excitatory and inhibitory neurons that can account for the decorrelation and normalization (“whitening”) of odor representations in this brain region. These results provide a mechanistic explanation for a fundamental neural computation that strictly requires specific network connectivity.
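For readers unfamiliar with the term, the following Python sketch illustrates what "whitening" of population responses means statistically (a conceptual toy using ZCA whitening, not the inhibitory circuit mechanism identified in this work): correlated odor response patterns are decorrelated and normalized.

# ZCA whitening of synthetic odor response patterns across the odor dimension.
import numpy as np

rng = np.random.default_rng(0)
n_odors, n_neurons = 20, 100
shared = rng.standard_normal(n_neurons)        # activation pattern shared across odors
raw = 2.0 * rng.random((n_odors, 1)) * shared + 0.5 * rng.standard_normal((n_odors, n_neurons))

def mean_pattern_correlation(R):
    c = np.corrcoef(R)                         # odor-by-odor pattern correlations
    return c[np.triu_indices(len(R), k=1)].mean()

X = raw - raw.mean(axis=0)
cov = np.cov(X, rowvar=True)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-6)) @ evecs.T
white = W @ X                                  # decorrelated, equal-variance patterns

print("mean pattern correlation before:", mean_pattern_correlation(raw))
print("mean pattern correlation after :", mean_pattern_correlation(white))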
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Through a recurrent neural network (RNN) model and human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
Technologies for large scale cortical imaging and electrophysiology
Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbation of neural activity from multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures such as large bilateral craniotomies across the cortex and skull thinning in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (>300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular and subcellular resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm2 and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions and transitions from wakefulness to sleep.
How do parts influence the whole? Neural computation, from single neuron to behaviour
With the Donders Inclusion Seminars, we celebrate diversity. Please join us on March 17th, 2021 at 15.00 (CET) as we next welcome Dr. Fleur Zeldenrust of the Donders Institute.
Simons-Emory Workshop on Neural Dynamics: What could neural dynamics have to say about neural computation, and do we know how to listen?
Speakers will deliver focused 10-minute talks, with periods reserved for broader discussion on topics at the intersection of neural dynamics and computation. Organizer & Moderator: Chethan Pandarinath (Emory University and Georgia Tech). Speakers & Discussants: Adrienne Fairhall (U Washington), Mehrdad Jazayeri (MIT), John Krakauer (Johns Hopkins), Francesca Mastrogiuseppe (Gatsby / UCL), Abigail Person (U Colorado), Abigail Russo (Princeton), Krishna Shenoy (Stanford), Saurabh Vyas (Columbia).
Residual population dynamics as a window into neural computation
Neural activity in frontal and motor cortices can be considered to be the manifestation of a dynamical system implemented by large neural populations in recurrently connected networks. The computations emerging from such population-level dynamics reflect the interaction between external inputs into a network and its internal, recurrent dynamics. Isolating these two contributions in experimentally recorded neural activity, however, is challenging, limiting the resulting insights into neural computations. I will present an approach to addressing this challenge based on response residuals, i.e. variability in the population trajectory across repetitions of the same task condition. A complete characterization of residual dynamics is well-suited to systematically compare computations across brain areas and tasks, and leads to quantitative predictions about the consequences of small, arbitrary causal perturbations.
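A minimal Python sketch of the residual-dynamics idea (my own toy version with an assumed linear model; the talk's estimation procedure is more complete): subtract the condition-averaged trajectory to isolate trial-to-trial residuals, fit a linear map to those residuals, and inspect its eigenvalues as a summary of the internal dynamics.

# Estimate residual dynamics from simulated trials of a single task condition.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time, n_dim = 200, 50, 10
A_true = 0.9 * np.linalg.qr(rng.standard_normal((n_dim, n_dim)))[0]   # ground-truth dynamics

# Simulate trials: shared input-driven trajectory plus recurrent noise.
condition_mean = np.cumsum(rng.standard_normal((n_time, n_dim)), axis=0)
trials = np.zeros((n_trials, n_time, n_dim))
for k in range(n_trials):
    r = np.zeros(n_dim)
    for t in range(n_time):
        r = A_true @ r + 0.1 * rng.standard_normal(n_dim)
        trials[k, t] = condition_mean[t] + r

# Residuals: deviation of each trial from the condition average.
residuals = trials - trials.mean(axis=0, keepdims=True)

# Fit residual dynamics x_{t+1} ~ A x_t by least squares, pooling trials.
X_t = residuals[:, :-1].reshape(-1, n_dim)
X_tp1 = residuals[:, 1:].reshape(-1, n_dim)
A_fit, *_ = np.linalg.lstsq(X_t, X_tp1, rcond=None)
A_fit = A_fit.T

print("largest |eigenvalue| (true):  ", np.abs(np.linalg.eigvals(A_true)).max())
print("largest |eigenvalue| (fitted):", np.abs(np.linalg.eigvals(A_fit)).max())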
Crowding and the Architecture of the Visual System
Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. I will present our work using visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. We focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. We show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, we show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. We provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, we use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. We psychophysically characterize the temporal structure of this interference and propose a simple computational model. Our results support the idea that perception is a discrete process. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.
Can subjective experience be quantified? Critically examining computational cognitive neuroscience approaches
Computational and cognitive neuroscience techniques have made great strides towards describing the neural computations underlying perceptual inference and decision-making under uncertainty. These tools tell us how and why perceptual illusions occur, which brain areas may represent noisy information in a probabilistic manner, and so on. However, an understanding of the subjective, qualitative aspects of perception remains elusive: qualia, or the personal, intrinsic properties of phenomenal awareness, have remained out of reach of these computational analytic insights. Here, I propose that metacognitive computations, and the subjective feelings that go along with them, give us a solid starting point for understanding subjective experience in general. Specifically, perceptual metacognition possesses ontological and practical properties that provide a powerful and unique opportunity for studying the neural and computational correlates of subjective experience using established tools of computational and cognitive neuroscience. By capitalizing on decades of developments in formal computational model comparisons as applied to the specific properties of perceptual metacognition, we are now in a privileged position to reveal new and exciting insights about how the brain constructs our subjective conscious experiences.
An Algorithmic Barrier to Neural Circuit Understanding
Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand precisely how neural circuit computations mechanistically cause behavior. Establishing this type of causal understanding will require multiple perturbational (e.g. optogenetic) experiments. It has been unclear exactly how many such experiments are needed and how this number scales with the size of the nervous system in question. Here, using techniques from Theoretical Computer Science, we prove that establishing the most extensive notions of understanding requires exponentially many experiments in the number of neurons, in many cases, unless a widely-posited hypothesis about computation is false (i.e. unless P = NP). Furthermore, using data and estimates, we demonstrate that the feasible experimental regime is typically one where the number of experiments performable scales sub-linearly in the number of neurons in the nervous system. This remarkable gulf between the worst-case and the feasible suggests an algorithmic barrier to such an understanding. Determining which notions of understanding are algorithmically tractable to establish in what contexts, thus, becomes an important new direction for investigation. TL;DR: Non-existence of tractable algorithms for neural circuit interrogation could pose a barrier to comprehensively understanding how neural circuits cause behavior. Preprint: https://biorxiv.org/content/10.1101/639724v1/…
Using noise to probe recurrent neural network structure and prune synapses
Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
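An illustrative Python sketch of a local, covariance-based pruning criterion (the scoring rule here is an assumption of mine for illustration, not the provably spectrum-preserving rule of the talk): drive a linear recurrent network with noise, estimate the activity covariance, and prune the synapses whose weight-times-covariance score is smallest, using only quantities available locally at the synapse.

# Noise-driven, local pruning score in a linear recurrent network (toy version).
import numpy as np

rng = np.random.default_rng(0)
n = 80
W = 0.5 / np.sqrt(n) * rng.standard_normal((n, n))    # recurrent weights (stable regime)
np.fill_diagonal(W, 0.0)

# Record noise-driven activity x_{t+1} = W x_t + noise.
T = 20000
x = np.zeros(n)
X = np.zeros((T, n))
for t in range(T):
    x = W @ x + rng.standard_normal(n)
    X[t] = x
C = np.cov(X, rowvar=False)                            # noise-driven covariance of the neurons

# Local score for the synapse from neuron j onto neuron i: |weight| x |covariance|.
score = np.abs(W) * np.abs(C)
keep = score > np.quantile(score[W != 0], 0.5)         # prune the lower-scoring half
W_pruned = np.where(keep, W, 0.0)

ev = lambda M: np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
print("leading |eigenvalue| original:", ev(W)[0])
print("leading |eigenvalue| pruned  :", ev(W_pruned)[0])
print("fraction of synapses kept    :", keep[W != 0].mean())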
Synthesizing Machine Intelligence in Neuromorphic Computers with Differentiable Programming
The potential of machine learning and deep learning to advance artificial intelligence is driving a quest to build dedicated computers, such as neuromorphic hardware that emulate the biological processes of the brain. While the hardware technologies already exist, their application to real-world tasks is hindered by the lack of suitable programming methods. Advances at the interface of neural computation and machine learning showed that key aspects of deep learning models and tools can be transferred to biologically plausible neural circuits. Building on these advances, I will show that differentiable programming can address many challenges of programming spiking neural networks for solving real-world tasks, and help devise novel continual and local learning algorithms. In turn, these new algorithms pave the road towards systematically synthesizing machine intelligence in neuromorphic hardware without detailed knowledge of the hardware circuits.
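A minimal Python/PyTorch sketch of the differentiable-programming idea for spiking networks (a standard surrogate-gradient construction; the talk's algorithms and neuromorphic mappings go well beyond this): the non-differentiable spike threshold is given a smooth pseudo-derivative so a spiking layer can be trained end to end with ordinary autodiff.

# Surrogate-gradient spiking layer trained with standard autodiff.
import torch

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                              # Heaviside spike
    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2       # smooth pseudo-derivative
        return grad_out * surrogate

spike = SpikeFn.apply
torch.manual_seed(0)
W_in = (0.1 * torch.randn(100, 20)).requires_grad_()
W_out = (0.1 * torch.randn(2, 100)).requires_grad_()

x = torch.rand(50, 20)                                      # 50 time steps of input
target = torch.tensor(1)
v = torch.zeros(100)                                        # membrane potentials
out = torch.zeros(2)
for t in range(50):
    v = 0.9 * v + W_in @ x[t]                               # leaky integration
    s = spike(v - 1.0)                                      # spikes where v crosses threshold
    v = v - s * 1.0                                         # reset by subtraction
    out = out + W_out @ s                                   # accumulate readout

loss = torch.nn.functional.cross_entropy(out.unsqueeze(0), target.unsqueeze(0))
loss.backward()
print("input-weight gradient norm:", W_in.grad.norm().item())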
Disentangling the roles of dimensionality and cell categories in neural computations
The description of neural computations currently relies on two competing views: (i) a classical single-cell view that aims to relate the activity of individual neurons to sensory or behavioural variables, and organize them into functional classes; (ii) a more recent population view that instead characterises computations in terms of collective neural trajectories, and focuses on the dimensionality of these trajectories as animals perform tasks. How the two key concepts of functional cell classes and low-dimensional trajectories interact to shape neural computations is, however, not yet understood. Here I will address this question by combining machine-learning tools for training recurrent neural networks with reverse-engineering and theoretical analyses of network dynamics.
Geometry of Neural Computation Unifies Working Memory and Planning
Cognitive tasks typically require the integration of working memory, contextual processing, and planning to be carried out in close coordination. However, these computations are typically studied within neuroscience as independent modular processes in the brain. In this talk I will present an alternative view, that neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during performance of delay tasks. Human behaviour in this task is consistent with the contingency representation, and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and show how contingency representation explains salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel and falsifiable predictions for single-unit and population neural recordings.
Building mechanistic models of neural computations with simulation-based machine learning
Bernstein Conference 2024
Context-Dependent Epoch Codes in Association Cortex Shape Neural Computations
COSYNE 2023
Dynamical Neural Computation in Predictive Sensorimotor Control
COSYNE 2023
Sparse Component Analysis: An interpretable dimensionality reduction tool that identifies building blocks of neural computation
COSYNE 2023
A flexible and interpretable statistical model of distributed neural computation
COSYNE 2025
Sparse autoencoders for mechanistic insights on neural computation in naturalistic experiments
COSYNE 2025
Sparse Component Analysis: An interpretable dimensionality reduction tool that identifies building blocks of neural computation
Neuromatch 5