
Neural Systems

Topic spotlight

Discover seminars, jobs, and research tagged with neural systems across World Wide.
28 curated items: 20 seminars, 5 positions, 2 conferences, 1 ePoster.
Updated 1 day ago
Position

Gatsby Computational Neuroscience Unit

Gatsby Computational Neuroscience Unit, UCL
London, UK
Dec 5, 2025

4-Year PhD Programme in Theoretical Neuroscience and Machine Learning. Call for applications! Deadline: 13 November 2022.

The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning; inference, coding and neural dynamics; Bayesian and kernel methods; and deep learning; with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data, and nonparametric hypothesis testing. The unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML), and related UCL departments such as Computer Science, Statistical Science, Artificial Intelligence, the ELLIS Unit at UCL, and Neuroscience, as well as the nearby Alan Turing and Francis Crick Institutes.

Our PhD programme provides rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical and computational neuroscience, with a minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and to systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.

Full funding is available regardless of nationality. The unit also welcomes applicants who have secured or are seeking funding from other sources. To apply, please visit www.ucl.ac.uk/gatsby/study-and-work/phd-programme

PositionComputational Neuroscience

Prof. Dr. Laurenz Wiskott

Institute for Neural Computation, Faculty of Computer Science, Ruhr-University Bochum
Ruhr-University Bochum, Germany
Dec 5, 2025

The Institute for Neural Computation is looking for a postdoc in the field of computational neuroscience. The position is part of the group 'Theory of Neural Systems' and offers the opportunity to develop your own research profile and establish an independent research group. The research topic should be in systems-level computational neuroscience, in particular modeling the visual system, episodic memory, or navigation in mammals. Collaborations with colleagues at the institute are welcome. The tasks include independent research projects and publications, acquiring third-party funding, teaching, supervising student projects and your own PhD projects, and active participation in the local research environment.

PositionComputer Science

Dr. Amir Aly

Center for Robotics and Neural Systems (CRNS), School of Engineering, Computing, and Mathematics, University of Plymouth
University of Plymouth, UK
Dec 5, 2025

The University of Plymouth has several available positions in Computer Science.

PositionComputational Neuroscience

Katharina Wilmes

Institute of Neuroinformatics, UZH and ETH Zürich
Zürich, Switzerland
Dec 5, 2025

We are looking for highly motivated postdocs or PhD students interested in computational neuroscience, specifically in questions concerning the neural circuits underlying perception and learning. The ideal candidate has a strong background in math, physics, or computer science (or equivalent), programming skills (Python), and a strong interest in biological and neural systems. A background in computational neuroscience is ideal, but not mandatory. Our brain maintains an internal model of the world, based on which it can make predictions about sensory information. These predictions are useful for perception and learning in the uncertain and changing environments in which we evolved. The link between high-level normative theories and cellular-level observations of prediction errors and representations under uncertainty is still missing. The lab uses computational and mathematical tools to model cortical circuits and neural networks on different scales.

SeminarNeuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 15, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems at the level of both behavior and neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects, such as details of the architecture of a deep neural network, and methodological choices in a systems-comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case, that is, the scientific hypothesis under investigation, which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.

SeminarNeuroscienceRecording

Signatures of criticality in efficient coding networks

Shervin Safavi
Dayan lab, MPI for Biological Cybernetics
May 2, 2023

The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibits signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears either super-critical or sub-critical, respectively. This result suggests that two influential and previously disparate theories of neural processing optimization (efficient coding and criticality) may be intimately related.
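The avalanche-size analysis used above to diagnose criticality can be sketched in a few lines. The Poisson surrogate series and the crude log-log line fit below are illustrative stand-ins for the network activity and power-law test in the talk, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_sizes(spike_counts):
    """Split a binned spike-count series into avalanches: maximal runs of
    nonzero bins separated by empty bins. Returns each avalanche's total
    spike count (its size)."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

# Toy surrogate: a Poisson spike-count series standing in for network activity.
counts = rng.poisson(0.8, size=10_000)
sizes = avalanche_sizes(counts)

# Crude power-law check: slope of the size histogram on log-log axes.
vals, edges = np.histogram(sizes, bins=np.arange(1, sizes.max() + 2))
mask = vals > 0
slope, _ = np.polyfit(np.log(edges[:-1][mask]), np.log(vals[mask]), 1)
print(f"{len(sizes)} avalanches, log-log slope = {slope:.2f}")
```

For genuinely critical data one would expect a heavy-tailed size distribution; the Poisson surrogate here only exercises the bookkeeping.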

SeminarNeuroscienceRecording

Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity

Thomas Limbacher
TU Graz
Nov 8, 2022

Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task, and demonstrate superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question-answering tasks from a standard benchmark, achieving comparable performance to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attains a near-optimal strategy in a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational and learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built on this principle.
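The core idea of storing and completing patterns through Hebbian plasticity can be caricatured with a rate-based associative store. The learning rate, decay, and the two hand-built patterns below are illustrative assumptions, not the authors' spiking architecture:

```python
import numpy as np

def hebbian_update(W, pre, post, eta=0.1, decay=0.01):
    """One Hebbian step: strengthen weights between co-active pre/post units,
    with a slow multiplicative decay so old associations fade."""
    return (1 - decay) * W + eta * np.outer(post, pre)

# Two disjoint binary patterns stored in an autoassociative weight matrix.
patterns = np.zeros((2, 16))
patterns[0, :5] = 1.0
patterns[1, 8:13] = 1.0
W = np.zeros((16, 16))
for p in patterns:
    W = hebbian_update(W, p, p)

# Recall from a partial cue: only the first 2 of pattern 0's 5 active units.
cue = patterns[0].copy()
active = np.flatnonzero(cue)
cue[active[len(active) // 2:]] = 0.0
drive = W @ cue
recall = (drive > drive.max() * 0.5).astype(float)
overlap = recall @ patterns[0] / patterns[0].sum()
print(f"cue overlap with stored pattern: {overlap:.2f}")
```

With these parameters the partial cue drives exactly the stored pattern's units, so the network completes the pattern from the fragment.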

SeminarNeuroscience

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll
Stanford University
Aug 24, 2022

Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.

Conference

COSYNE 2022

Lisbon, Portugal
Mar 17, 2022

The annual Cosyne meeting provides an inclusive forum for the exchange of empirical and theoretical approaches to problems in systems neuroscience, in order to understand how neural systems function. The main meeting is single-track, with invited talks selected by the Executive Committee and additional talks and posters selected by the Program Committee based on submitted abstracts. The workshops feature in-depth discussion of current topics of interest in a small-group setting.

SeminarNeuroscienceRecording

Cross-modality imaging of the neural systems that support executive functions

Yaara Erez
Affiliate MRC Cognition and Brain Sciences Unit, University of Cambridge
Feb 28, 2022

Executive functions refer to a collection of mental processes, such as attention, planning, and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them to maintain quality of life. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the surrounding areas, in order to preserve function. In many cases, functional mapping is conducted during awake surgery to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition in recent years of the importance of these functions for patient well-being, only a handful of studies have addressed their intraoperative mapping. In this talk, I will present our new approach for mapping executive-function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, aimed at reliably localizing executive-function regions in individuals using fMRI. I will also discuss, more broadly, the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.

SeminarNeuroscienceRecording

Robustness in spiking networks: a geometric perspective

Christian Machens
Champalimaud Center, Lisboa
Feb 15, 2022

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.

SeminarNeuroscienceRecording

NMC4 Short Talk: An optogenetic theory of stimulation near criticality

Brandon Benson
Stanford University
Dec 1, 2021

Recent advances in optogenetics allow for stimulation of neurons with sub-millisecond spike jitter and single neuron selectivity. Already this precision has revealed new levels of cortical sensitivity: stimulating tens of neurons can yield changes in the mean firing rate of thousands of similarly tuned neurons. This extreme sensitivity suggests that cortical dynamics are near criticality. Criticality is often studied in neural systems as a non-equilibrium thermodynamic process in which scale-free patterns of activity, called avalanches, emerge between distinct states of spontaneous activity. While criticality is well studied, it is still unclear what these distinct states of spontaneous activity are and what responses we expect from stimulation of this activity. By answering these questions, optogenetic stimulation will become a new avenue for approaching criticality and understanding cortical dynamics. Here, for the first time, we study the effects of optogenetic-like stimulation on a model near criticality. We study a model of Inhibitory/Excitatory (I/E) Leaky Integrate and Fire (LIF) spiking neurons which display a region of high sensitivity as seen in experiments. We find that this region of sensitivity is, indeed, near criticality. We derive the Dynamic Mean Field Theory of this model and find that the distinct states of activity are asynchrony and synchrony. We use our theory to characterize response to various types and strengths of optogenetic stimulation. Our model and theory predict that asynchronous, near-critical dynamics can have two qualitatively different responses to stimulation: one characterized by high sensitivity, discrete event responses, and high trial-to-trial variability, and another characterized by low sensitivity, continuous responses with characteristic frequencies, and low trial-to-trial variability. 
While both response types may be considered near-critical in model space, networks which are closest to criticality show a hybrid of these response effects.

SeminarNeuroscience

Memory, learning to learn, and control of cognitive representations

André Fenton
New York University
May 6, 2021

Biological neural networks can represent information in the collective action potential discharge of neurons, and store that information in the synaptic connections that both comprise the network and govern its function. The strength and organization of synaptic connections adjust during learning, but many cognitive neural systems are multifunctional, making it unclear how continuous activity alternates between transient, discrete cognitive functions, such as encoding current information and recollecting past information, without changing the connections among the neurons. This lecture will first summarize our investigations of the molecular and biochemical mechanisms that change synaptic function to persistently store spatial memory in the rodent hippocampus. I will then report on how entorhinal cortex-hippocampus circuit function changes during cognitive training that creates memory, as well as during learning to learn in mice. I will then describe how the hippocampal system operates like a competitive winner-take-all network that, based on the dominance of its current inputs, self-organizes into either the encoding or the recollection information-processing mode. We find no evidence that distinct cells are dedicated to these two functions; rather, activation of the hippocampal information-processing mode is controlled by a subset of dentate spike events within the network of learning-modified entorhinal-hippocampal excitatory and inhibitory synapses.

SeminarNeuroscience

Race and the brain: Insights from the neural systems of emotion and decisions

Elizabeth Phelps
Harvard University
Apr 28, 2021

Investigations of the neural systems mediating the processing of social groups defined by race, specifically Black and White race groups in American participants, reveal significant overlap with brain mechanisms involved in emotion. This talk will provide an overview of research on the neuroscience of race and emotion, focusing on implicit race attitudes. Implicit race attitudes are expressed without conscious effort and control, in contrast with explicit, conscious attitudes. In spite of a sharp decline in the expression of explicit, negative attitudes towards outgroup race members over the last half century, negative implicit attitudes persist, even in the face of strong egalitarian goals and beliefs. Early research demonstrated that implicit, but not explicit, negative attitudes towards outgroup race members correlate with blood oxygenation level dependent (BOLD) signal in the amygdala, a region implicated in threat representations as well as emotion's influence on cognition. Building on this initial finding, we demonstrate how learning and decisions may be modulated by implicit race attitudes and involve neural systems mediating emotion, learning, and choice. Finally, we discuss techniques that may diminish the unintentional expression of negative, implicit race attitudes.

SeminarNeuroscienceRecording

Sensory-motor control, cognition and brain evolution: exploring the links

Robert Barton
Durham University
Mar 24, 2021

Drawing on recent findings from evolutionary anthropology and neuroscience, Professor Barton will lead us through the amazing story of the evolution of human cognition. Using statistical, phylogenetic analyses that tease apart the variation associated with different neural systems and with different selection pressures, he will address intriguing questions such as 'Why are there so many neurons in the cerebellum?', 'Is the neocortex the "intelligent" part of the brain?', and 'Why is human recognition of emotional expressions disrupted by transcranial magnetic stimulation of the somatosensory cortex?' Could, as Professor Barton suggests, the cerebellum (modestly concealed beneath the volumetrically dominant neocortex, and largely ignored) turn out to be the Cinderella of the study of brain evolution?

SeminarNeuroscience

Neural circuit parameter variability, robustness, and homeostasis

Astrid Prinz
Emory University
Mar 11, 2021

Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal's life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a 'solution space'. I will describe solution-space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter-space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
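The idea of a solution space with structured parameter correlations can be illustrated with a toy two-parameter model. The two 'conductance' parameters and the viability criterion below are invented for illustration, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

def is_functional(g_na, g_k):
    """Stand-in viability test: accept parameter pairs whose ratio lies in a
    'functional' band, mimicking how circuit output can tolerate correlated
    variation in maximal conductances (hypothetical criterion)."""
    return 0.5 < g_na / g_k < 2.0

# Sample the 2-D parameter space uniformly and keep the solution space.
g_na = rng.uniform(1, 10, 5000)
g_k = rng.uniform(1, 10, 5000)
ok = np.array([is_functional(a, b) for a, b in zip(g_na, g_k)])

# Parameter correlation inside vs. outside the solution space.
r_in = np.corrcoef(g_na[ok], g_k[ok])[0, 1]
r_all = np.corrcoef(g_na, g_k)[0, 1]
print(f"correlation inside solution space: {r_in:.2f}, overall: {r_all:.2f}")
```

Even though the two parameters are sampled independently, restricting to the functional band induces a strong positive correlation between them, which is the signature the abstract describes.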

SeminarNeuroscienceRecording

Receptor Costs Determine Retinal Design

Simon Laughlin
University of Cambridge
Jan 24, 2021

Our group is interested in discovering design principles that govern the structure and function of neurons and neural circuits. We record from well-defined neurons, mainly in flies' visual systems, to measure the molecular and cellular factors that determine relevant measures of performance, such as representational capacity, dynamic range, and accuracy. We combine this empirical approach with modelling to see how the basic elements of neural systems (ion channels, second messenger systems, membranes, synapses, neurons, circuits, and codes) combine to determine performance. We are investigating four general problems. How are circuits designed to integrate information efficiently? How do sensory adaptation and synaptic plasticity contribute to efficiency? How do the sizes of neurons and networks relate to energy consumption and representational capacity? To what extent have energy costs shaped neurons, sense organs, and brain regions during evolution?

SeminarNeuroscience

Neural systems for vocal perception

Catherine Perrodin
Institute of Behavioural Neuroscience, University College London
Jan 11, 2021

For social animals, successfully communicating with others is essential for interactions and survival. My research aims to answer a central question about the neuronal basis of this ability, from the perspective of the listener: how do our brains enable us to communicate with each other? My work develops nonhuman animal models to study the behavioural and neuronal mechanisms underlying the perception of vocal patterns. I will start by providing an overview of my past research characterizing the neuronal-level substrates of voice processing along the primate temporal lobe. I will then focus on my current work on vocal perception in mice, in which I use natural male-female courtship behaviour to evaluate the acoustic dimensions that listeners extract from ultrasonic sequences. Finally, I will discuss ongoing work investigating the neuronal substrates supporting the perception of behaviourally relevant acoustic cues in mouse vocal sequences.

SeminarNeuroscienceRecording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
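The abstract's rule uses only the synaptic weight and the noise-driven covariance of neighboring neurons; the exact rule is not given here, so the sketch below invents a cartoon importance score (|w_ij| scaled by |C_ij|) purely to illustrate those ingredients on a noise-driven linear recurrent network:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30

# Random sparse recurrent weights for a linear network x_{t+1} = W x_t + noise,
# rescaled to spectral radius 0.9 so the dynamics are stable.
W = rng.normal(0, 1 / np.sqrt(n), (n, n)) * (rng.random((n, n)) < 0.5)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()

# Drive the network with white noise and record its activity.
T = 10_000
x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = W @ x + rng.normal(0, 1, n)
    X[t] = x
C = np.cov(X.T)  # noise-driven covariance between neurons

# Cartoon importance score for each synapse; prune the weakest 30%.
score = np.abs(W) * np.abs(C)
keep = score > np.quantile(score[W != 0], 0.3)
W_pruned = W * keep
print(f"synapses kept: {int(keep[W != 0].sum())} of {int((W != 0).sum())}")
```

The talk's actual rule is local and provably spectrum-preserving; this sketch only shows how weight and noise-driven covariance can jointly rank synapses for pruning.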

SeminarNeuroscience

Rapid State Changes Account for Apparent Brain and Behavior Variability

David McCormick
University of Oregon
Sep 16, 2020

Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.

SeminarNeuroscienceRecording

Understanding the visual demands of underwater habitats for aquatic animals used in neuroscience research

Tod Thiele and Dr. Emily Cooper
Tod Thiele: University of Toronto Scarborough; Emily Cooper: University of California, Berkeley
Jul 9, 2020

Zebrafish and cichlids are popular models in visual neuroscience, due to their amenability to advanced research tools and their diverse set of visually guided behaviours. It is often asserted that animals’ neural systems are adapted to the statistical regularities in their natural environments, but relatively little is known about the visual spatiotemporal features in the underwater habitats that nurtured these fish. To address this gap, we have embarked on an examination of underwater habitats in northeastern India and Lake Tanganyika (Zambia), where zebrafish and cichlids are native. In this talk, we will describe the methods used to conduct a series of field measurements and generate a large and diverse dataset of these underwater habitats. We will present preliminary results suggesting that the demands for visually-guided navigation differ between these underwater habitats and the terrestrial habitats characteristic of other model species.

SeminarNeuroscienceRecording

Computational Models of Large-Scale Brain Networks - Dynamics & Function

Jorge Mejias
University of Amsterdam
Apr 21, 2020

Theoretical and computational models of neural systems have been traditionally focused on small neural circuits, given the lack of reliable data on large-scale brain structures. The situation has started to change in recent years, with novel recording technologies and large organized efforts to describe the brain at a larger scale. In this talk, Professor Mejias from the University of Amsterdam will review his recent work on developing anatomically constrained computational models of large-scale cortical networks of monkeys, and how this approach can help to answer important questions in large-scale neuroscience. He will focus on three main aspects: (i) the emergence of functional interactions in different frequency regimes, (ii) the role of balance for efficient large-scale communication, and (iii) new paradigms of brain function, such as working memory, in large-scale networks.

ePoster

Neural Systems Underlying the Implementation of Working Memory Removal Operations

Jacob DeRosa

Neuromatch 5