Cortical Areas
Circuit Mechanisms of Remote Memory
Memories of emotionally salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval, and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping, and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions
Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.
Neuronal population interactions between brain areas
Most brain functions involve interactions among multiple, distinct areas or nuclei. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Using a population approach, we found that interactions between early visual cortical areas (V1 and V2) occur through a low-dimensional bottleneck, termed a communication subspace. In this talk, I will focus on the statistical methods we have developed for studying interactions between brain areas. First, I will describe Delayed Latents Across Groups (DLAG), designed to disentangle concurrent, bi-directional (i.e., feedforward and feedback) interactions between areas. Second, I will describe an extension of DLAG applicable to three or more areas, and demonstrate its utility for studying simultaneous Neuropixels recordings in areas V1, V2, and V3. Our results provide a framework for understanding how neuronal population activity is gated and selectively routed across brain areas.
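The communication-subspace idea can be illustrated with a minimal reduced-rank regression sketch on simulated data (a toy illustration, not the analysis pipeline used in this work): activity in a target area is predicted from a source area, and prediction quality is tracked as the linear map is truncated to its leading dimensions.

```python
# Minimal sketch of a communication-subspace analysis via reduced-rank regression.
# Hypothetical simulated data for two areas; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_v1, n_v2, rank_true = 2000, 60, 40, 3

# Simulate "V1" activity and "V2" activity driven through a low-rank map plus noise.
X = rng.standard_normal((n_samples, n_v1))
W_true = rng.standard_normal((n_v1, rank_true)) @ rng.standard_normal((rank_true, n_v2))
Y = X @ W_true + 0.5 * rng.standard_normal((n_samples, n_v2))

# Full (ridge-regularized) linear map from V1 to V2.
lam = 1.0
W_full = np.linalg.solve(X.T @ X + lam * np.eye(n_v1), X.T @ Y)

# Reduced-rank regression: keep only the top-k predictive dimensions of the map.
U, s, Vt = np.linalg.svd(X @ W_full, full_matrices=False)
for k in (1, 2, 3, 5, 10):
    Y_hat = U[:, :k] * s[:k] @ Vt[:k]          # rank-k prediction of V2 activity
    r2 = 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean(0)) ** 2)
    print(f"rank {k:2d}: R^2 = {r2:.3f}")
# Prediction saturating at a small rank is the signature of a low-dimensional
# communication subspace between the two populations.
```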
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach
The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1), and rapidly recovered (2) according to the requirement of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window on the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).
Transcriptional controls over projection neuron fate diversity
The cerebral cortex is the most evolved structure of the brain and the site for higher cognitive functions. It consists of 6 layers, each composed of specific types of neurons. Interconnectivity between cortical areas is critical for sensory integration and sensorimotor transformation. Inter-areal cortical projection neurons are located in all cortical layers and form a heterogeneous population that sends axons across cortical areas, both within and across hemispheres. How this diversity emerges during development remains largely unknown. Here, we address this question by linking the connectome and transcriptome of developing cortical projection neurons and show distinct maturation paces in neurons with distinct projections, which correlate with the sequential development of sensory and motor functions during the postnatal period.
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG) across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
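For readers unfamiliar with non-linear accumulation in volatile environments, below is a minimal sketch of one commonly used normative update: accumulation of log-likelihood ratios discounted by a hazard-rate nonlinearity. It is illustrative and not necessarily the exact model used in this work.

```python
# Sketch of non-linear evidence accumulation in a changing environment
# (hazard-rate-discounted accumulation of log-likelihood ratios); illustrative only.
import numpy as np

def psi(L, hazard):
    """Discount the prior belief L (log-posterior odds) given the state-change probability."""
    return L + np.log((1 - hazard) / hazard + np.exp(-L)) \
             - np.log((1 - hazard) / hazard + np.exp(L))

def accumulate(llr_samples, hazard):
    """Accumulate log-likelihood ratios with a change-point-aware nonlinearity."""
    L, trace = 0.0, []
    for llr in llr_samples:
        L = psi(L, hazard) + llr   # perfect linear integration is recovered as hazard -> 0
        trace.append(L)
    return np.array(trace)

rng = np.random.default_rng(1)
# Hypothetical evidence stream: weak positive evidence, then a hidden change-point.
llr = np.concatenate([rng.normal(0.3, 1.0, 50), rng.normal(-0.3, 1.0, 50)])
belief = accumulate(llr, hazard=0.05)
print("belief before change:", belief[49].round(2), "| after change:", belief[-1].round(2))
```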
Cognitive experience alters cortical involvement in navigation decisions
The neural correlates of decision-making have been investigated extensively, and recent work aims to identify under what conditions cortex is actually necessary for making accurate decisions. We discovered that mice with distinct cognitive experiences, beyond sensory and motor learning, use different cortical areas and neural activity patterns to solve the same task, revealing past learning as a critical determinant of whether cortex is necessary for decision tasks. We used optogenetics and calcium imaging to study the necessity and neural activity of multiple cortical areas in mice with different training histories. Posterior parietal cortex and retrosplenial cortex were mostly dispensable for accurate performance of a simple navigation-based visual discrimination task. In contrast, these areas were essential for the same simple task when mice were previously trained on complex tasks with delay periods or association switches. Multi-area calcium imaging showed that, in mice with complex-task experience, single-neuron activity had higher selectivity and neuron-neuron correlations were weaker, leading to codes with higher task information. Therefore, past experience is a key factor in determining whether cortical areas have a causal role in decision tasks.
Frontal circuit specialisations for information search and decision making
During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to sample information about their environment, evaluate that information, plan, and decide between different courses of action. What quantities do these circuits compute as information is being sampled and a decision is being made? And how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.
What does the primary visual cortex tell us about object recognition?
Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these representations are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition is derived from low-level visual processing. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
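As a rough illustration of the fixed front-end idea (not the released VOneNet implementation, which additionally models simple and complex cells, response nonlinearities, and neuronal stochasticity), a frozen Gabor filter bank can be prepended to a standard trainable backbone:

```python
# Toy sketch of a fixed, V1-like Gabor front-end ahead of a standard CNN backend.
# Illustrative assumptions only; not the actual VOneNet code.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

def gabor_kernel(size, theta, sigma=2.0, lam=4.0, psi=0.0):
    """Single Gabor patch of shape (size, size) at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

class GaborFrontEnd(nn.Module):
    def __init__(self, n_orient=8, ksize=15):
        super().__init__()
        conv = nn.Conv2d(3, n_orient, ksize, stride=2, padding=ksize // 2, bias=False)
        kernels = np.stack([gabor_kernel(ksize, th)
                            for th in np.linspace(0, np.pi, n_orient, endpoint=False)])
        # Same filter applied to each RGB channel; weights are fixed (never trained).
        conv.weight.data = torch.tensor(kernels, dtype=torch.float32)[:, None].repeat(1, 3, 1, 1)
        conv.weight.requires_grad_(False)
        self.conv, self.relu = conv, nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

# Prepend the fixed front-end to a trainable backend adapted to its channel count.
backend = resnet18(num_classes=10)
backend.conv1 = nn.Conv2d(8, 64, kernel_size=7, stride=2, padding=3, bias=False)
model = nn.Sequential(GaborFrontEnd(), backend)
print(model(torch.randn(1, 3, 224, 224)).shape)   # -> torch.Size([1, 10])
```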
Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex
During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.
NMC4 Short Talk: Different hypotheses on the role of the PFC in solving simple cognitive tasks
Low-dimensional population dynamics can be observed in neural activity recorded from the prefrontal cortex (PFC) of subjects performing simple cognitive tasks. Many studies have shown that recurrent neural networks (RNNs) trained on the same tasks can qualitatively reproduce these state-space trajectories, and have used them as models of how neuronal dynamics implement task computations. The PFC is also viewed as a conductor that organizes communication between cortical areas and provides contextual information. Its role in solving simple cognitive tasks is therefore unclear. Do the low-dimensional trajectories observed in the PFC really correspond to the computations that it performs? Or do they indirectly reflect the computations occurring within the cortical areas projecting to the PFC? To address these questions, we modelled cortical areas with a modular RNN and equipped it with a PFC-like cognitive system. When trained on cognitive tasks, this multi-system brain model reproduces the low-dimensional population responses observed in neuronal activity as well as classical RNNs do. Qualitatively different mechanisms can emerge from the training process when varying details of the architecture such as the time constants. In particular, there is one class of models in which the dynamics of the cognitive system implement the task computations, and another in which the cognitive system is only needed to provide contextual information about the task rule, since task performance is not impaired when the system is prevented from accessing the task inputs. These constitute two different hypotheses about the causal role of the PFC in solving simple cognitive tasks, which could motivate further experiments on the brain.
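A minimal sketch of this kind of modular architecture, with a "cortical" module that receives the task stimuli and a "PFC-like" module that receives only a rule cue, is given below; the module sizes, time constants, and connectivity are illustrative assumptions rather than the model used in this work.

```python
# Minimal two-module rate RNN: a "cortical" module receiving task inputs and a
# "PFC-like" module receiving only a rule/context cue. Illustrative assumptions only.
import torch
import torch.nn as nn

class TwoModuleRNN(nn.Module):
    def __init__(self, n_ctx=64, n_pfc=64, n_in=4, n_rule=2, n_out=2,
                 tau_ctx=10.0, tau_pfc=50.0, dt=1.0):
        super().__init__()
        self.alpha_ctx = dt / tau_ctx      # different time constants per module
        self.alpha_pfc = dt / tau_pfc
        self.W_in = nn.Linear(n_in, n_ctx)        # stimuli -> cortical module only
        self.W_rule = nn.Linear(n_rule, n_pfc)    # rule cue -> PFC-like module only
        self.W_cc = nn.Linear(n_ctx, n_ctx, bias=False)   # within-module recurrence
        self.W_pp = nn.Linear(n_pfc, n_pfc, bias=False)
        self.W_cp = nn.Linear(n_pfc, n_ctx, bias=False)   # PFC -> cortex feedback
        self.W_pc = nn.Linear(n_ctx, n_pfc, bias=False)   # cortex -> PFC feedforward
        self.readout = nn.Linear(n_ctx, n_out)

    def forward(self, stim, rule):
        # stim: (T, batch, n_in); rule: (T, batch, n_rule)
        T, batch = stim.shape[:2]
        x_c = stim.new_zeros(batch, self.W_cc.in_features)
        x_p = stim.new_zeros(batch, self.W_pp.in_features)
        outputs = []
        for t in range(T):
            r_c, r_p = torch.tanh(x_c), torch.tanh(x_p)
            x_c = (1 - self.alpha_ctx) * x_c + self.alpha_ctx * (
                self.W_cc(r_c) + self.W_cp(r_p) + self.W_in(stim[t]))
            x_p = (1 - self.alpha_pfc) * x_p + self.alpha_pfc * (
                self.W_pp(r_p) + self.W_pc(r_c) + self.W_rule(rule[t]))
            outputs.append(self.readout(torch.tanh(x_c)))
        return torch.stack(outputs)                # (T, batch, n_out)

model = TwoModuleRNN()
stim = torch.randn(100, 8, 4)   # hypothetical task stimuli over 100 time steps
rule = torch.randn(100, 8, 2)   # hypothetical rule/context cue
print(model(stim, rule).shape)  # -> torch.Size([100, 8, 2])
```

Training such a model on a task and then lesioning the rule input or freezing the PFC-like module is one way to separate the two hypotheses sketched in the abstract.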
Merging of cues and hunches by the mouse cortex
Many everyday decisions are based on both external cues and internal hunches. How does the brain put these together? We addressed this question in mice trained to make decisions based on sensory stimuli and on past events. While mice made these decisions, we causally probed the roles of cortical areas and recorded from thousands of neurons throughout the brain, with an emphasis on frontal cortex. The results are not what we thought based on textbook notions of how the brain works. This talk is based on work led by Nick Steinmetz, Peter Zatka-Haas, Armin Lak, and Pip Coen, in the laboratory I share with Kenneth Harris.
Do you hear what I see: Auditory motion processing in blind individuals
Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional roles. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by it being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals - relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.
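The distinction between separable and non-separable spatiotemporal filters, central to the argument above, can be made concrete with a toy example: a separable filter factorizes into a spatial and a temporal profile (rank 1 in space-time), whereas a direction-selective filter is oriented in space-time and does not. This is a didactic sketch, not the stimuli or analysis from the studies described.

```python
# Toy illustration of separable vs non-separable (direction-selective) spatiotemporal filters.
import numpy as np

x = np.linspace(-3, 3, 61)          # space
t = np.linspace(0, 3, 31)           # time
X, T = np.meshgrid(x, t)

# Separable filter: product of a spatial profile and a temporal profile.
separable = np.exp(-X**2) * np.exp(-(T - 1.5)**2)

# Non-separable filter: oriented in space-time, hence selective for motion direction.
speed = 1.5
nonseparable = np.exp(-(X - speed * (T - 1.5))**2) * np.exp(-(T - 1.5)**2)

# A separable filter factorizes, so its space-time matrix is (numerically) rank 1.
print("separable rank:", np.linalg.matrix_rank(separable, tol=1e-8))
print("non-separable rank:", np.linalg.matrix_rank(nonseparable, tol=1e-8))
```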
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allow for the prediction of various behavioral features, using models whose parameters remain fixed throughout long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
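The notions of neural modes and latent dynamics can be made concrete with a minimal PCA sketch on simulated population activity (illustrative only; manifold analyses in practice often use factor analysis or related latent-variable models):

```python
# Minimal sketch of identifying neural modes and latent dynamics with PCA.
# Hypothetical data: simulated population activity driven by a few shared signals.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_time, n_modes_true = 100, 500, 4

latents_true = np.cumsum(rng.standard_normal((n_time, n_modes_true)), axis=0)
loading = rng.standard_normal((n_modes_true, n_neurons))
rates = latents_true @ loading + 0.5 * rng.standard_normal((n_time, n_neurons))

# Neural modes = principal axes of the population covariance.
rates_centered = rates - rates.mean(axis=0)
cov = rates_centered.T @ rates_centered / (n_time - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
modes = eigvecs[:, order[:10]]                 # top 10 neural modes
latent_dynamics = rates_centered @ modes       # their time-varying activation

var_explained = eigvals[order][:10] / eigvals.sum()
print("variance captured by top modes:", np.round(var_explained, 3))
# A sharp drop after a handful of modes indicates a low-dimensional manifold.
```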
Frontal circuit specialisations for decision making
During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to plan, evaluate, and decide between different courses of action. But what do these circuits compute as a decision is being made, and how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity from tens to thousands and maybe even tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
Exploring feedforward and feedback communication between visual cortical areas with DLAG
Technological advances have increased the availability of recordings from large populations of neurons across multiple brain areas. Coupling these recordings with dimensionality reduction techniques, recent work has led to new proposals for how populations of neurons can send and receive signals selectively and flexibly. Advancement of these proposals depends, however, on untangling the bidirectional, parallel communication between neuronal populations. Because our current data analytic tools struggle to achieve this task, we have recently validated and presented a novel dimensionality reduction framework: DLAG, or Delayed Latents Across Groups. DLAG decomposes the time-varying activity in each area into within- and across-area latent variables. Across-area variables can be decomposed further into feedforward and feedback components using automatically estimated time delays. In this talk, I will review the DLAG framework. Then I will discuss new insights into the moment-by-moment nature of feedforward and feedback communication between visual cortical areas V1 and V2 of macaque monkeys. Overall, this work lays the foundation for dissecting the dynamic flow of signals across populations of neurons, and how it might change across brain areas and behavioral contexts.
Cortical and subcortical grey matter micro-structure is associated with polygenic risk for schizophrenia
Background: Recent discovery of hundreds of common gene variants associated with schizophrenia has enabled polygenic risk scores (PRS) to be measured in the population. It is hypothesized that normal variation in genetic risk of schizophrenia should be associated with MRI changes in brain morphometry and tissue composition. Methods: We used the largest extant genome-wide association dataset (N = 69,369 cases and N = 236,642 healthy controls) to measure PRS for schizophrenia in a large sample of adults from the UK Biobank (Nmax = 29,878) who had multiple micro- and macro-structural MRI metrics measured at each of 180 cortical areas and seven subcortical structures. Linear mixed-effects models were used to investigate associations between schizophrenia PRS and brain structure at global and regional scales, with correction for multiple comparisons. Results: Micro-structural phenotypes were more robustly associated with schizophrenia PRS than macro-structural phenotypes. Polygenic risk was significantly associated with reduced neurite density index (NDI) at global brain scale, at 149 cortical regions, and in five subcortical structures. Other micro-structural parameters, e.g., fractional anisotropy, that were correlated with NDI were also significantly associated with schizophrenia PRS. Genetic effects on multiple MRI phenotypes were co-located in temporal, cingulate and prefrontal cortical areas, insula, and hippocampus. (Preprint: https://www.medrxiv.org/content/10.1101/2021.02.06.21251073v1)
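A hedged sketch of the kind of regional association test described in the Methods is given below; the variable names, covariates, grouping factor, and correction method are placeholders on simulated data and do not reproduce the study's exact specification.

```python
# Sketch of a regional PRS-vs-microstructure association with linear mixed models.
# Variable names, covariates, and the grouping factor are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_subj, n_regions = 500, 20
subj = pd.DataFrame({
    "PRS": rng.standard_normal(n_subj),
    "age": rng.uniform(45, 80, n_subj),
    "sex": rng.integers(0, 2, n_subj),
    "site": rng.integers(0, 3, n_subj),          # scanning site as random-effect grouping
})

pvals = []
for region in range(n_regions):
    d = subj.copy()
    # Simulated regional NDI with a small PRS effect in half the regions.
    effect = -0.1 if region < n_regions // 2 else 0.0
    d["NDI"] = (0.5 + effect * d["PRS"] - 0.002 * d["age"]
                + 0.05 * d["site"] + 0.05 * rng.standard_normal(n_subj))
    fit = smf.mixedlm("NDI ~ PRS + age + sex", data=d, groups=d["site"]).fit()
    pvals.append(fit.pvalues["PRS"])

# Correct for multiple comparisons across regions (FDR used here as an example).
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{int(reject.sum())} of {n_regions} regions associated with PRS after correction")
```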
Understanding sensorimotor control at global and local scales
The brain is remarkably flexible, and appears to instantly reconfigure its processing depending on what is needed to solve the task at hand: fMRI studies indicate that distal brain areas appear to fluidly couple and decouple with one another depending on behavioral context. But the structural architecture of the brain is composed of long-range axonal projections that are relatively fixed by adulthood. How does the global dynamism evident in fMRI recordings manifest at a cellular level? To bridge the gap between the activity of single neurons and cortex-wide networks, we correlated electrophysiological recordings of individual neurons in primary visual (V1) and retrosplenial (RSP) associational cortex with activity across dorsal cortex, recorded simultaneously using widefield calcium imaging. We found that individual neurons in both cortical areas independently engaged in different distributed cortical networks depending on the animal’s behavioral state, suggesting that locomotion puts cortex into a more sensory-driven mode relevant for navigation.
Time is of the essence: active sensing in natural vision reveals novel mechanisms of perception
In natural vision, active vision refers to the changes in visual input that result from self-initiated eye movements. In this talk, I will present studies showing that stimulus-related activity during active vision differs substantially from that occurring during classical flashed-stimulus paradigms. Our results uncover novel and efficient mechanisms that improve visual perception. More generally, the nervous system appears to engage sensory modulation mechanisms precisely timed to self-initiated stimulus changes, thereby coordinating neural activity across different cortical areas and serving as a general mechanism for the global coordination of visual perception.
Experience dependent changes of sensory representation in the olfactory cortex
Sensory representations are typically thought of as neuronal activity patterns that encode physical attributes of the outside world. However, increasing evidence shows that as animals learn the association between a sensory stimulus and its behavioral relevance, stimulus representations in sensory cortical areas can change. In this seminar I will present recent experiments from our lab showing that activity in the olfactory piriform cortex (PC) of mice encodes not only odor information but also non-olfactory variables associated with the behavioral task. By developing an associative olfactory learning task, in which animals learn to associate a particular context with an odor and a reward, we were able to record the activity of multiple neurons as the animal runs in a virtual reality corridor. By analyzing population activity dynamics using Principal Component Analysis, we find different population trajectories evolving through time that can discriminate aspects of different trial types. Using Generalized Linear Models, we further dissected the contribution of different sensory and non-sensory variables to the modulation of PC activity. Interestingly, the experiments show that variables related to both sensory and non-sensory aspects of the task (e.g., odor, context, reward, licking, sniffing rate, and running speed) differentially modulate PC activity, suggesting that the PC adapts odor processing depending on experience and behavior.
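A minimal sketch of this kind of encoding-model analysis, fitting a Poisson GLM of single-neuron spike counts against sensory and non-sensory regressors, is shown below; the regressor set and simulated data are illustrative placeholders rather than the study's actual design.

```python
# Minimal sketch of a Poisson GLM relating spike counts to task variables.
# Regressor names are placeholders; the study's actual design matrix may differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_bins = 3000

# Hypothetical per-time-bin regressors: odor identity, context, reward, licking, running.
X = np.column_stack([
    rng.integers(0, 2, n_bins),          # odor A vs B
    rng.integers(0, 2, n_bins),          # context
    rng.integers(0, 2, n_bins),          # reward delivered
    rng.poisson(1.0, n_bins),            # lick count in bin
    rng.gamma(2.0, 2.0, n_bins),         # running speed
])
true_w = np.array([0.8, 0.4, 0.0, 0.2, 0.05])
spikes = rng.poisson(np.exp(-1.0 + X @ true_w))   # simulated spike counts

# Fit the GLM and inspect which sensory and non-sensory variables modulate firing.
design = sm.add_constant(X)
fit = sm.GLM(spikes, design, family=sm.families.Poisson()).fit()
names = ["intercept", "odor", "context", "reward", "licking", "running"]
for name, coef, p in zip(names, fit.params, fit.pvalues):
    print(f"{name:10s} beta = {coef:+.3f}   p = {p:.1e}")
```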
Contextual modulation of cortical processing by a higher-order thalamic input
Higher-order thalamic nuclei have extensive connections with various cortical areas, yet their functional roles remain not well understood. In our recent studies, using optogenetic and chemogenetic tools, we manipulated the activity of a higher-order thalamic nucleus, the lateral posterior nucleus (LP, analogous to the primate pulvinar), and its projections, and examined the effects on sensory discrimination and information processing in the cortex. We found an overall suppressive effect on layer 2/3 pyramidal neurons in the cortex, resulting in enhanced sensory feature selectivity. These mechanisms operate in the contextual modulation of cortical processing, as well as in cross-modality modulation of sensory processing.
Understanding sensorimotor control at global and local scales
The brain is remarkably flexible, and appears to instantly reconfigure its processing depending on what is needed to solve the task at hand: fMRI studies indicate that distal brain areas appear to fluidly couple and decouple with one another depending on behavioral context. We investigated how the brain coordinates its activity across areas to inform complex, top-down control behaviors. Animals were trained to perform a novel brain-machine interface task, guiding a visual cursor to a reward zone using activity recorded with widefield calcium imaging. This allowed us to screen for cortical areas implicated in causal neural control of the visual object. Animals could decorrelate normally highly correlated areas to perform the task, and used an explore-exploit search in neural activity space to discover successful strategies. Higher visual and parietal areas were more active during the task in expert animals. Single-unit recordings targeted to these areas indicated that the sensory representation of an object was sensitive to an animal’s subjective sense of controlling it.
A new computational framework for understanding vision in our brain
Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.
Increase in dimensionality and sparsification of neural activity over development across diverse cortical areas
Bernstein Conference 2024
Dynamic causal communication channels between neocortical areas
COSYNE 2022
Fitting recurrent spiking network models to study the interaction between cortical areas
COSYNE 2022
A manifold of heterogeneous vigilance states across cortical areas
COSYNE 2022
Dissecting multi-population interactions across cortical areas and layers
COSYNE 2023
A manifold of heterogeneous vigilance states across cortical areas
COSYNE 2023
Spontaneous neural fluctuations and traveling waves are coordinated topographically across cortical and subcortical areas
COSYNE 2023
Flexibility of signaling across and within visual cortical areas V1 and V2
COSYNE 2025
Structural organization of inhibitory neurons is preserved across species and cortical areas
COSYNE 2025
Frequency- and layer-specific effects of high-frequency STN stimulation on mouse motor cortical areas in vivo
FENS Forum 2024
Hierarchical organization of multivariate spiking statistics across cortical areas
FENS Forum 2024
Impact of inter-areal connectivity on sensory processing in a biophysically-detailed model of two interacting cortical areas
FENS Forum 2024
Learning-associated reconfiguration of neuronal activity within and across neocortical areas
FENS Forum 2024