
Dimensionality Reduction

Topic spotlight

Discover seminars, jobs, and research tagged with dimensionality reduction across World Wide.
15 curated items: 7 Seminars · 7 ePosters · 1 Position
Position · Neuroscience

Peter C. Petersen

Department of Neuroscience, University of Copenhagen
University of Copenhagen, Blegdamsvej 3B, building 33.3.52, 2200 Copenhagen, Denmark
Dec 5, 2025

The postdoc position is focused on the development of BrainSTEM, a web application designed as an electronic lab notebook for describing neurophysiological experiments and as a data-sharing platform for the community. The role involves the design of a standard language for describing experimental neuroscience, semantic search functionality, stronger adoption of the FAIR principles, and stimulating and supporting community uptake. The project is primarily funded by the NIH through the Brain Initiative U19 Oxytocin grant. The project will include occasional travel, e.g., to New York (NYU), to Brain Initiative meetings, to SfN and FENS, and to pilot user labs.

Seminar · Neuroscience

Dimensionality reduction beyond neural subspaces

Alex Cayco Gajic
École Normale Supérieure
Jan 28, 2025

Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including (i) condition-specific high-dimensional neural sequences, and (ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
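
To make the idea of different types of covariability concrete, here is a minimal numpy sketch (not the speaker's sliceTCA implementation): it builds a synthetic trials × neurons × time tensor and compares how much variance a low-rank approximation captures in each of the tensor's three unfoldings, each of which emphasizes a different kind of covarying structure. All array sizes and the synthetic data are assumptions made for illustration.

```python
# Minimal numpy sketch (not sliceTCA itself): compare low-rank structure across the
# three unfoldings ("slicings") of a synthetic trials x neurons x time tensor.
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 40, 60, 100  # trials, neurons, time bins (illustrative sizes)

# Synthetic data: one neuron-by-time pattern shared across trials, plus noise.
time_course = np.sin(np.linspace(0, 2 * np.pi, T))
loadings = rng.normal(size=N)
pattern = np.outer(loadings, time_course)                                      # neurons x time
data = np.stack([pattern + 0.5 * rng.normal(size=(N, T)) for _ in range(K)])  # K x N x T

def low_rank_variance(mat, rank=3):
    """Fraction of variance captured by a rank-`rank` approximation of `mat`."""
    s = np.linalg.svd(mat - mat.mean(axis=0), compute_uv=False)
    return (s[:rank] ** 2).sum() / (s ** 2).sum()

# Each unfolding emphasizes a different kind of covarying structure.
unfoldings = {
    'trial':  data.reshape(K, N * T),
    'neuron': data.transpose(1, 0, 2).reshape(N, K * T),
    'time':   data.transpose(2, 0, 1).reshape(T, K * N),
}
for name, mat in unfoldings.items():
    print(f'{name}-slicing: rank-3 approximation captures '
          f'{low_rank_variance(mat):.2f} of the variance')
```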

Seminar · Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 2, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
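
As a loose numerical companion to the claim that continuously valued factors can arise from seemingly stochastic spiking (this is not the training method described in the talk), the sketch below drives Poisson spiking in a synthetic population with one shared factor and then recovers that factor from smoothed spike counts via PCA. Every parameter is an assumption chosen for illustration.

```python
# Hedged illustration (not the speaker's training framework): a continuous factor
# recovered from discrete, variable Poisson spiking across a population.
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 200, 1000, 0.001                       # neurons, time bins, bin width in s

t = np.arange(T) * dt
factor = np.sin(2 * np.pi * 2 * t)                # one underlying population factor
weights = rng.normal(size=N)                      # each neuron's coupling to the factor
rates = np.exp(3.0 + 0.5 * weights[:, None] * factor[None, :])   # firing rates in Hz
spikes = rng.poisson(rates * dt)                  # discrete, variable spike counts per bin

# Smooth spikes with a causal exponential filter, then project onto the first PC.
tau = 0.05                                        # 50 ms smoothing time constant
kernel = np.exp(-np.arange(0, 0.3, dt) / tau)
smoothed = np.array([np.convolve(s, kernel)[:T] for s in spikes])
smoothed -= smoothed.mean(axis=1, keepdims=True)

_, _, vt = np.linalg.svd(smoothed.T, full_matrices=False)
recovered = smoothed.T @ vt[0]                    # leading population factor estimate
corr = np.corrcoef(recovered, factor)[0, 1]
print(f'|correlation| between recovered and true factor: {abs(corr):.2f}')
```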

Seminar · Neuroscience · Recording

Predictive modeling, cortical hierarchy, and their computational implications

Choong-Wan Woo & Seok-Jun Hong
Sungkyunkwan University
Jan 16, 2023

Predictive modeling and dimensionality reduction of functional neuroimaging data have provided rich information about the representations and functional architectures of the human brain. While these approaches have been effective in many cases, we will discuss how neglecting the internal dynamics of the brain (e.g., spontaneous activity, global dynamics, effective connectivity) and its underlying computational principles may hinder our progress in understanding and modeling brain functions. By reexamining evidence from our previous and ongoing work, we will propose new hypotheses and directions for research that consider both internal dynamics and the computational principles that may govern brain processes.
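
For readers unfamiliar with the kind of analysis referred to in the opening sentence, here is a generic, hedged sketch of predictive modeling combined with dimensionality reduction (scikit-learn PCA followed by ridge regression on synthetic data). It is not the speakers' pipeline, and every dimension and parameter below is an assumption.

```python
# Generic sketch of dimensionality reduction + predictive modeling on synthetic
# "neuroimaging-like" data (not the speakers' analysis).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_subjects, n_voxels = 80, 5000

# Low-dimensional latent structure shared across voxels, plus voxel noise.
latents = rng.normal(size=(n_subjects, 10))
X = latents @ rng.normal(size=(10, n_voxels)) + 0.5 * rng.normal(size=(n_subjects, n_voxels))
y = latents[:, 0] + 0.1 * rng.normal(size=n_subjects)   # behavioral score tied to one latent

model = make_pipeline(PCA(n_components=10), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring='r2')
print('cross-validated R^2 per fold:', scores.round(2))
```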

Seminar · Neuroscience

Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties

SueYeon Chung
NYU/Flatiron
Sep 15, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, measuring geometric properties in neural population data and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
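
The notion of a perceptron's capacity can be previewed with a small numerical experiment: for P random points in N dimensions, the fraction of random labelings that a linear classifier can fit perfectly drops sharply near P ≈ 2N (Cover's classic result for points; the theory in the talk extends such capacity calculations to whole manifolds). The sketch below is an illustration only, and its sizes and trial counts are arbitrary assumptions.

```python
# Empirical sketch of linear-classification capacity for random points (not the
# manifold-capacity theory itself): estimate the fraction of random dichotomies
# of P points in N dimensions that a linear classifier fits perfectly.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
N = 20  # ambient dimensionality

def fraction_separable(P, n_trials=50):
    hits = 0
    for _ in range(n_trials):
        X = rng.normal(size=(P, N))
        y = rng.integers(0, 2, size=P)
        if y.min() == y.max():                    # single-class labeling is trivially fit
            hits += 1
            continue
        clf = LinearSVC(C=1e6, max_iter=50000).fit(X, y)
        hits += clf.score(X, y) == 1.0            # perfect training fit means separable
    return hits / n_trials

for P in (20, 30, 40, 50, 60):
    print(f'P = {P:2d} (P/N = {P/N:.1f}): separable fraction ~ {fraction_separable(P):.2f}')
```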

Seminar · Neuroscience · Recording

Exploring feedforward and feedback communication between visual cortical areas with DLAG

Evren Gokcen
Yu lab, Carnegie Mellon University
Mar 23, 2021

Technological advances have increased the availability of recordings from large populations of neurons across multiple brain areas. Coupling these recordings with dimensionality reduction techniques, recent work has led to new proposals for how populations of neurons can send and receive signals selectively and flexibly. Advancement of these proposals depends, however, on untangling the bidirectional, parallel communication between neuronal populations. Because our current data analytic tools struggle to achieve this task, we have recently validated and presented a novel dimensionality reduction framework: DLAG, or Delayed Latents Across Groups. DLAG decomposes the time-varying activity in each area into within- and across-area latent variables. Across-area variables can be decomposed further into feedforward and feedback components using automatically estimated time delays. In this talk, I will review the DLAG framework. Then I will discuss new insights into the moment-by-moment nature of feedforward and feedback communication between visual cortical areas V1 and V2 of macaque monkeys. Overall, this work lays the foundation for dissecting the dynamic flow of signals across populations of neurons, and how it might change across brain areas and behavioral contexts.
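
To give a feel for the time-delay idea without reproducing DLAG itself, the toy sketch below plants a shared latent signal in two synthetic populations with a known lag and recovers that lag by cross-correlating each area's leading principal component. DLAG instead estimates delays jointly with within- and across-area latent variables; the population sizes, noise level, and 15-bin delay here are assumptions.

```python
# Toy illustration of recovering a feedforward delay between two areas' latents
# (not the DLAG model): cross-correlate the leading PC of each synthetic population.
import numpy as np

rng = np.random.default_rng(4)
T, N1, N2, delay = 2000, 30, 30, 15                # time bins, neurons per area, true lag

shared = np.convolve(rng.normal(size=T + delay), np.ones(20) / 20, mode='same')
shared /= shared.std()
area1 = np.outer(shared[delay:], rng.normal(size=N1)) + 0.5 * rng.normal(size=(T, N1))  # source
area2 = np.outer(shared[:T], rng.normal(size=N2)) + 0.5 * rng.normal(size=(T, N2))      # target (lags)

def leading_pc(X):
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag] over the overlapping samples."""
    if lag >= 0:
        return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
    return np.corrcoef(x[-lag:], y[:len(y) + lag])[0, 1]

pc1, pc2 = leading_pc(area1), leading_pc(area2)
lags = np.arange(-50, 51)
xcorr = [lagged_corr(pc1, pc2, int(l)) for l in lags]
best = lags[int(np.argmax(np.abs(xcorr)))]
print(f'estimated delay from area 1 to area 2: {best} bins (true delay: {delay})')
```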

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
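
The closing point, that nonlinear structure makes linear dimensionality reduction fail, is easy to demonstrate numerically. In the hedged sketch below (not the speaker's analysis), a population is driven by a single latent variable through neuron-specific nonlinear tuning curves, so the intrinsic dimension is one, yet PCA needs many components to explain the variance. Tuning-curve shapes and all sizes are assumptions.

```python
# Sketch: a one-dimensional nonlinear manifold looks high-dimensional to PCA.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_neurons = 2000, 100
latent = rng.uniform(0, 2 * np.pi, size=n_samples)           # single latent variable

# Each neuron responds through its own nonlinear (von Mises-like) tuning curve.
centers = rng.uniform(0, 2 * np.pi, size=n_neurons)
responses = np.exp(np.cos(latent[:, None] - centers[None, :]) / 0.2)

responses = responses - responses.mean(axis=0)
eigvals = np.linalg.svd(responses, compute_uv=False) ** 2
explained = np.cumsum(eigvals) / eigvals.sum()
n_pcs_90 = int(np.searchsorted(explained, 0.90)) + 1
print(f'intrinsic dimension: 1; principal components needed for 90% variance: {n_pcs_90}')
```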

Seminar · Neuroscience · Recording

The geometry of abstraction in artificial and biological neural networks

Stefano Fusi
Columbia University
Jun 10, 2020

The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral prefrontal cortex, anterior cingulate cortex, and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization, which in turn confers cognitive flexibility.
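
A toy version of the cross-condition generalization test described above can be written in a few lines (this is not the authors' analysis; the condition layout, sizes, and noise levels are assumptions). A linear decoder for one binary variable is trained on conditions where a second variable takes one value and tested where it takes the other; the decoder transfers only when the first variable is coded along a consistent direction, i.e., when the geometry is abstract.

```python
# Toy cross-condition generalization test (not the authors' analysis): decode
# variable A after training only on conditions with B = 0, test where B = 1.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
n_neurons, n_trials = 50, 200
axis_a, axis_b = rng.normal(size=(2, n_neurons))      # candidate coding directions

def responses(a, b, abstract):
    """Simulated population responses for condition (a, b)."""
    if abstract:
        mean = a * axis_a + b * axis_b                 # A coded the same way for every B
    else:
        mean = a * (1 - 2 * b) * axis_a + b * axis_b   # A's coding direction flips with B
    return mean + 0.5 * rng.normal(size=(n_trials, n_neurons))

labels = np.repeat([0, 1], n_trials)
for abstract in (True, False):
    X_train = np.vstack([responses(0, 0, abstract), responses(1, 0, abstract)])
    X_test = np.vstack([responses(0, 1, abstract), responses(1, 1, abstract)])
    acc = LinearSVC().fit(X_train, labels).score(X_test, labels)
    print(f'abstract geometry = {abstract}: cross-condition accuracy = {acc:.2f}')
```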

ePoster

A biological model of nonlinear dimensionality reduction

Kensuke Yoshida, Taro Toyoizumi

Bernstein Conference 2024

ePoster

Dimensionality reduction beyond neural subspaces

N. Alex Cayco-Gajic

Bernstein Conference 2024

ePoster

Topological maps are for robust and wiring-efficient dimensionality reduction

Nicola Mendini, Michael Mangan, Stuart Wilson

Bernstein Conference 2024

ePoster

Sparse Component Analysis: An interpretable dimensionality reduction tool that identifies building blocks of neural computation

Joshua Glaser, Andrew Zimnik, Vladislav Susoy, Liam Paninski, John Cunningham, Mark Churchland

COSYNE 2023

ePoster

Dimensionality Reduction in Stroke Patients' Neuroimaging Data

Sebastian Idesis

Neuromatch 5

ePoster

Nonlinear Hebbian plasticity for dimensionality reduction

Ivan Bulygin

Neuromatch 5

ePoster

Sparse Component Analysis: An interpretable dimensionality reduction tool that identifies building blocks of neural computation

Joshua Glaser

Neuromatch 5