Latent Space
Universal function approximation in balanced spiking networks through convex-concave boundary composition
The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has attempted to limit the focus on spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold regions. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellent boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.
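The boundary picture in this abstract can be sketched in generic notation (the symbols x, D, w_i, and T_i below are illustrative placeholders, not necessarily the paper's own):

```latex
\begin{align*}
  x &= D\,r && \text{latent readout of the population spike trains } r\\
  w_i^\top x &\ge T_i && \text{spiking condition for neuron } i\text{: a hyperplane in latent space}\\
  \mathcal{S} &= \bigcap_i \{\, x : w_i^\top x \le T_i \,\}
    && \text{joint subthreshold region: an intersection of half-spaces}
\end{align*}
```

Since an intersection of half-spaces is convex, a population of such thresholds naturally carves out a convex subthreshold region, consistent with the inhibitory population forming a convex boundary in the latent coding space.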
In pursuit of a universal, biomimetic iBCI decoder: Exploring the manifold representations of action in the motor cortex
My group pioneered the development of a novel intracortical brain computer interface (iBCI) that decodes muscle activity (EMG) from signals recorded in the motor cortex of animals. We use these synthetic EMG signals to control Functional Electrical Stimulation (FES), which causes the muscles to contract and thereby restores rudimentary voluntary control of the paralyzed limb. In the past few years, there has been much interest in the fact that information from the millions of neurons active during movement can be reduced to a small number of “latent” signals in a low-dimensional manifold computed from multi-neuron recordings. These signals can provide a stable prediction of the animal’s behavior over periods of many months, and they may also provide the means to implement transfer learning across individuals, an application that could be of particular importance for paralyzed human users. We have begun to examine the representation within this latent space of a broad range of behaviors, including well-learned, stereotyped movements in the lab and more natural movements in the animal’s home cage, meant to better represent a person’s daily activities. We intend to develop an FES-based iBCI that will restore voluntary movement across a broad range of motor tasks without the need for intermittent recalibration. However, the nonlinearities and context dependence within this low-dimensional manifold present significant challenges.
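The idea of reducing many neurons to a few latent signals can be illustrated with a minimal sketch, assuming a toy model in which 100 simulated "neurons" are driven by 2 shared latent signals plus noise (the setup and numbers below are hypothetical, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 100 neurons driven by 2 shared latent signals plus noise.
T, n_neurons, n_latent = 500, 100, 2
t = np.linspace(0, 10, T)
latents = np.column_stack([np.sin(t), np.cos(2 * t)])      # (T, 2) true latents
mixing = rng.normal(size=(n_latent, n_neurons))            # loading matrix
rates = latents @ mixing + 0.1 * rng.normal(size=(T, n_neurons))

# PCA via SVD of the mean-centred data: the top components span the manifold.
X = rates - rates.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = (S**2) / (S**2).sum()

# Project onto the first two components to recover a 2-d latent trajectory.
low_dim = X @ Vt[:2].T                                     # (T, 2)
```

Because the population activity lies near a 2-dimensional manifold, almost all variance is captured by the first two components, which is the sense in which "millions of neurons" reduce to a handful of latent signals.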
Do deep learning latent spaces resemble human brain representations?
In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks, including image and speech recognition, natural language processing (NLP), and playing Go, chess, poker and video games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known analogy QUEEN-WOMAN+MAN=KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.
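The QUEEN-WOMAN+MAN=KING analogy can be sketched with toy 3-d vectors (the values below are invented for illustration; real word embeddings such as word2vec have hundreds of dimensions and are learned from text):

```python
import numpy as np

# Hypothetical 3-d "word vectors", chosen by hand for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.2, 0.8]),
}

def nearest(vec, vocab, exclude=()):
    """Return the vocabulary word whose vector is most cosine-similar to vec."""
    best, best_sim = None, -np.inf
    for w, v in vocab.items():
        if w in exclude:
            continue
        sim = v @ vec / (np.linalg.norm(v) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# QUEEN - WOMAN + MAN lands nearest to KING.
result = nearest(emb["queen"] - emb["woman"] + emb["man"], emb,
                 exclude=("queen", "woman", "man"))
```

With these toy vectors the arithmetic works out exactly; in real latent spaces the resulting vector is only approximately equal to the target, which is why the nearest-neighbour lookup is needed.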
A machine learning way to analyse white matter tractography streamlines / Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI
1. Embedding is all you need: A machine learning way to analyse white matter tractography streamlines - Dr Shenjun Zhong, Monash Biomedical Imaging

Embedding white matter streamlines of various lengths into fixed-length latent vectors enables users to analyse them with general data-mining techniques. However, finding a good embedding scheme is still a challenging task, as existing methods based on spatial coordinates rely on manually engineered features and/or labelled datasets. In this webinar, Dr Shenjun Zhong will discuss his novel deep learning model that identifies a latent space and solves the problem of streamline clustering without needing labelled data. Dr Zhong is a Research Fellow and Informatics Officer at Monash Biomedical Imaging. His research interests are sequence modelling, reinforcement learning and federated learning in the general medical imaging domain.

2. Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI - Dr Kamlesh Pawar, Monash Biomedical Imaging

Magnetic Resonance Imaging (MRI) is a widely used imaging modality in clinics and research. Although MRI is useful, it comes with the overhead of longer scan times compared to other medical imaging modalities. Longer scans also make patients uncomfortable, and even subtle movements during the scan may result in severe motion artifacts in the images. In this seminar, Dr Kamlesh Pawar will discuss how artificial intelligence techniques can reduce scan time and correct motion artifacts. Dr Pawar is a Research Fellow at Monash Biomedical Imaging. His research interests include deep learning, MR physics, MR image reconstruction and computer vision.
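The fixed-length embedding problem in the first talk can be illustrated with a simple classical baseline, not Dr Zhong's deep model: resample each streamline to a fixed number of equally spaced points along its arc length and flatten the result (function name and the choice k=32 are illustrative assumptions):

```python
import numpy as np

def resample_streamline(points, k=32):
    """Resample a streamline of shape (n, 3) to k points equally spaced
    along its arc length, yielding a fixed-length 3*k vector for any n.
    A simple baseline, not the deep embedding discussed in the webinar."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    targets = np.linspace(0.0, arc[-1], k)                  # equally spaced stops
    resampled = np.column_stack(
        [np.interp(targets, arc, points[:, d]) for d in range(points.shape[1])]
    )
    return resampled.reshape(-1)                            # fixed-length 3*k vector

# Streamlines of different lengths map to same-sized vectors,
# ready for generic clustering or data-mining methods.
a = resample_streamline(np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]]))
b = resample_streamline(np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0], [0, 3, 0]]))
```

Coordinate-based descriptors like this are exactly the kind of manually engineered feature the abstract contrasts with a learned latent space, but they make the fixed-length requirement concrete.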
Decoding of Chemical Information from Populations of Olfactory Neurons
Information is represented in the brain by the coordinated activity of populations of neurons. Recent large-scale neural recording methods, in combination with machine learning algorithms, are helping us understand how sensory processing and cognition emerge from neural population activity. This talk will explore the most popular machine learning methods used to extract meaningful low-dimensional representations from higher-dimensional neural recordings. To illustrate the potential of these approaches, Pedro will present his research in which chemical information is decoded from the olfactory system of the mouse for technological applications. Pedro and co-researchers have successfully extracted odor identity and concentration from low-dimensional activity trajectories of olfactory receptor neurons. They have further developed a novel method to identify a shared latent space that allowed decoding of odor information across animals.
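The idea of a shared latent space across animals can be sketched with a toy alignment, assuming two "animals" whose 2-d odor trajectories differ only by an unknown rotation; orthogonal Procrustes alignment (a generic technique, not necessarily the talk's method) recovers the shared space:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shared latent trajectory, e.g. an odor response unfolding over time.
T = 200
shared = np.column_stack([np.sin(np.linspace(0, 6, T)),
                          np.cos(np.linspace(0, 6, T))])

# Animal B sees the same trajectory rotated by an unknown angle, plus noise.
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
animal_a = shared
animal_b = shared @ R_true + 0.01 * rng.normal(size=(T, 2))

# Orthogonal Procrustes: the rotation R minimising ||animal_b @ R - animal_a||
# comes from the SVD of the cross-covariance matrix.
U, _, Vt = np.linalg.svd(animal_b.T @ animal_a)
R = U @ Vt
aligned = animal_b @ R

error = np.mean(np.linalg.norm(aligned - animal_a, axis=1))
```

Once the two animals' trajectories are brought into a common frame, a decoder trained on one animal can in principle be applied to the other, which is the practical payoff of a shared latent space.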
Navigating through the Latent Spaces in Generative Models
Bernstein Conference 2024
Decoding sleep patterns: Unraveling temazepam impact through BENDR encoder and its latent space analysis
FENS Forum 2024