Topic spotlight
Topic · World Wide

pixel

Discover seminars, jobs, and research tagged with pixel across World Wide.
17 curated items · 14 Seminars · 3 ePosters
Seminar · Neuroscience

Spike train structure of cortical transcriptomic populations in vivo

Kenneth Harris
UCL, UK
Oct 28, 2025

The cortex comprises many neuronal types, which can be distinguished by their transcriptomes: the sets of genes they express. Little is known about the in vivo activity of these cell types, particularly as regards the structure of their spike trains, which might provide clues to cortical circuit function. To address this question, we used Neuropixels electrodes to record layer 5 excitatory populations in mouse V1, then transcriptomically identified the recorded cell types. To do so, we performed a subsequent recording of the same cells using 2-photon (2p) calcium imaging, identifying neurons between the two recording modalities by fingerprinting their responses to a “zebra noise” stimulus and estimating the path of the electrode through the 2p stack with a probabilistic method. We then cut brain slices and performed in situ transcriptomics to localize ~300 genes using coppaFISH3d, a new open source method, and aligned the transcriptomic data to the 2p stack. Analysis of the data is ongoing, and suggests substantial differences in spike time coordination between ET and IT neurons, as well as between transcriptomic subtypes of both these excitatory types.
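
The cross-modal matching step lends itself to a small illustration. Below is a minimal sketch, not the authors' pipeline, of fingerprint-based matching: trial-averaged responses to a shared stimulus are correlated across modalities, and units are paired by an optimal one-to-one assignment. All arrays are synthetic stand-ins.

```python
# Sketch: match units across recording modalities by "fingerprinting" their
# responses to a shared stimulus. Hypothetical inputs: ephys_resp and
# imaging_resp are (n_units x n_timebins) trial-averaged responses to the
# same movie, one row per unit.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_ephys, n_imaging, n_bins = 40, 60, 500
imaging_resp = rng.standard_normal((n_imaging, n_bins))
# Simulate: each ephys unit is a noisy copy of some imaging unit's response.
true_match = rng.choice(n_imaging, size=n_ephys, replace=False)
ephys_resp = imaging_resp[true_match] + 0.5 * rng.standard_normal((n_ephys, n_bins))

# Pearson correlation between every ephys/imaging pair of fingerprints.
ez = (ephys_resp - ephys_resp.mean(1, keepdims=True)) / ephys_resp.std(1, keepdims=True)
iz = (imaging_resp - imaging_resp.mean(1, keepdims=True)) / imaging_resp.std(1, keepdims=True)
corr = ez @ iz.T / n_bins

# Globally optimal one-to-one assignment (maximize total correlation).
rows, cols = linear_sum_assignment(-corr)
print(f"matching accuracy: {(cols == true_match).mean():.2f}")
```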

Seminar · Neuroscience

Neuronal population interactions between brain areas

Byron Yu
Carnegie Mellon University
Dec 7, 2023

Most brain functions involve interactions among multiple, distinct areas or nuclei. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Using a population approach, we found that interactions between early visual cortical areas (V1 and V2) occur through a low-dimensional bottleneck, termed a communication subspace. In this talk, I will focus on the statistical methods we have developed for studying interactions between brain areas. First, I will describe Delayed Latents Across Groups (DLAG), designed to disentangle concurrent, bi-directional (i.e., feedforward and feedback) interactions between areas. Second, I will describe an extension of DLAG applicable to three or more areas, and demonstrate its utility for studying simultaneous Neuropixels recordings in areas V1, V2, and V3. Our results provide a framework for understanding how neuronal population activity is gated and selectively routed across brain areas.
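
DLAG itself is too involved for a short example, but the core notion of a communication subspace, a low-dimensional linear channel between areas, can be illustrated with reduced-rank regression: predict one area's population activity from another's through a rank-constrained map. This is a generic sketch on synthetic data, not the speaker's method.

```python
# Reduced-rank regression sketch of a "communication subspace": V2 activity
# is predicted from V1 activity through a low-rank linear map. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_v1, n_v2, true_rank = 2000, 30, 25, 3
X = rng.standard_normal((n_trials, n_v1))                 # V1 population activity
W = rng.standard_normal((n_v1, true_rank)) @ rng.standard_normal((true_rank, n_v2))
Y = X @ W + 0.5 * rng.standard_normal((n_trials, n_v2))   # V2 activity

# Full least-squares fit, then truncate the SVD of its predictions to rank k.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
for k in (1, 3, 10):
    Y_hat = U[:, :k] * s[:k] @ Vt[:k]        # rank-k prediction of Y
    r2 = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
    print(f"rank {k:2d}: R^2 = {r2:.3f}")
```

A rank-3 map captures essentially all the predictable variance here, which is the signature of a low-dimensional bottleneck.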

Seminar · Open Source · Recording

OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging

Darren Roblyer
Boston University
Jun 27, 2023

Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel-by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.
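
The chromophore-extraction step rests on Beer's law: absorption at each wavelength is a linear combination of chromophore concentrations weighted by their extinction coefficients, so concentrations follow from a per-pixel least-squares solve. A sketch with placeholder extinction values; a real system uses calibrated constants.

```python
# Sketch of the chromophore-fitting step SFDI enables: given per-pixel
# absorption mu_a at several wavelengths, solve Beer's law
# mu_a(lambda) = sum_i epsilon_i(lambda) * c_i for concentrations c_i.
import numpy as np

wavelengths = np.array([690.0, 730.0, 850.0])   # nm, assumed illumination bands
# Rows: wavelengths; columns: [HbO2, Hb] extinction (placeholder values).
E = np.array([[0.3, 2.1],
              [0.4, 1.3],
              [1.1, 0.8]])

# Hypothetical measured absorption at one pixel (same units as E * c).
mu_a = np.array([1.9, 1.5, 1.6])

# Non-negative least squares would be more robust; plain lstsq for brevity.
c, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
hbo2, hb = c
print(f"HbO2 = {hbo2:.2f}, Hb = {hb:.2f}, sO2 = {hbo2 / (hbo2 + hb):.2%}")
```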

Seminar · Neuroscience

A specialized role for entorhinal attractor dynamics in combining path integration and landmarks during navigation

Malcolm Campbell
Harvard
Mar 8, 2023

During navigation, animals estimate their position using path integration and landmarks. In a series of two studies, we used virtual reality and electrophysiology to dissect how these inputs combine to generate the brain’s spatial representations. In the first study (Campbell et al., 2018), we focused on the medial entorhinal cortex (MEC) and its set of navigationally-relevant cell types, including grid cells, border cells, and speed cells. We discovered that attractor dynamics could explain an array of initially puzzling MEC responses to virtual reality manipulations. This theoretical framework successfully predicted both MEC grid cell responses to additional virtual reality manipulations, as well as mouse behavior in a virtual path integration task. In the second study (Campbell*, Attinger* et al., 2021), we asked whether these principles generalize to other navigationally-relevant brain regions. We used Neuropixels probes to record thousands of neurons from MEC, primary visual cortex (V1), and retrosplenial cortex (RSC). In contrast to the prevailing view that “everything is everywhere all at once,” we identified a unique population of MEC neurons, overlapping with grid cells, that became active with striking spatial periodicity while head-fixed mice ran on a treadmill in darkness. These neurons exhibited unique cue-integration properties compared to other MEC, V1, or RSC neurons: they remapped more readily in response to conflicts between path integration and landmarks; they coded position prospectively as opposed to retrospectively; they upweighted path integration relative to landmarks in conditions of low visual contrast; and as a population, they exhibited a lower-dimensional activity structure. Based on these results, our current view is that MEC attractor dynamics play a privileged role in resolving conflicts between path integration and landmarks during navigation. Future work should include carefully designed causal manipulations to rigorously test this idea, and expand the theoretical framework to incorporate notions of uncertainty and optimality.

Seminar · Neuroscience · Recording

Genetic-based brain machine interfaces for visual restoration

Serge Picaud
Institut de la Vision, Paris
Apr 12, 2022

Visual restoration, with its demand for high pixel counts and high refresh rates, is arguably the greatest challenge for brain-machine interfaces. In recent years, we have brought retinal prostheses and optogenetic therapy to successful clinical trials. Concerning visual restoration at the cortical level, prostheses have shown efficacy only for limited periods of time and with limited pixel numbers. We are investigating the potential of sonogenetics to develop a non-contact brain-machine interface allowing long-lasting activation of the visual cortex. The presentation will introduce our genetic-based brain-machine interfaces for visual restoration at the retinal and cortical levels.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1

Dori Grijseels (they/them)
University of Sussex
Dec 1, 2021

Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp-wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp-wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the hippocampus of mice during a virtual reality object location recognition task. During locomotion, the cells show a similar level of activity to control cells, but their activity increases during rest, when this population shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation between this low-frequency oscillation (LFO) and the activity of the hippocampal cells in the 0.1-0.4 Hz band during rest than during locomotion. However, the relationship between the LFO and calcium signals varied between electrodes, suggesting a localized effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to explore this further. These data revealed localised low-frequency oscillations in CA1 and DG during rest. Overall, we show a novel population of hippocampal cells, and a novel oscillatory band of activity in the hippocampus during rest.
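
As a small illustration of the band isolation involved, the sketch below band-passes a synthetic LFP and calcium trace at 0.1-0.4 Hz and correlates them. The sampling rate and filter design are assumptions, not the study's parameters.

```python
# Isolate the 0.1-0.4 Hz band and correlate two synthetic signals.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                    # Hz, e.g. a two-photon frame rate
t = np.arange(0, 600, 1 / fs)                # ten minutes of samples
rng = np.random.default_rng(2)
lfp = np.sin(2 * np.pi * 0.25 * t) + rng.standard_normal(t.size)
calcium = np.sin(2 * np.pi * 0.25 * t + 0.3) + rng.standard_normal(t.size)

# Zero-phase band-pass at 0.1-0.4 Hz.
b, a = butter(2, [0.1, 0.4], btype="bandpass", fs=fs)
lfp_slow = filtfilt(b, a, lfp)
ca_slow = filtfilt(b, a, calcium)

r = np.corrcoef(lfp_slow, ca_slow)[0, 1]
print(f"0.1-0.4 Hz band correlation: r = {r:.2f}")
```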

Seminar · Neuroscience · Recording

NMC4 Short Talk: Stretching and squeezing of neuronal log firing rate distribution by psychedelic and intrinsic brain state transitions

Bradley Dearnly
University of Sheffield
Dec 1, 2021

How psychedelic drugs change the activity of cortical neuronal populations is not well understood. It is also not clear which changes are specific to the transition into the psychedelic brain state and which are shared with other brain state transitions. Here, we used Neuropixels probes to record from large populations of neurons in the prefrontal cortex of mice given the psychedelic drug TCB-2. The primary effect of drug administration was a stretching of the distribution of log firing rates of the recorded population. This phenomenon was previously observed across transitions between sleep and wakefulness, which prompted us to examine how common it is. We found that modulation of the width of the log-rate distribution of a neuronal population occurred in multiple areas of the cortex and in the hippocampus even in awake drug-free mice, driven by intrinsic fluctuations in their arousal level. Arousal, however, did not explain the stretching of the log-rate distribution by TCB-2. In both psychedelic and intrinsically occurring brain state transitions, the stretching or squeezing of the log-rate distribution of an entire neuronal population was the result of a closer overlap between the log-rate distributions of the upregulated and downregulated subpopulations in one brain state compared to the other. Often, we also observed that the log-rate distribution of the downregulated subpopulation was stretched, whereas that of the upregulated subpopulation was squeezed. In both subpopulations, the stretching and squeezing were a signature of a greater relative impact of the brain state transition on the rates of slow-firing neurons. These findings reveal a generic pattern of reorganisation of neuronal firing rates by different kinds of brain state transitions.
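
The central measurement, the width of a population's log firing-rate distribution, is simple to state in code. A toy sketch in which one brain state multiplicatively stretches log rates:

```python
# Width of the log firing-rate distribution in two simulated brain states.
import numpy as np

rng = np.random.default_rng(3)
log_rates_a = rng.normal(loc=0.0, scale=1.0, size=500)   # state A, log Hz
log_rates_b = 1.4 * log_rates_a                          # state B: stretched

for name, lr in [("state A", log_rates_a), ("state B", log_rates_b)]:
    print(f"{name}: width (SD of log rate) = {lr.std():.2f}")
```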

Seminar · Neuroscience · Recording

StereoSpike: Depth Learning with a Spiking Neural Network

Ulysse Rancon
University of Bordeaux
Nov 1, 2021

Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, which we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground truth, which was used to train StereoSpike in a supervised manner using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction (the depth of each pixel) from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, this is the first time that such a large-scale regression problem has been solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be implemented efficiently on neuromorphic chips, opening the door to low-power real-time embedded systems.
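
Training an SNN with surrogate gradient descent hinges on one trick: a hard threshold in the forward pass, a smooth surrogate derivative in the backward pass. Below is a generic PyTorch sketch of a leaky integrate-and-fire layer trained this way; the fast-sigmoid surrogate and all parameters are illustrative, not StereoSpike's.

```python
# Surrogate-gradient spiking layer: Heaviside spike forward, smooth backward.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                          # hard threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2   # fast-sigmoid derivative
        return grad_out * surrogate

spike = SurrogateSpike.apply

def lif(inputs, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire layer unrolled over time; inputs: (T, batch, units)."""
    v = torch.zeros_like(inputs[0])
    spikes = []
    for x in inputs:
        v = beta * v + x                     # leaky membrane integration
        s = spike(v - threshold)
        v = v - s * threshold                # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

x = torch.randn(50, 4, 8, requires_grad=True)
out = lif(x)
out.sum().backward()                         # gradients flow via the surrogate
print(out.mean().item(), x.grad.abs().mean().item())
```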

Seminar · Neuroscience · Recording

Large-scale approaches for distributed circuits underlying visual decision-making

Nick Steinmetz
University of Washington
Oct 10, 2021

Mammalian vision and visually-guided behavior rely on neurons distributed across diverse brain regions. In this talk I will describe our efforts to create tools that allow us to measure activity from these distributed circuits (Neuropixels probes for large-scale electrophysiology) and our findings from studies deploying these tools to study visual detection and discrimination in mice.

Seminar · Open Source · Recording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center for Neurodegenerative Diseases (DZNE), Bonn
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting these structures requires complex decision-making that is often impossible with current image analysis software and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on deep convolutional neural networks (DNNs) is currently emerging as the most promising technique for solving such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the emerging tools based on deep learning are still rarely used, and there is little community experience of how much training data must be collected to obtain reasonable predictions with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command line interface and is designed purely for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. From our experience, a surprisingly small amount of sparse label data is needed to train a sufficiently well-performing classifier for typical bioimaging applications. Not least because of this, YAPiC has become the "standard weapon" for our core facility to detect objects in hard-to-segment images. We will present some use cases, such as cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.
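
The sparse-label training idea, computing the loss only at pixels the annotator actually labeled, can be shown generically. A PyTorch sketch (not YAPiC's implementation) using cross-entropy's ignore_index to skip unlabeled pixels:

```python
# Train a pixel classifier from sparse annotations: unlabeled pixels are
# marked -1 and excluded from the loss entirely.
import torch
import torch.nn.functional as F

n_classes = 3
logits = torch.randn(2, n_classes, 64, 64, requires_grad=True)  # model output
# Sparse annotation: -1 marks unlabeled pixels, 0..n_classes-1 are labels.
labels = torch.full((2, 64, 64), -1, dtype=torch.long)
labels[0, 10:20, 10:20] = 1                 # a small labeled patch
labels[1, 30:40, 5:15] = 2

# ignore_index makes the loss (and gradients) depend only on labeled pixels.
loss = F.cross_entropy(logits, labels, ignore_index=-1)
loss.backward()
print(f"loss over labeled pixels only: {loss.item():.3f}")
```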

Seminar · Neuroscience

Advancements in multielectrode recording techniques in neurophysiology: from wire probes to Neuropixels

Sylvia Schröder
University of Sussex
Aug 11, 2021

Join us for a comprehensive introduction to multielectrode recording technologies for in vivo neurophysiology. Whether you are new to the field or have experience with one type of technology, this webinar will provide you with information about a variety of technologies, with a main focus on Neuropixels probes. Dr Kris Schoepfer, US Product Specialist at Scientifica, will provide an overview of multielectrode technologies available to record from one or more brain areas simultaneously, including: DIY multielectrode probes; Tetrodes / Hyperdrives; Silicon probes; Neuropixels. Dr Sylvia Schröder, University of Sussex, will delve deeper into the advantages of Neuropixels, highlighting the value of channel depth and the types of new biological insights that can be explored thanks to the advancements this technology brings. Presenting exciting data from the optic tract and superior colliculus, Sylvia will also discuss how Neuropixels recordings can be combined with optogenetics, and how histology can be used to identify the location of probes.

Seminar · Psychology

Exploring perceptual similarity and its relation to image-based spaces: an effect of familiarity

Rosyl Somai
University of Stirling
Aug 11, 2021

One challenge in exploring the internal representation of faces is the lack of controlled stimuli transformations. Researchers are often limited to verbalizable transformations in the creation of a dataset. An alternative approach to verbalization for interpretability is finding image-based measures that allow us to quantify image transformations. In this study, we explore whether PCA could be used to create controlled transformations to a face by testing the effect of these transformations on human perceptual similarity and on computational differences in Gabor, Pixel and DNN spaces. We found that perceptual similarity and the three image-based spaces are linearly related, almost perfectly in the case of the DNN, with a correlation of 0.94. This provides a controlled way to alter the appearance of a face. In experiment 2, the effect of familiarity on the perception of multidimensional transformations was explored. Our findings show that there is a positive relationship between the number of components transformed and both the perceptual similarity and the same three image-based spaces used in experiment 1. Furthermore, we found that familiar faces are rated more similar overall than unfamiliar faces. That is, a change to a familiar face is perceived as making less difference than the exact same change to an unfamiliar face. The ability to quantify, and thus control, these transformations is a powerful tool in exploring the factors that mediate a change in perceived identity.
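
The PCA-based transformation itself is straightforward to sketch: fit PCA on a set of face images, shift a chosen number of components, and reconstruct. The code below uses random arrays in place of real face images; the component shifts and sizes are arbitrary.

```python
# Controlled image transformations via PCA: perturb the first k components
# and measure the resulting pixel-space change.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
faces = rng.standard_normal((200, 64 * 64))      # 200 flattened 64x64 "images"
pca = PCA(n_components=50).fit(faces)

face = faces[0]
coords = pca.transform(face[None])[0]            # position in PCA space

for k in (1, 5, 20):
    shifted = coords.copy()
    shifted[:k] += 2.0 * np.sqrt(pca.explained_variance_[:k])  # 2 SD shift
    new_face = pca.inverse_transform(shifted[None])[0]
    print(f"k={k:2d}: pixel-space L2 distance = {np.linalg.norm(new_face - face):.1f}")
```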

Seminar · Neuroscience · Recording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jun 30, 2021

There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
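
The alignment step, gradient descent on a graph-matching objective, can be illustrated with a soft assignment matrix over toy similarity graphs. This sketch omits PAM's attributed-graph features and probabilistic machinery, and uses a simple row-softmax parametrization; in practice a doubly stochastic (Sinkhorn) parametrization is often needed.

```python
# Toy graph matching by gradient descent: recover the correspondence between
# a source relation matrix and a permuted, noisy target.
import torch

torch.manual_seed(0)
n = 4
A_src = torch.rand(n, n); A_src = (A_src + A_src.T) / 2   # source relations
perm = torch.randperm(n)
A_tgt = A_src[perm][:, perm] + 0.05 * torch.randn(n, n)   # permuted + noise

logits = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(500):
    P = torch.softmax(logits, dim=1)          # soft assignment, rows sum to 1
    loss = ((A_src - P @ A_tgt @ P.T) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

mapping = torch.softmax(logits, dim=1).argmax(dim=1)      # source -> target
print("recovered:", mapping.tolist(), " expected:", torch.argsort(perm).tolist())
```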

Seminar · Neuroscience · Recording

A no-report paradigm reveals that face cells multiplex consciously perceived and suppressed stimuli

Janis Hesse
California Institute of Technology
Feb 25, 2021

Having conscious experience is arguably the most important reason why it matters to us whether we are alive or dead. A powerful paradigm to identify neural correlates of consciousness is binocular rivalry, wherein a constant visual stimulus evokes a varying conscious percept. It has recently been suggested that activity modulations observed during rivalry may represent the act of report rather than the conscious percept itself. Here, we performed single-unit recordings from face patches in macaque inferotemporal (IT) cortex using a novel no-report paradigm in which the animal’s conscious percept was inferred from eye movements. These experiments reveal two new results concerning the neural correlates of consciousness. First, we found that high proportions of IT neurons represented the conscious percept even without active report. Using high-channel recordings, including a new 128-channel Neuropixels-like probe, we were able to decode the conscious percept on single trials. Second, we found that even on single trials, modulation to rivalrous stimuli was weaker than that to unambiguous stimuli, suggesting that cells may encode not only the conscious percept but also the suppressed stimulus. To test this hypothesis, we varied the identity of the suppressed stimulus during binocular rivalry; we found that indeed, we could decode not only the conscious percept but also the suppressed stimulus from neural activity. Moreover, the same cells that were strongly modulated by the conscious percept also tended to be strongly modulated by the suppressed stimulus. Together, our findings indicate that (1) IT cortex possesses a true neural correlate of consciousness even in the absence of report, and (2) this correlate consists of a population code wherein single cells multiplex representation of the conscious percept and veridical physical stimulus, rather than a subset of cells perfectly reflecting consciousness.
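
Single-trial decoding of the percept is conceptually a cross-validated classifier over population responses. A sketch on synthetic spike counts, not the paper's analysis:

```python
# Decode a binary percept from population spike counts with cross-validated
# logistic regression. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_cells = 200, 128
percept = rng.integers(0, 2, n_trials)          # 0 or 1: which percept dominates
tuning = rng.standard_normal(n_cells)           # per-cell percept modulation
counts = rng.poisson(5.0 + 2.0 * np.outer(percept, tuning).clip(min=0))

scores = cross_val_score(LogisticRegression(max_iter=1000), counts, percept, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```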

ePoster

Density-based Neural Decoding using Spike Localization for Neuropixels Recordings

Yizi Zhang, Tianxiao He, Julien Boussard, Cole Hurwitz, Erdem Varol, Charlie Windolf, Olivier Winter, Matt Whiteway, The International Brain Lab, Liam Paninski

COSYNE 2023

ePoster

Human-like behavior and neural representations emerge in a neural network trained to overtly search for objects in natural scenes from pixels

Motahareh Pourrahimi, Irina Rish, Pouya Bashivan

COSYNE 2025

ePoster

Power Pixels: A Python-based pipeline for processing of Neuropixels recordings

Jeroen Bos, Guido T Meijer, Francesco P Battaglia

FENS Forum 2024