All Content
37 items
Date: Oct 14, 2026
Date: Sep 9, 2026
Date: Jun 10, 2026
Date: May 13, 2026
Adventures in Spin Labeling: Clinical Perfusion Imaging and the Path to Technical Innovation
Divya Bolar· University of California San Diego
Arterial spin labeling (ASL) MRI has become a vital tool in clinical neuroimaging, enabling noninvasive assessment of cerebral perfusion across a range of conditions including stroke, vascular malformations, and brain tumors. With broader clinical adoption, its practical strengths — as well as important limitations — have become increasingly clear.
Date: Apr 24, 2026
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. We especially welcome applicants who develop mathematical approaches, computational models, and machine learning methods to study the brain at the circuit, systems, or cognitive level. The current Grossman Center faculty members available to work with are:
Brent Doiron’s lab investigates how the cellular and synaptic circuitry of neuronal circuits supports the complex dynamics and computations that are routinely observed in the brain.
Jorge Jaramillo’s lab investigates how subcortical structures interact with cortical circuits to subserve cognitive processes such as memory, attention, and decision making.
Ramon Nogueira’s lab investigates the geometry of representations as the computational support of cognitive processes such as abstraction in noisy artificial and biological neural networks.
Marcella Noorman’s lab investigates how properties of synapses, neurons, and circuits shape the neural dynamics that enable flexible and efficient computation.
Samuel Muscinelli’s lab studies how the anatomy of brain circuits both governs learning and adapts to it, combining analytical theory, machine learning, and data analysis in close collaboration with experimentalists.
Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Neuroscience Institute, as well as with labs in the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant, international city of Chicago and a rich intellectual environment that includes Argonne National Laboratory and UChicago’s Data Science Institute.
The Neuroscience Institute is currently engaged in a major expansion that includes the recruitment of several new faculty members over the next few years.
Date: Apr 24, 2026
Striatal activity in natural behavior
Henry Yin & Eric Yttri· Duke University and Carnegie Mellon University, respectively
Date: Mar 20, 2026
Honorary Lecture 2026
Glenda Halliday & Maria Grazia Spillantini· University of Sydney and University of Cambridge, respectively
Date: Feb 27, 2026
Decoding stress vulnerability
Stamatina Tzanoulinou· University of Lausanne, Faculty of Biology and Medicine, Department of Biomedical Sciences
Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can lead to adverse consequences for physical and mental health. Social stress, specifically, is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying, and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to strong stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. In this talk, I will identify key gaps in our knowledge about stress vulnerability and present our recent data from our contextual fear learning protocol based on social defeat stress in mice.
Date: Feb 20, 2026
Predictive Coding Light
Prof. Dr. Jochen Triesch· FIAS Frankfurt Institute for Advanced Studies
Current machine learning systems consume vastly more energy than biological brains. Neuromorphic systems aim to close this gap by mimicking the brain’s information coding via discrete voltage spikes. However, it remains unclear how both artificial and natural networks of spiking neurons can learn energy-efficient information processing strategies. Here we propose Predictive Coding Light (PCL), a recurrent hierarchical spiking neural network for unsupervised representation learning. In contrast to previous predictive coding approaches, PCL does not transmit prediction errors to higher processing stages. Instead, it suppresses the most predictable spikes and transmits a compressed representation of the input. Using only biologically plausible spike-timing-based learning rules, PCL reproduces a wealth of findings on information processing in visual cortex and achieves strong performance in downstream classification tasks. Overall, PCL offers a new approach to predictive coding and its implementation in natural and artificial spiking neural networks.
Date: Feb 11, 2026
Date: Jan 21, 2026
Sensorimotor control, movement, touch, EEG
Marieva Vlachou· Institut des Sciences du Mouvement Etienne Jules Marey, Aix-Marseille Université/CNRS, France
Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
Date: Dec 19, 2025
Over the last 20 years, neuroimaging and electrophysiology techniques have become central to understanding the mechanisms that accompany loss and recovery of consciousness. Much of this research is performed in healthy individuals with neurotypical brain dynamics. Yet a true understanding of how consciousness emerges from the joint action of neurons has to account for how severely pathological brains, often showing phenotypes typical of unconsciousness, can nonetheless generate a subjective viewpoint. In this presentation, I will start from the context of Disorders of Consciousness and discuss recent work aimed at finding generalizable signatures of consciousness that are reliable across a spectrum of brain electrophysiological phenotypes, focusing in particular on the notion of edge-of-chaos criticality.
Date: Dec 13, 2025
Computational Mechanisms of Predictive Processing in Brains and Machines
Dr. Antonino Greco· Hertie Institute for Clinical Brain Research, Germany
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
Date: Dec 10, 2025
The Nature versus Nurture debate has generally been considered from the lens of genome versus experience dichotomy and has dominated our thinking about behavioral individuality and personality traits. In contrast, the role of nonheritable noise during brain development in behavioral variation is understudied. Using the Drosophila melanogaster visual system, I will discuss our efforts to dissect how individuality in circuit wiring emerges during development, and how that helps generate individual behavioral variation.
Date: Dec 10, 2025
A human stem cell-derived organoid model of the trigeminal ganglion
Oliver Harschnitz· Human Technopole, Milan, Italy
Date: Dec 8, 2025
Choice between methamphetamine and food is modulated by reinforcement interval and central drug metabolism
Marlaina Stocco· Western University
Date: Dec 4, 2025
High Stakes in the Adolescent Brain: Glia Ignite Under THC’s Influence
Yalin Sun· University of Toronto
Date: Dec 4, 2025
Prefrontal-thalamic goal-state coding segregates navigation episodes into spatially consistent parallel hippocampal maps
Hiroshi Ito· University of Lausanne
Date: Dec 1, 2025
Microglia regulate remyelination via inflammatory phenotypic polarization in CNS demyelinating disorders
Athena Boutou· Hellenic Pasteur Institute
Date: Nov 13, 2025
Top-down control of neocortical threat memory
Prof. Dr. Johannes Letzkus· Universität Freiburg, Germany
Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for processing of experience-dependent top-down threat information during perception, one of the most fundamentally important forms of sensation.
Date: Nov 12, 2025
COSYNE 2025
The COSYNE 2025 conference was held in Montreal with post-conference workshops in Mont-Tremblant, continuing to provide a premier forum for computational and systems neuroscience. Attendees exchanged cutting-edge research in a single-track main meeting and in-depth specialized workshops, reflecting Cosyne’s mission to understand how neural systems function.
Date: Mar 27, 2025
Bernstein Conference 2024
Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions, and has established itself as one of the most renowned conferences worldwide in this field.
Date: Sep 29, 2024
FENS Forum 2024
Organised by FENS in partnership with the Austrian Neuroscience Association and the Hungarian Neuroscience Society, the FENS Forum 2024 took place on 25–29 June 2024 in Vienna, Austria. The FENS Forum is Europe’s largest neuroscience congress, covering all areas of neuroscience from basic to translational research.
Date: Jun 25, 2024
Wake-like Skin Patterning and Neural Activity During Octopus Sleep
Tomoyuki Mano, Aditi Pophale, Kazumichi Shimizu, Teresa Iglesias, Kerry Martin, Makoto Hiroi, Keishu Asada, Paulette García Andaluz, Thi Thu Van Dinh, Leenoy Meshulam, Sam Reiter
While sleeping, many vertebrate groups alternate between at least two sleep stages: rapid eye movement (REM) and slow wave sleep (SWS), in part characterized by wake-like and synchronous brain activity respectively. Sleep stage alternation has been implicated in learning and memory function experimentally [1], and has motivated several techniques in training artificial neural networks [2]. If the functions ascribed to 2-stage sleep are truly general, one might expect to find similar phenomena outside the vertebrate lineage. Here we delineate neural and behavioral correlates of 2-stage sleep in octopuses, marine invertebrates that evolutionarily diverged from vertebrates ~550 MYA and have independently evolved large brains and behavioral sophistication. Octopus sleep is rhythmically interrupted by ~60 second bouts of pronounced body movements and rapid changes in their neurally controlled skin patterns. We show that this constitutes a distinct ‘active’ sleep stage, being homeostatically regulated, rapidly reversible, and accompanied by an increased arousal threshold. Neuropixels recordings from the octopus central brain reveal that local field potential (LFP) activity during active sleep resembles that of waking. LFP activity differs across brain regions, with the strongest activity during active sleep seen in the Superior Frontal and Vertical lobes, anatomically connected regions associated with learning and memory function. During ‘quiet’ sleep, these regions are relatively silent but generate LFP oscillations resembling mammalian sleep spindles in frequency and duration. Computational analysis reveals the rich skin pattern dynamics of active sleep, which move through states strongly resembling waking skin patterns. The range of similarities with vertebrates implies that aspects of 2-stage sleep in octopuses may represent convergent features of complex cognition.
Date: Mar 12, 2023
Visuomotor Association Orthogonalizes Visual Cortical Population Codes
Samuel Failor, Matteo Carandini, Kenneth Harris
Stimuli trigger a pattern of activity across neurons in cortex, whose firing rates define a stimulus's representation in a high-dimensional vector space. Learning a visuomotor task can affect the responses of visual cortical neurons, but how and why training modifies population-level representations is unclear. One hypothesis is that representational plasticity in visual cortex facilitates visuomotor associations by downstream motor systems. Learning systems exhibit "inductive biases", meaning they form some stimulus-motor associations more easily than others. An animal's inductive biases presumably reflect its neuronal representations; its ability to form distinct motor associations for different stimuli depends on the representational similarity of the stimuli. Thus, the plasticity of sensory cortical representations may change inductive bias: for an animal to make different associations to two stimuli, the cortical representations of the stimuli must differentiate, such as if the evoked firing vectors were orthogonalized. A second hypothesis is that task training increases the fidelity of stimulus coding in sensory cortex, which improves decoding accuracy by downstream regions. However, this hypothesis presupposes that the population code in naive cortex suffers from low fidelity, which recent recordings of large cortical populations have questioned. We used two-photon calcium imaging to study how the tuning of V1 populations changes after mice learn to associate opposing actions with differently oriented gratings. Training did not improve the fidelity of stimulus coding, as it was already perfect in naive animals thanks to a subpopulation of highly reliable neurons. Instead, training caused the population's responses to motor-associated stimuli to become more orthogonal. 
The basis of this training-evoked orthogonalization was the sparsening of stimulus representations, an effect which could be summarized by a simple nonlinear transformation of naive neuronal firing rates and whose convexity was largest for motor-associated stimuli.
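The sparsening account of orthogonalization can be illustrated numerically. The sketch below uses synthetic firing rates (not data from the study) and a cube as a stand-in convex nonlinearity: amplifying large responses and suppressing small ones sparsens two overlapping population vectors and lowers their cosine similarity.

```python
import numpy as np

# Illustrative sketch (synthetic rates, not the study's data or fitted
# transformation): a convex pointwise nonlinearity sparsens two
# overlapping population response vectors and reduces their overlap.

rng = np.random.default_rng(0)

def cosine(u, v):
    """Cosine similarity between two population firing-rate vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

shared = rng.random(500)                 # activity common to both stimuli
r1 = shared + 0.3 * rng.random(500)      # response to stimulus 1
r2 = shared + 0.3 * rng.random(500)      # response to stimulus 2

s1, s2 = r1 ** 3, r2 ** 3                # convex nonlinearity sparsens the code

# The transformed representations are more orthogonal than the raw ones.
assert cosine(s1, s2) < cosine(r1, r2)
```

Stronger convexity for motor-associated stimuli, as reported in the abstract, would push those representations further apart than others.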
Date: Mar 12, 2023
Visuomotor integration gives rise to three-dimensional receptive fields in the primary visual cortex
Yiran He, Antonin Blot, Petr Znamenskiy
Distinguishing near and far visual cues is an essential computation that animals must carry out to guide behavior using vision. When animals move, self-motion creates motion parallax — an important but poorly understood source of depth information — whereby the speed of optic flow generated by self-motion depends on the depth of visual cues. This enables animals to estimate depth by comparing visual motion and self-motion speeds. As neurons in the mouse primary visual cortex (V1) are broadly modulated by locomotion, we hypothesized that they may integrate visual- and locomotion-related signals to estimate depth from motion parallax. To test this hypothesis, we designed a virtual reality (VR) environment for mice, where visual cues were presented at different virtual distances from the mouse and motion parallax was the only cue for depth, and recorded neuronal activity in V1 using two-photon calcium imaging. We found that the majority of excitatory neurons in layer 2/3 of V1 were selective for virtual depth. Neurons with different depth preferences were spatially intermingled, with nearby cells often tuned for disparate depths. Moreover, depth tuning could not be fully accounted for by either running speed or optic flow speed tuning in isolation, but arose from the integration of both signals. Specifically, depth selectivity of V1 neurons was explained by the ratio of preferred running and optic flow speeds. Finally, many neurons responded selectively to visual stimuli presented at a specific retinotopic location and virtual depth, demonstrating that during active locomotion V1 neuronal responses can be characterized by three-dimensional receptive fields. These results challenge the traditional view of V1 as a feed-forward filter bank, and suggest that the widespread modulation of V1 neurons by locomotion and other movements plays an essential role in estimation of depth from motion parallax.
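The geometry behind the speed-ratio result is simple: a cue at depth d viewed by an animal running at speed v produces optic flow at angular speed roughly v/d, so the depth a neuron is best driven at corresponds to the ratio of its preferred running and optic flow speeds. A minimal sketch with made-up illustrative speeds (not measurements from the study):

```python
# Motion parallax: optic flow speed ~ running speed / depth, so the depth
# implied by a pair of preferred speeds is their ratio.
# All values below are illustrative, not measurements from the study.

def virtual_depth(run_speed, flow_speed):
    """Depth (arbitrary units) implied by a running/optic-flow speed ratio."""
    return run_speed / flow_speed

# Two hypothetical neurons sharing a preferred running speed but preferring
# fast vs. slow optic flow end up tuned to near vs. far virtual depths:
near_pref = virtual_depth(run_speed=10.0, flow_speed=50.0)  # fast flow -> near
far_pref = virtual_depth(run_speed=10.0, flow_speed=2.0)    # slow flow -> far
assert near_pref < far_pref
```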
Date: Mar 12, 2023
Variable syllable context depth in Bengalese finch songs: A Bayesian sequence model
Noémi Éltető, Lena Veit, Avani Koparkar, Peter Dayan
Birdsong is an important model for vocal learning and sequential motor behavior. Similarly to human language, songs, notably those of Bengalese finches and canaries, exhibit higher-order sequence structure, meaning that the statistics of one syllable may depend on a number of previous syllables. However, this number (the context depth) varies in a manner that has challenged previous formal approaches. Here we used a hierarchical non-parametric Bayesian sequence model (based on Teh, 2006; Elteto et al., 2022) that seamlessly combines predictive information from shorter and longer contexts of previous syllables, weighing them proportionally to their predictive power. We fit our model to songs of 8 different Bengalese finches, each with >300 song bouts (Veit et al., 2021). The model inferred the context depth, showing that it varied substantially, with some syllables depending on just one deterministic predecessor and others depending on >10 previous syllables. Underlying this variability were syllables forming alternating and repeating chunks, i.e., strings of fixed subsequences. When fitted at the chunk level, our model revealed different chunk motifs that characterize how bouts typically start, unfold, and end. The model was also able to predict the flexibility with which birds can learn to switch between syllable transitions based on external cues.
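As a rough intuition for variable context depth, a toy counting model already makes the idea concrete. This is only a caricature: the authors' hierarchical nonparametric Bayesian model weighs contexts by predictive power rather than backing off by counts, and the sequence below is invented.

```python
from collections import defaultdict

# Toy variable-order model: count continuations for every context length up
# to max_depth, and predict with the longest context seen often enough.
# A caricature of the abstract's model, illustrating per-syllable context depth.

class BackoffModel:
    def __init__(self, max_depth=4, min_count=2):
        self.max_depth = max_depth
        self.min_count = min_count
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, seq):
        for i in range(len(seq)):
            for d in range(min(self.max_depth, i) + 1):
                ctx = tuple(seq[i - d:i])          # the d preceding syllables
                self.counts[ctx][seq[i]] += 1

    def predict(self, history):
        # Back off from the longest usable context to shorter ones.
        for d in range(min(self.max_depth, len(history)), -1, -1):
            ctx = tuple(history[-d:]) if d else ()
            cont = self.counts.get(ctx)
            if cont and sum(cont.values()) >= self.min_count:
                best = max(cont, key=cont.get)
                return best, d   # predicted syllable and context depth used
        return None, 0

model = BackoffModel()
model.fit(list("abcabcabd" * 5))            # invented song with chunk structure
assert model.predict(list("ab")) == ("c", 2)  # deep context available: used
assert model.predict(list("cd")) == ("a", 1)  # unseen pair: backs off to depth 1
```

The depth returned by `predict` varies with the history, mirroring (very loosely) how some syllables need only one predecessor while others need many.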
Date: Mar 12, 2023
When foraging in dynamic and uncertain environments, animals can benefit from basing their decisions on smart inferences about hidden properties of the world. Typical theoretical approaches for understanding the strategies that animals use in such settings combine Bayesian inference and value iteration to derive optimal behavioral policies that maximize total reward given changing beliefs about the environment. However, specifying these beliefs requires infinite numerical precision; with limited resources, this problem can no longer be decomposed into the separate steps of optimizing inference and optimizing action selection. To understand the space of behavioral policies in this constrained setting, we enumerate and evaluate all possible behavioral programs that can be constructed from just a handful of states. We show that only a small fraction of the top-performing programs can be constructed by approximating Bayesian inference; the remaining programs are structurally or even functionally distinct from Bayesian solutions. To assess structural and functional relationships among all programs, we developed novel tree-embedding algorithms; these embeddings, which are capable of extracting different relational structures within the program space, reveal that nearly all good programs are closely connected through single algorithmic “mutations”. We demonstrate how one can use such relational structures to efficiently search for good solutions via an evolutionary algorithm. Moreover, these embeddings reveal that the diversity of non-Bayesian behaviors originates from a handful of key mutations that broaden the functional repertoire within the space of good programs. The fact that this diversity of non-optimal behavior does not significantly compromise performance suggests that these same strategies might generalize across tasks.
Date: Mar 12, 2023
Parkinson's disease (PD), characterized by the absence of dopamine in the striatum [1], is caused by the death of the substantia nigra pars compacta dopamine (SNcDA) neurons in the midbrain. This cell loss is attributed to irreparable damage from a dysregulation cascade originating in excess cytosolic dopamine [2]. However, it is unresolved whether dopamine dysregulation in SNcDA neurons themselves is the cause of PD or merely a symptom. Here, we introduce a theory of specialized non-causal action potentials that serve metabolic homeostasis, called 'metabolic spikes', which can account for the spontaneous activity observed in many neuron types, including SNcDA neurons. We propose that loss of these metabolic spikes in SNcDA neurons can account for both the cause of PD and the subsequent dopamine dysregulation. Neurons, presumably in anticipation of synaptic inputs, keep their ATP levels at a maximum, such that they are ATP-surplus/ADP-scarce during synaptic quiescence. With ADP availability as the rate-limiting step, ATP production stalls in their mitochondria when energy consumption is low, leading to the formation of toxic reactive oxygen species (ROS). Under these circumstances, metabolic spikes serve to restore ATP production and relieve ROS toxicity. In a metabolism-coupled model of SNcDA neurons that senses ROS and initiates spikes, we identified three categories of deficits that could decrease metabolic spikes and consequently deplete the dopamine tone, as seen in PD. Importantly, in PD such a lowered extracellular dopamine level is misread by D2 autoreceptors, and dopamine synthesis is increased. With dopamine vesicles already full, the excess dopamine produces the disruptive aldehyde DOPAL, leading to dysregulation and ultimately cell death. Metabolic spikes, though relevant for cellular health, may thus be an integrated neuronal mechanism that operates in synergy with synaptic integration and forms a basic principle of network dynamics and behaviour, as exemplified in PD.
Date: Mar 12, 2023
Unifying mechanistic and functional models of cortical circuits with low-rank, E/I-balanced spiking networks
William Podlaski & Christian Machens
Network models are often designed to capture selective aspects of cortical circuits. On one end, mechanistic models such as balanced spiking networks resemble activity regimes observed in data, but are often limited to simple computations. On the other end, functional models like trained deep networks can show comparable performance and dynamical motifs, but are far removed from experimental physiology. Here, we put forth a new framework for excitatory-inhibitory spiking networks which retains key properties of both mechanistic and functional models. Based on previous studies of the geometry of spike-coding networks, we consider a population of spiking neurons with low-rank connectivity, allowing each neuron’s threshold to be cast as a boundary in a space of population modes, or latent variables. Each neuron’s boundary divides this latent space into subthreshold and suprathreshold areas, which determines its contribution to the input-output function of the network. Then, incorporating Dale’s law as a connectivity constraint, we demonstrate how a network of inhibitory (I) neurons forms a convex, stable boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, unstable boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries. The resultant E/I networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical dynamics. Moreover, the latent variables can be mapped onto a constrained optimization problem, and are capable of universal function approximation. The combination of these dynamical and functional properties leads to unique insights, including specified computational roles for E/I balance and Dale’s law. 
Finally, the intuitive geometry of the representations, plus the link to constrained optimization, makes our framework a promising candidate for scalable and interpretable computation in biologically-plausible spiking networks.
Date: Mar 12, 2023
Tuned inhibition explains strong correlations across segregated excitatory subnetworks
Matthew Getz, Gregory Handy, Alex Negrón, Brent Doiron
Understanding the basis of shared, across-trial fluctuations in neural activity in mammalian cortex is critical to uncovering the nature of information processing in the brain. This correlated variability has often been related to the structure of cortical connectivity, since variability not accounted for by signal changes likely arises from local circuit inputs. However, recent recordings from segregated networks of excitatory neurons in mouse primary visual cortex (V1) complicate this relationship: despite weak cross-network connection probability, noise correlations were significantly larger than one would expect. We aim to explore possible circuit mechanisms responsible for these enhanced positive correlations through biologically motivated cortical network models, with the hypothesis that they arise from unobserved inhibitory neurons. In particular, we consider networks with weakly interconnected excitatory populations, but either global or subpopulation-specific inhibitory populations. We then ask how correlations can be enhanced or suppressed via the strength of outgoing and incoming connections to these inhibitory populations. By performing a pathway expansion of the covariance matrix, we find that a single inhibitory population with sufficiently strong I-to-E connections can lead to stronger than expected positive correlations across excitatory populations. However, this result is highly parameter dependent. When considering an inhibition-stabilized network (ISN), the viable parameter regime shrinks dramatically into a narrow band close to the edge of stability. We find that both non-ISN and ISN regimes can recover the ability to robustly explain the experimental results by allowing for two tuned inhibitory populations, meaning that each inhibitory population preferentially connects to one of the two excitatory populations.
Our results therefore imply that complexity in excitation should be mirrored by complexity in the structure of inhibition.
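The pathway expansion invoked here has a compact linear-response form: for a linearized network with effective connectivity W and private noise covariance D, the stationary covariance is C = (I − W)⁻¹ D (I − W)⁻ᵀ, and expanding the inverse as the geometric series Σₖ Wᵏ enumerates contributions pathway by pathway. A self-contained sketch, loosely in the spirit of the tuned-inhibition circuits (two excitatory units, each with a preferentially connected inhibitory partner); all weights are illustrative stand-ins, not the study's parameters:

```python
import numpy as np

# Linear-response covariance and its pathway expansion. Units are ordered
# (E1, E2, I1, I2); each inhibitory unit preferentially connects with one
# excitatory unit. Weights are illustrative, not the study's parameters.
W = np.array([
    [0.0, 0.1, -0.6, -0.1],   # E1 <- weak E2, strong I1, weak I2
    [0.1, 0.0, -0.1, -0.6],   # E2 <- weak E1, weak I1, strong I2
    [0.5, 0.1,  0.0,  0.0],   # I1 <- mostly E1
    [0.1, 0.5,  0.0,  0.0],   # I2 <- mostly E2
])
D = np.eye(4)                  # independent private noise for each unit

P = np.linalg.inv(np.eye(4) - W)   # propagator: sum over all pathways
C = P @ D @ P.T                     # stationary covariance

# The geometric series sum_k W^k reproduces the propagator exactly because
# the network is stable (spectral radius of W below 1) -- this is what makes
# the pathway-by-pathway expansion of the covariance valid.
P_series = sum(np.linalg.matrix_power(W, k) for k in range(80))
assert np.allclose(P, P_series, atol=1e-8)

corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))  # correlation matrix
```

Truncating the series at low order isolates the short synaptic pathways (e.g. E1 → I1 → E2) whose strengths the abstract analyzes.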
Date: Mar 12, 2023
Traveling UP states in the post-subiculum reveal an anatomical gradient of intrinsic properties
Dhruv Mehrotra, Daniel Levenstein, Adrian Duszkiewicz, Sam Booker, Angelika Kwiatkowska, Adrien Peyrache
Cortical activity is characterized by state-specific dynamics arising from the interplay between connectivity, cellular diversity, and intrinsic properties. During non-Rapid Eye Movement (NREM) sleep, cortical population activity alternates between periods of neuronal firing (“UP” states) and neuronal silence (“DOWN” states). Patterns of neuronal activity at DOWN-to-UP (DU) transitions have functional relevance beyond sleep: they are related to sensory coding during wakefulness and support homeostatic processes and memory consolidation. Despite this functional importance, the factors that organize these spiking patterns remain unknown but mechanisms that rely on network connectivity or intrinsic excitability have been proposed. In order to elucidate the mechanisms that organize spontaneous activity, we recorded populations of neurons in the head-direction cortex (HDC, i.e., post-subiculum), where the behavioral correlates of most neurons are well accounted for. Neuronal tuning to HD was independent of anatomical position. However, while UP-DOWN (UD) transitions were synchronous along the dorsoventral (DV) axis, we observed sequential activation of neurons at DU transitions. To understand the mechanisms underlying these traveling waves at UP state onset, we built a computational model with a linear array of recurrently connected adapting units and compared the effects of different biophysical gradients. We found that, unlike gradients in local connectivity, excitability/input, and adaptive current, a gradient in rectifying current (Ih) was able to uniquely reproduce the experimental observations, and predict a yet-unobserved relationship between UP onset and post-DOWN rebound activity. Subsequent ex vivo intracellular recordings confirmed the predicted DV gradient of Ih in HDC. 
In conclusion, precisely organized spontaneous population activity patterns may be independent of circuit features and sensory coding but instead may only reflect intrinsic neuronal properties. Yet, the resulting traveling waves have the potential to anatomically segment computation in output structures like the medial entorhinal cortex (MEC) and indirectly, the hippocampus.
Date: Mar 12, 2023
Exploring novel approaches to auditory rehabilitation, we aim to demonstrate, in mice, the efficacy of an optogenetic cortical implant. Several studies have shown that mice can use patterned optogenetic stimulation of the sensory cortex to drive their behaviour. However, it has never been tested whether such stimulation patterns can provide a detailed representation of sensory inputs. To explore this key question for cortical implant devices, we developed a novel sensory encoding model based on a convolutional autoencoder, which is able to temporally compress and denoise 500 ms sounds into a 10x10 array of stimulation sites while preserving latent-space continuity and detailed sound information. To minimize spatial crosstalk between stimulation sites, we limit the latent representations to the 10 largest activations and impose spatial sparseness constraints during model training. We then demonstrated that mice can discriminate these activity patterns when they are applied to their auditory cortex using a video-projector setup for mesoscopic patterned optogenetic stimulation. After mastery of the discrimination task, we presented various new patterns from the model in catch trials and observed that several mice produced similar behavioural categorization responses across patterns. This demonstrates that the artificial patterns imposed on auditory cortex produce a robust representation structure that can be used to solve a task. These results indicate that a constrained autoencoder model can be used to generate artificial auditory perception via an array of cortical stimulators. We aim to further benchmark these artificial perceptions against the already acquired auditory discrimination performance of normally hearing mice.
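The latent bottleneck described above, keeping only the 10 largest of the 10x10 latent activations, amounts to a top-k operation. A minimal sketch, where the "encoder output" is random stand-in data rather than the trained model's activations:

```python
import numpy as np

# Top-k sparsification of a latent map: zero all but the k largest
# activations, limiting spatial crosstalk between stimulation sites.
# The latent here is random stand-in data, not the trained encoder's output.

def top_k_sparsify(latent, k=10):
    """Keep the k largest entries of a 2D latent map; zero the rest."""
    flat = latent.ravel().copy()
    cutoff = np.sort(flat)[-k]        # value of the k-th largest activation
    flat[flat < cutoff] = 0.0         # suppress everything below it
    return flat.reshape(latent.shape)

rng = np.random.default_rng(1)
z = rng.random((10, 10))              # stand-in 10x10 latent activations
z_sparse = top_k_sparsify(z, k=10)

assert np.count_nonzero(z_sparse) == 10   # only 10 active sites remain
```

During training the spatial sparseness constraint mentioned in the abstract would additionally discourage the surviving activations from clustering at neighbouring sites.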
Date: Mar 12, 2023
COSYNE 2023
The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.
Date: Mar 9, 2023
Neuromatch 5
Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.
Date: Sep 27, 2022
COSYNE 2022
The annual Cosyne meeting provides an inclusive forum for the exchange of empirical and theoretical approaches to problems in systems neuroscience, in order to understand how neural systems function. The main meeting is single-track, with invited talks selected by the Executive Committee and additional talks and posters selected by the Program Committee based on submitted abstracts. The workshops feature in-depth discussion of current topics of interest in a small group setting.
Date: Mar 17, 2022