
SeminarNeuroscience

sensorimotor control, movement, touch, EEG

Marieva Vlachou
Institut des Sciences du Mouvement Etienne Jules Marey, Aix-Marseille Université/CNRS, France
Dec 19, 2025

Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.

SeminarNeuroscience

The Unconscious Eye: What Involuntary Eye Movements Reveal About Brain Processing

Yoram Bonneh
Bar-Ilan
Jun 10, 2025
SeminarNeuroscienceRecording

Altered grid-like coding in early blind people and the role of vision in conceptual navigation

Roberto Bottini
CIMeC, University of Trento
Mar 6, 2025
SeminarNeuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience, and Hertie Institute for Clinical Brain Research University of Tübingen
Feb 12, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

SeminarNeuroscience

Mouse Motor Cortex Circuits and Roles in Oromanual Behavior

Gordon Shepherd
Northwestern University
Jan 14, 2025

I’m interested in structure-function relationships in neural circuits and behavior, with a focus on motor and somatosensory areas of the mouse’s cortex involved in controlling forelimb movements. In one line of investigation, we take a bottom-up, cellularly oriented approach and use optogenetics, electrophysiology, and related slice-based methods to dissect cell-type-specific circuits of corticospinal and other neurons in forelimb motor cortex. In another, we take a top-down ethologically oriented approach and analyze the kinematics and cortical correlates of “oromanual” dexterity as mice handle food. I'll discuss recent progress on both fronts.

SeminarNeuroscienceRecording

Continuous guidance of human goal-directed movements

Eli Brenner
VU University Amsterdam
Dec 10, 2024
SeminarNeuroscience

Mind Perception and Behaviour: A Study of Quantitative and Qualitative Effects

Alan Kingstone
University of British Columbia
Nov 19, 2024
SeminarNeuroscience

Imagining and seeing: two faces of prosopagnosia

Jason Barton
University of British Columbia
Nov 5, 2024
SeminarNeuroscienceRecording

Cell-type-specific plasticity shapes neocortical dynamics for motor learning

Shouvik Majumder
Max Planck Florida Institute of Neuroscience, USA
Apr 18, 2024

How do cortical circuits acquire new dynamics that drive learned movements? This webinar will focus on mouse premotor cortex in relation to learned lick-timing and explore high-density electrophysiology using our silicon neural probes alongside region and cell-type-specific acute genetic manipulations of proteins required for synaptic plasticity.

SeminarNeuroscience

Sensory Consequences of Visual Actions

Martin Rolfs
Humboldt-Universität zu Berlin
Dec 8, 2023

We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection with the motor behavior that shapes its fundamental processes.

SeminarNeuroscience

Movements and engagement during decision-making

Anne Churchland
University of California Los Angeles, USA
Nov 8, 2023

When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially spontaneously expressed “uninstructed movements”. These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely, testing whether their magnitude or timing was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or the choice) and task-independent movements (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats, hinting that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.
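
The movement partition described above can be sketched in a few lines: regress a movement trace on lagged task-event regressors, call the fitted part task-aligned movement, and treat the residual as a proxy for TIM. This is a minimal illustration on synthetic data, not the study's actual pipeline; all signals and parameters below are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic session: a movement-energy trace composed of a task-aligned
# part (sparse task events convolved with a short kernel) plus smooth
# spontaneous movement unrelated to the task.
n_t = 2000
events = np.zeros(n_t)
events[rng.choice(n_t, 40, replace=False)] = 1.0
task_aligned = np.convolve(events, np.hanning(25), mode="same")
spontaneous = np.convolve(rng.standard_normal(n_t), np.hanning(51), mode="same")
movement = task_aligned + spontaneous

# Regress movement on lagged copies of the task events; the fitted part
# approximates task-aligned movement, the residual is the TIM proxy.
lags = np.arange(-12, 13)
X = np.column_stack([np.roll(events, l) for l in lags])
tim = movement - LinearRegression().fit(X, movement).predict(X)

# TIM should retain the spontaneous component, not the task-aligned one.
corr_spont = np.corrcoef(tim, spontaneous)[0, 1]
corr_task = np.corrcoef(tim, task_aligned)[0, 1]
```

In the study, a TIM trace like this could then be correlated with trial-by-trial performance; here we only verify that the partition separates the two components.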

SeminarNeuroscience

Computational models of spinal locomotor circuitry

Simon Danner
Drexel University, Philadelphia, USA
Jun 14, 2023

To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
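
A minimal sketch of the kind of rhythm-generating circuit such models contain is a half-center oscillator: two units coupled by mutual inhibition with slow self-adaptation, in the spirit of Matsuoka-style CPG models. The parameters below are illustrative assumptions, not values from the models presented.

```python
import numpy as np

def half_center(n_steps=2000, dt=0.01):
    """Minimal half-center oscillator: two rate units with mutual
    inhibition and slow self-adaptation (Matsuoka-style)."""
    tau, tau_a = 0.1, 1.0    # membrane and adaptation time constants
    w_inh, b = 2.0, 2.5      # mutual-inhibition and adaptation gains
    drive = 1.5              # tonic (supraspinal-like) drive
    x = np.array([0.1, 0.0])  # membrane states (asymmetric start)
    a = np.zeros(2)           # adaptation states
    out = np.empty((n_steps, 2))
    for i in range(n_steps):
        y = np.maximum(x, 0.0)  # rectified firing rates
        x = x + dt / tau * (-x + drive - w_inh * y[::-1] - b * a)
        a = a + dt / tau_a * (y - a)
        out[i] = y
    return out

rates = half_center()
half = rates[1000:]  # discard the initial transient
# Flexor- and extensor-like outputs should alternate in anti-phase.
r = np.corrcoef(half[:, 0], half[:, 1])[0, 1]
```

With these gains the symmetric fixed point is unstable, so the mutual inhibition plus slow adaptation produces sustained alternation, the basic ingredient that fuller spinal-circuit models elaborate with genetically identified interneuron classes.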

SeminarNeuroscienceRecording

The Effects of Movement Parameters on Time Perception

Keri Anne Gladhill
Florida State University, Tallahassee, Florida.
May 31, 2023

Mobile organisms must decide both where and when to move in order to keep up with a changing environment; a strong sense of time is therefore necessary, otherwise we would fail at many of our movement goals. Despite this intrinsic link between movement and timing, research has only recently begun to investigate their interaction. Two primary effects have been observed: movements bias time estimates (i.e., affect accuracy) and make time estimates more precise. The goal of this presentation is to review this literature, present a Bayesian cue combination framework to explain these effects, and discuss the experiments I have conducted to test the framework. These experiments include: a motor timing task comparing the effects of movement vs. non-movement, with and without feedback (Exp. 1A & 1B); a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2); and a perceptual timing task investigating the effect of noisy movement on time perception in both visual and auditory modalities (Exp. 3A & 3B). Together, the results support the Bayesian cue combination framework: movement improves the precision of time perception in motor as well as perceptual timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy the improvement in timing precision disappears (Exp. 3A & 3B). Although the proposed framework is supported, more studies (e.g., fMRI, TMS, EEG) are needed to understand where and how it may be instantiated in the brain; this work nevertheless provides a starting point for understanding the intrinsic connection between time and movement.
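
The Bayesian cue combination account can be made concrete with a small worked example: two independent estimates of an interval are fused with inverse-variance weights, so precision improves when both cues are reliable, and the benefit vanishes when the movement cue is noisy. All numbers below are hypothetical.

```python
def combine(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted (inverse-variance) cue combination."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)  # always <= min(var_a, var_b)
    return mu, var

# Hypothetical numbers: a ~2 s interval estimated from an internal
# clock (SD 0.3 s) and from a concurrent movement (SD 0.2 s).
mu, var = combine(2.1, 0.3**2, 1.95, 0.2**2)

# With an unreliable movement cue (SD 1.0 s), the combined precision
# barely improves on the clock alone, mirroring Exp. 3A & 3B.
mu_noisy, var_noisy = combine(2.1, 0.3**2, 1.95, 1.0**2)
```

The combined estimate lies between the two cues and its variance is below that of the better cue alone; with the noisy movement cue, the fused variance stays close to the clock's own.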

SeminarNeuroscienceRecording

From following dots to understanding scenes

Alexander Göttker
Giessen
May 2, 2023
SeminarNeuroscience

Multidimensional cerebellar computations for flexible kinematic control of movements

Sungho Hong
Mar 10, 2023
SeminarNeuroscienceRecording

Direction-selective ganglion cells in primate retina: a subcortical substrate for reflexive gaze stabilization?

Teresa Puthussery
University of California, Berkeley
Jan 23, 2023

To maintain a stable and clear image of the world, our eyes reflexively follow the direction in which a visual scene is moving. Such gaze stabilization mechanisms reduce image blur as we move in the environment. In non-primate mammals, this behavior is initiated by ON-type direction-selective ganglion cells (ON-DSGCs), which detect the direction of image motion and transmit signals to brainstem nuclei that drive compensatory eye movements. However, ON-DSGCs have not yet been functionally identified in primates, raising the possibility that the visual inputs that drive this behavior instead arise in the cortex. In this talk, I will present molecular, morphological and functional evidence for identification of an ON-DSGC in macaque retina. The presence of ON-DSGCs highlights the need to examine the contribution of subcortical retinal mechanisms to normal and aberrant gaze stabilization in the developing and mature visual system. More generally, our findings demonstrate the power of a multimodal approach to study sparsely represented primate RGC types.

SeminarNeuroscience

Neural circuits for body movements

Silvia Arber
University of Basel, Switzerland
Jan 16, 2023
SeminarNeuroscience

Real-world scene perception and search from foveal to peripheral vision

Antje Nuthmann
Kiel University
Oct 24, 2022

A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively deny information processing in the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.

SeminarNeuroscience

Development and evolution of neuronal connectivity

Alain Chédotal
Vision Institute, Paris, France
Sep 28, 2022

In most animal species, including humans, commissural axons connect neurons on the left and right sides of the nervous system. In humans, abnormal axon midline crossing during development causes a whole range of neurological disorders, from congenital mirror movements and horizontal gaze palsy to scoliosis and binocular vision deficits. The mechanisms that guide axons across the CNS midline were thought to be evolutionarily conserved, but our recent results suggest that they differ across vertebrates. I will discuss how the laterality of visual projections changed during vertebrate evolution. In most vertebrates, camera-style eyes contain retinal ganglion cell (RGC) neurons projecting to visual centers on both sides of the brain. In fish, however, RGCs were thought to innervate only the contralateral side. Using 3D imaging and tissue clearing, we found that bilateral visual projections exist in non-teleost fishes, and that the developmental program specifying visual system laterality differs between fishes and mammals. We are currently using various strategies to discover genes controlling the development of visual projections. I will also present ongoing work using 3D imaging techniques to study the development of the visual system in human embryos.

SeminarNeuroscienceRecording

A neural mechanism for terminating decisions

Gabriel Stine
Shadlen Lab, Columbia University
Sep 21, 2022

The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are well established in association cortex, but the site and mechanism of termination remain unknown. Here, we elucidate a mechanism for termination by neurons in the primate superior colliculus. We recorded simultaneously from neurons in the lateral intraparietal cortex (LIP) and the superior colliculus (SC) while monkeys made perceptual decisions, reported by eye movements. Single-trial analyses revealed distinct dynamics: LIP tracked the accumulation of evidence on each decision, whereas SC generated one burst at the end of the decision, occasionally preceded by smaller bursts. We hypothesized that the bursts manifest a threshold mechanism applied to LIP activity to terminate the decision. Focal inactivation of SC produced behavioral effects diagnostic of an impaired threshold sensor, requiring a stronger LIP signal to terminate a decision. The results reveal the transformation from deliberation to commitment.

SeminarNeuroscience

Perception during visual disruptions

Grace Edwards and Lina Teichmann
National Institute of Mental Health, Laboratory of Brain and Cognition, U.S. Department of Health and Human Services.
Jun 13, 2022

Visual perception seems continuous despite frequent disruptions in our visual environment. For example, internal events such as saccadic eye movements, and external events such as object occlusion, temporarily prevent visual information from reaching the brain. Combining evidence from these two models of visual disruption (occlusion and saccades), we will describe what information is maintained and how it is updated across the sensory interruption. Lina Teichmann will focus on dynamic occlusion and demonstrate how object motion is processed through perceptual gaps. Grace Edwards will then describe what pre-saccadic information is maintained across a saccade and how it interacts with post-saccadic processing in retinotopically relevant areas of early visual cortex. Both occlusion and saccades provide a window into how the brain bridges perceptual disruptions. Our evidence thus far suggests a role for extrapolation, integration, and potentially suppression in both models. Combining evidence from these typically separate fields enables us to determine whether a common set of mechanisms supports visual processing during visual disruptions in general.

SeminarNeuroscience

Adaptive neural network classifier for decoding finger movements

Alexey Zabolotniy
HSE University
Jun 2, 2022

While a non-invasive brain-computer interface can accurately classify the lateralization of hand movements, distinguishing the activation of individual fingers of the same hand is limited by their local and overlapping representation in the motor cortex. In particular, the low signal-to-noise ratio limits the opportunity to identify meaningful patterns in a supervised fashion. Here we combined magnetoencephalography (MEG) recordings with an advanced decoding strategy to classify finger movements at the single-trial level. We recorded eight subjects performing a serial reaction time task, in which they pressed four buttons with the left and right index and middle fingers. We evaluated the classification performance for hand and finger movements with increasingly complex approaches: supervised common spatial patterns with logistic regression (CSP+LR) and an unsupervised linear finite convolutional neural network (LF-CNN). Right- vs. left-hand classification was above 90% accurate for all methods, whereas single-finger classification yielded 68 ± 7% for CSP+LR and 71 ± 10% for LF-CNN. The CNN approach also allowed inspection of spatial and spectral patterns, which reflected activity in the motor cortex in the theta and alpha ranges. Thus, we have shown that using CNNs to decode MEG single trials with a low signal-to-noise ratio is a promising approach that could, in turn, be extended to a manifold of problems in clinical and cognitive neuroscience.
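
A CSP-plus-logistic-regression decoder of the kind described can be sketched on synthetic "MEG-like" trials. This is a hand-rolled CSP via a generalized eigendecomposition on invented data, not the authors' pipeline; dimensions and class structure are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic epochs: (n_trials, n_channels, n_samples); the two classes
# differ only in the power of one latent source before channel mixing.
n_trials, n_channels, n_samples = 200, 8, 100
mixing = rng.standard_normal((n_channels, n_channels))
X, y = [], []
for label in (0, 1):
    for _ in range(n_trials // 2):
        sources = rng.standard_normal((n_channels, n_samples))
        sources[0] *= 3.0 if label == 1 else 1.0  # class-dependent power
        X.append(mixing @ sources)
        y.append(label)
X, y = np.array(X), np.array(y)

def csp_filters(X, y, n_components=4):
    """CSP via the generalized eigenproblem of class covariances."""
    covs = [np.mean([t @ t.T / t.shape[1] for t in X[y == label]], axis=0)
            for label in (0, 1)]
    # Extreme eigenvalues of C0 w = lambda (C0 + C1) w give spatial
    # filters whose output variance best separates the classes.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    picks = np.r_[order[: n_components // 2], order[-(n_components // 2):]]
    return vecs[:, picks].T

W = csp_filters(X, y)
# Feature per trial: log-variance of each spatially filtered signal.
feats = np.log(np.var(np.einsum("ck,nkt->nct", W, X), axis=2))
scores = cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5)
```

On real MEG single trials the separation is far weaker, which is why the abstract's single-finger accuracies sit near 70% rather than the near-ceiling performance this clean synthetic example achieves.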

SeminarNeuroscienceRecording

What the fly’s eye tells the fly’s brain…and beyond

Gwyneth Card
Janelia Research Campus, HHMI
Jun 1, 2022

Fly Escape Behaviors: Flexible and Modular

We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.

Neural Circuits for Escape

Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, which the fly may decide if and when to employ. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.

Automated Single-Fly Screening

We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.

Measuring Neural Activity During Behavior

We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.

SeminarNeuroscience

In pursuit of a universal, biomimetic iBCI decoder: Exploring the manifold representations of action in the motor cortex

Lee Miller
Northwestern University
May 20, 2022

My group pioneered the development of a novel intracortical brain-computer interface (iBCI) that decodes muscle activity (EMG) from signals recorded in the motor cortex of animals. We use these synthetic EMG signals to control functional electrical stimulation (FES), which causes the muscles to contract and thereby restores rudimentary voluntary control of the paralyzed limb. In the past few years, there has been much interest in the fact that information from the millions of neurons active during movement can be reduced to a small number of “latent” signals in a low-dimensional manifold computed from multi-neuron recordings. These signals can provide a stable prediction of the animal’s behavior over periods of many months, and they may also provide the means to implement transfer learning across individuals, an application that could be of particular importance for paralyzed human users. We have begun to examine the representation within this latent space of a broad range of behaviors, including well-learned, stereotyped movements in the lab and more natural movements in the animal’s home cage, meant to better represent a person’s daily activities. We intend to develop an FES-based iBCI that will restore voluntary movement across a broad range of motor tasks without need for intermittent recalibration. However, the nonlinearities and context dependence within this low-dimensional manifold present significant challenges.
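
The latent-manifold idea can be sketched as follows: project population activity onto a few principal components and fit a linear map from those latents to EMG. Everything below (dimensions, noise levels, the use of PCA plus ridge regression) is an invented illustration, not the group's actual decoder.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic recording: 200 neurons driven by 5 smooth latent signals;
# 4 EMG channels are noisy linear readouts of the same latents,
# mimicking a low-dimensional manifold shared with behavior.
n_t, n_latent, n_neurons, n_emg = 3000, 5, 200, 4
latents = np.cumsum(rng.standard_normal((n_t, n_latent)), axis=0)
latents -= latents.mean(axis=0)
neural = latents @ rng.standard_normal((n_latent, n_neurons))
neural += 0.5 * rng.standard_normal(neural.shape)
emg = latents @ rng.standard_normal((n_latent, n_emg))
emg += 0.5 * rng.standard_normal(emg.shape)

# iBCI-style decoder: reduce neurons to a low-D latent space, then
# fit a linear map from latents to EMG; evaluate on held-out time.
tr, te = train_test_split(np.arange(n_t), test_size=0.25, random_state=0)
pca = PCA(n_components=n_latent).fit(neural[tr])
dec = Ridge(alpha=1.0).fit(pca.transform(neural[tr]), emg[tr])
r2 = dec.score(pca.transform(neural[te]), emg[te])
```

Because the synthetic data are exactly linear and low-dimensional, the held-out R² is high; the nonlinearities and context dependence the abstract mentions are precisely what breaks this clean picture on real behavior.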

SeminarNeuroscienceRecording

Visualization and manipulation of our perception and imagery by BCI

Takufumi Yanagisawa
Osaka University
Apr 1, 2022

We have been developing brain-computer interfaces (BCIs) for clinical application using electrocorticography (ECoG) [1], recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied to severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool to modulate pathological neural activity in neuropsychiatric disorders. Although these techniques have been developed for clinical application, BCI is also an important tool for investigating neural function. For example, a motor BCI records neural activity from part of the motor cortex to generate movements of external devices. Although our motor system is a complex system comprising the motor cortex, basal ganglia, cerebellum, spinal cord, and muscles, a BCI allows us to simplify it into a system with exactly known inputs, outputs, and the relation between them, so we can investigate the motor system by manipulating the parameters of the BCI. Recently, we have been developing BCIs to visualize and manipulate perception and mental imagery. Although these BCIs were developed for clinical application, they will also be useful for understanding how our neural system generates perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], in which pain is controlled by an MEG-based BCI, and the development of a communication BCI using ECoG [4] that enables subjects to visualize the contents of their mental imagery. I would also like to discuss how much we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool for visualizing and manipulating perception and imagery, and for understanding our consciousness.

References
1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361.
2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209.
3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426.
4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.

SeminarNeuroscience

Learning binds novel inputs into functional synaptic clusters via spinogenesis

Nathan Hedrick
UCSD
Mar 30, 2022

Learning is known to induce the formation of new dendritic spines, but despite decades of effort, the functional properties of new spines in vivo remain unknown. Here, using a combination of longitudinal in vivo 2-photon imaging of the glutamate reporter iGluSnFR and correlative light-electron microscopy (CLEM) of dendritic spines on the apical dendrites of L2/3 excitatory neurons in the motor cortex during motor learning, we describe a framework for new spines' formation, survival, and resulting function. Specifically, our data indicate that the potentiation of a subset of clustered, pre-existing spines showing task-related activity in early sessions of learning creates a micro-environment of plasticity within dendrites, wherein multiple filopodia sample the nearby neuropil, form connections with pre-existing boutons connected to allodendritic spines, and are then selected for survival based on co-activity with nearby task-related spines. Thus, the formation and survival of new spines are determined by the functional micro-environment of dendrites. After formation, new spines show preferential co-activation with nearby task-related spines. This synchronous activity is more specific to movements than the activation of individual spines in isolation and, further, is coincident with movements that are more similar to the learned pattern. Thus, new spines functionally engage with their parent clusters to signal the learned movement. Finally, by reconstructing the axons associated with new spines, we found that they synapse with axons previously unrepresented in these dendritic domains, suggesting that the strong local co-activity structure exhibited by new spines is likely not due to axon sharing. Thus, learning involves the binding of new information streams into functional synaptic clusters to subserve the learned behavior.

SeminarNeuroscienceRecording

Dynamic dopaminergic signaling probabilistically controls the timing of self-timed movements

Allison Hamilos
Assad Lab, Harvard University
Feb 23, 2022

Human movement disorders and pharmacological studies have long suggested that dopamine modulates the pace of the internal clock. But how does the endogenous dopaminergic system influence the timing of our movements? We examined the relationship between dopaminergic signaling and the timing of reward-related, self-timed movements in mice. Animals were trained to initiate licking after a self-timed interval following a start cue; reward was delivered if the animal’s first lick fell within a rewarded window (3.3-7 s). The first-lick timing distributions exhibited the scalar property, and we leveraged the considerable variability in these distributions to determine how the activity of the dopaminergic system related to the animals’ timing. Surprisingly, dopaminergic signals ramped up over the seconds between the start-timing cue and the self-timed movement, with variable dynamics that predicted the movement/reward time, even on single trials. Steeply rising signals preceded early initiation, whereas slowly rising signals preceded later initiation. Higher baseline signals also predicted earlier self-timed movement. Optogenetic activation of dopamine neurons during self-timing did not trigger immediate movements, but rather caused systematic early-shifting of the timing distribution, whereas inhibition caused late-shifting, as if dopaminergic manipulation modulated the moment-to-moment probability of unleashing the planned movement. Consistent with this view, the dynamics of the endogenous dopaminergic signals quantitatively predicted the moment-by-moment probability of movement initiation. We conclude that ramping dopaminergic signals, potentially encoding dynamic reward expectation, probabilistically modulate the moment-by-moment decision of when to move. (Based on Hamilos et al., eLife, 2021.)
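
The ramping account suggests a simple toy model: on each trial a signal rises from a trial-specific baseline with a trial-specific slope, and movement is initiated at threshold crossing, so steeper slopes and higher baselines both yield earlier movements. All parameters below are invented for illustration, not fitted to the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ramp-to-threshold model of self-timed movement initiation.
n_trials, threshold = 500, 1.0
baseline = rng.uniform(0.0, 0.4, n_trials)   # trial-specific offset
slope = rng.uniform(0.1, 0.6, n_trials)      # trial-specific ramp rate
move_time = (threshold - baseline) / slope   # time of threshold crossing

# Steeper slopes and higher baselines should both predict earlier
# movement, as reported for the endogenous dopaminergic signals.
r_slope = np.corrcoef(slope, move_time)[0, 1]
r_base = np.corrcoef(baseline, move_time)[0, 1]
```

This deterministic sketch omits the probabilistic, moment-by-moment gating the abstract emphasizes, but it reproduces the sign of both single-trial correlations.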

SeminarNeuroscienceRecording

The vestibular system: a multimodal sense

Elisa Raffaella Ferre
Birkbeck, University of London
Jan 20, 2022

The vestibular system plays an essential role in everyday life, contributing to a surprising range of functions from reflexes to the highest levels of perception and consciousness. Three orthogonal semicircular canals detect rotational movements of the head, and the otolith organs sense translational acceleration, including the gravitational vertical. But how are vestibular signals encoded by the human brain? We have recently combined innovative methods for eliciting virtual rotation and translation sensations with fMRI to identify brain areas representing vestibular signals. We have identified a bilateral inferior parietal, ventral premotor/anterior insula and prefrontal network and confirmed that these areas reliably carry information about rotation and translation. We have also investigated how vestibular signals are integrated with other sensory cues to generate our perception of the external environment.

SeminarNeuroscienceRecording

NMC4 Short Talk: Brain-inspired spiking neural network controller for a neurorobotic whisker system

Alberto Antonietti
University of Pavia
Dec 2, 2021

It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model to study active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot, using the Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.

SeminarNeuroscienceRecording

NMC4 Short Talk: Decoding finger movements from human posterior parietal cortex

Charles Guan
California Institute of Technology
Dec 1, 2021

Restoring hand function is a top priority for individuals with tetraplegia. This challenge motivates considerable research on brain-computer interfaces (BCIs), which bypass damaged neural pathways to control paralyzed or prosthetic limbs. Here, we demonstrate BCI control of a prosthetic hand using intracortical recordings from the posterior parietal cortex (PPC). As part of an ongoing clinical trial, two participants with cervical spinal cord injury were each implanted with a 96-channel array in the left PPC. Across four sessions each, we recorded neural activity while they attempted to press individual fingers of the contralateral (right) hand. Single neurons modulated selectively for different finger movements. Offline, we accurately classified finger movements from neural firing rates using linear discriminant analysis (LDA) with cross-validation (accuracy = 90%; chance = 17%). Finally, the participants used the neural classifier online to control all five fingers of a BCI hand. Online control accuracy (86%; chance = 17%) exceeded previous state-of-the-art finger BCIs. Furthermore, offline, we could classify both flexion and extension of the right fingers, as well as flexion of all ten fingers. Our results indicate that neural recordings from PPC can be used to control prosthetic fingers, which may contribute to a hand-restoration strategy for people with tetraplegia.
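The offline decoding step described above can be sketched in a few lines. This is a hypothetical illustration on synthetic firing-rate data, not the study's recordings: the neuron count, tuning strength, noise level, and number of movement conditions are all assumptions, and LDA is implemented here directly (class means plus a pooled, regularized covariance) rather than via any particular library.

```python
# Hypothetical sketch: classify which finger moved from trial firing rates
# using linear discriminant analysis (LDA) with cross-validation.
# All data here are synthetic; dimensions and noise are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_classes, trials_per_class, n_neurons = 6, 40, 96  # chance = 1/6 ~ 17%

# Simulate firing rates: each condition evokes a distinct mean pattern plus noise.
means = rng.normal(10.0, 3.0, size=(n_classes, n_neurons))
X = np.vstack([m + rng.normal(0.0, 2.0, size=(trials_per_class, n_neurons))
               for m in means])
y = np.repeat(np.arange(n_classes), trials_per_class)

def lda_fit(X, y):
    """Fit LDA: per-class means and a shared (pooled, regularized) covariance."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    centered = X - mu[y]
    cov = centered.T @ centered / (len(X) - len(classes))
    cov += 1e-3 * np.eye(X.shape[1])  # ridge term keeps the inverse stable
    return classes, mu, np.linalg.inv(cov)

def lda_predict(model, X):
    classes, mu, prec = model
    # Linear discriminant score per class (equal priors assumed).
    scores = X @ prec @ mu.T - 0.5 * np.einsum('ci,ij,cj->c', mu, prec, mu)
    return classes[np.argmax(scores, axis=1)]

# 5-fold cross-validation on shuffled trials.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 5)
accs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([f for i, f in enumerate(folds) if i != k])
    model = lda_fit(X[train], y[train])
    accs.append(np.mean(lda_predict(model, X[test]) == y[test]))

print(f"cross-validated accuracy: {np.mean(accs):.2f} (chance = {1/n_classes:.2f})")
```

With well-separated synthetic tuning the classifier is near ceiling; the point is only the shape of the analysis, not the numbers.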

SeminarNeuroscience

The dynamics of temporal attention

Rachel Denison
Boston University
Nov 24, 2021

Selection is the hallmark of attention: processing improves for attended items but is relatively impaired for unattended items. It is well known that visual spatial attention changes sensory signals and perception in this selective fashion. In the work I will present, we asked whether and how attentional selection happens across time. First, our experiments revealed that voluntary temporal attention (attention to specific points in time) is selective, resulting in perceptual tradeoffs across time. Second, we measured small eye movements called microsaccades and found that directing voluntary temporal attention increases the stability of the eyes in anticipation of an attended stimulus. Third, we developed a computational model of dynamic attention, which proposes specific mechanisms underlying temporal attention and its selectivity. Lastly, I will mention how we are testing predictions of the model with MEG. Altogether, this research shows how precisely timed voluntary attention helps manage inherent limits in visual processing across short time intervals, advancing our understanding of attention as a dynamic process.

SeminarNeuroscienceRecording

Space and its computational challenges

Jennifer Groh
Duke University
Nov 18, 2021

How our senses work both separately and together involves rich computational problems. I will discuss the spatial and representational problems faced by the visual and auditory system, focusing on two issues. 1. How does the brain correct for discrepancies in the visual and auditory spatial reference frames? I will describe our recent discovery of a novel type of otoacoustic emission, the eye movement related eardrum oscillation, or EMREO (Gruters et al, PNAS 2018). 2. How does the brain encode more than one stimulus at a time? I will discuss evidence for neural time-division multiplexing, in which neural activity fluctuates across time to allow representations to encode more than one simultaneous stimulus (Caruso et al, Nat Comm 2018). These findings all emerged from experimentally testing computational models regarding spatial representations and their transformations within and across sensory pathways. Further, they speak to several general problems confronting modern neuroscience such as the hierarchical organization of brain pathways and limits on perceptual/cognitive processing.

SeminarNeuroscience

Looking and listening while moving

Tom Freeman
Cardiff University
Nov 17, 2021

In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least 3 reasons why this turns out to be a difficult problem for the brain to solve (and us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. By preventing eye and head movements, this problem has been neatly sidestepped in the literature, yet self-movement is the norm. Second, self-movement creates visual and auditory image motion. Correct interpretation therefore requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (all?) of these problems could be solved by considering the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model based on simple judgements of audiovisual motion within a circular array of speakers and LEDs that surround the participant while they execute self-controlled head movement.
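The "classic cue integration framework" mentioned above, before the talk's modifications for bias and correlation, is reliability-weighted averaging of independent Gaussian cues. A minimal sketch, with purely illustrative numbers (the degrees and variances below are assumptions, not values from the study):

```python
# Classic maximum-likelihood cue integration: two unbiased, independent
# Gaussian cues combine with weights proportional to their reliabilities
# (inverse variances). The talk's model extends this to biased, partially
# correlated cues; this is only the textbook starting point.
def integrate(mu_v, var_v, mu_a, var_a):
    """Reliability-weighted average of a visual and an auditory cue."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    mu = w_v * mu_v + (1.0 - w_v) * mu_a
    var = 1.0 / (1.0 / var_v + 1.0 / var_a)  # combined cue is more reliable
    return mu, var

# Example: a reliable visual cue at 10 deg, a noisier auditory cue at 20 deg.
mu, var = integrate(10.0, 1.0, 20.0, 4.0)
print(mu, var)  # 12.0 0.8 — estimate pulled toward the more reliable cue
```

Note that the combined variance (0.8) is below both single-cue variances, which is the usual signature of optimal integration.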

SeminarNeuroscienceRecording

The role of high- and low-level factors in smooth pursuit of predictable and random motions

Eileen Kowler
Rutgers
Oct 19, 2021

Smooth pursuit eye movements are among our most intriguing motor behaviors. They are able to keep the line of sight on smoothly moving targets with little or no overt effort or deliberate planning, and they can respond quickly and accurately to changes in the trajectory of motion of targets. Nevertheless, despite these seeming automatic characteristics, pursuit is highly sensitive to high-level factors, such as the choices made about attention, or beliefs about the direction of upcoming motion. Investigators have struggled for decades with the problem of incorporating both high- and low-level processes into a single coherent model. This talk will present an overview of the current state of efforts to incorporate high- and low-level influences, as well as new observations that add to our understanding of both types of influences. These observations (in contrast to much of the literature) focus on the directional properties of pursuit. Studies will be presented that show: (1) the direction of smooth pursuit made to pursue fields of noisy random dots depends on the relative reliability of the sensory signal and the expected motion direction; (2) smooth pursuit shows predictive responses that depend on the interpretation of cues that signal an impending collision; and (3) smooth pursuit during a change in target direction displays kinematic properties consistent with the well-known two-thirds power law. Implications for incorporating high- and low-level factors into the same framework will be discussed.
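The "two-thirds power law" referenced in point (3) relates movement kinematics to path geometry: tangential speed scales as curvature to the power -1/3 (equivalently, angular velocity scales as curvature to the 2/3). A minimal numeric sketch, with an illustrative gain term that is an assumption, not a fitted value:

```python
# Two-thirds power law: tangential speed v = g * kappa**(-1/3), so on
# log-log axes speed vs. curvature is a line of slope -1/3.
# The gain and curvature values below are illustrative only.
import numpy as np

def power_law_speed(kappa, gain=1.0, beta=1.0 / 3.0):
    """Predicted tangential speed for a given path curvature kappa."""
    return gain * kappa ** (-beta)

kappa = np.array([0.5, 1.0, 2.0, 4.0])
v = power_law_speed(kappa)

# Recover the exponent from the log-log slope.
slope = np.polyfit(np.log(kappa), np.log(v), 1)[0]
print(round(slope, 3))  # -0.333
```

Checking pursuit trajectories against this relation is one way a kinematic analysis like the one described can test whether eye movements obey the same law as hand movements.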

SeminarNeuroscience

The role of motion in localizing objects

Patrick Cavanagh
Department of Psychological and Brain Research, Dartmouth College
Sep 13, 2021

Everything we see has a location. We know where things are before we know what they are. But how do we know where things are? Receptive fields in the visual system specify location but neural delays lead to serious errors whenever targets or eyes are moving. Motion may be the problem here but motion can also be the solution, correcting for the effects of delays and eye movements. To demonstrate this, I will present results from three motion illusions where perceived location differs radically from physical location. These help understand how and where position is coded. We first look at the effects of a target’s simple forward motion on its perceived location. Second, we look at perceived location of a target that has internal motion as well as forward motion. The two directions combine to produce an illusory path. This “double-drift” illusion strongly affects perceived position but, surprisingly, not eye movements or attention. Even more surprising, fMRI shows that the shifted percept does not emerge in the visual cortex but is seen instead in the frontal lobes. Finally, we report that a moving frame also shifts the perceived positions of dots flashed within it. Participants report the dot positions relative to the frame, as if the frame were not moving. These frame-induced position effects suggest a link to visual stability where we see a steady world despite massive displacements during saccades. These motion-based effects on perceived location lead to new insights concerning how and where position is coded in the brain.

SeminarNeuroscience

Neural circuits that support robust and flexible navigation in dynamic naturalistic environments

Hannah Haberkern
HHMI Janelia Research Campus
Aug 16, 2021

Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.

SeminarNeuroscienceRecording

What are you looking at? Adventures in human gaze behaviour

Benjamin De Haas
Giessen University
Jun 29, 2021

SeminarNeuroscience

Causal coupling between neural activity, metabolism, and behavior across the Drosophila brain

Kevin Mann
Stanford School of Medicine
Jun 7, 2021

Coordinated activity across networks of neurons is a hallmark of both resting and active behavioral states in many species, including worms, flies, fish, mice and humans. These global patterns alter energy metabolism in the brain over seconds to hours, making oxygen consumption and glucose uptake widely used proxies of neural activity. However, whether changes in neural activity are causally related to changes in metabolic flux in intact circuits on the sub-second timescales associated with behavior, is unclear. Moreover, it is unclear whether differences between rest and action are associated with spatiotemporally structured changes in neuronal energy metabolism at the subcellular level. My work combines two-photon microscopy across the fruit fly brain with sensors that allow simultaneous measurements of neural activity and metabolic flux, across both resting and active behavioral states. It demonstrates that neural activity drives changes in metabolic flux, creating a tight coupling between these signals that can be measured across large-scale brain networks. Further, using local optogenetic perturbation, I show that even transient increases in neural activity result in rapid and persistent increases in cytosolic ATP, suggesting that neuronal metabolism predictively allocates resources to meet the energy demands of future neural activity. Finally, these studies reveal that the initiation of even minimal behavioral movements causes large-scale changes in the pattern of neural activity and energy metabolism, revealing unexpectedly widespread engagement of the central brain.

SeminarNeuroscienceRecording

Analogies in motor learning - acquisition and refinement of movement skills

Oryan Zacks
Tel Aviv University
May 27, 2021

Analogies are widely used by teachers and coaches of different movement disciplines, serving a role during the learning phase of a new skill and in honing one’s performance to a competitive level. In previous studies, analogies improved motor control in various tasks and across age groups. Our study aimed to evaluate the efficacy of analogies throughout the learning process, using kinematic measures for an in-depth analysis. We tested whether applying analogies can shorten the motor learning process and induce insight and skill improvement in tasks that usually demand many hours of practice. The experiment included a drawing task, in which subjects were asked to connect four dots into a closed shape, and a mirror game, in which subjects tracked an oval that moved across the screen. After establishing a baseline, subjects were given an analogy, explicit instructions, or no further instruction. We compared their improvement in overall skill, accuracy, and speed. Subjects in the analogy and explicit groups improved their performance in the drawing task, while in the mirror game significant differences between the analogy and control groups emerged only for slow movements. In conclusion, analogies are an important tool for teachers and coaches, and more research is needed to understand how to apply them for maximum results. They can rapidly change motor control and strategy but may also affect only some aspects of a movement and not others. Careful thought is needed to construct an effective analogy that encompasses relevant movement facets, as well as the practitioner’s personal background and experience.

SeminarNeuroscienceRecording

Neural mechanisms of active vision in the marmoset monkey

Jude Mitchell
University of Rochester
May 12, 2021

Human vision relies on rapid eye movements (saccades) 2-3 times every second to bring peripheral targets to central foveal vision for high resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have similar visual processing and eye movements as humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade to fixation cycle, with most information localized to brief post-saccadic transients. In a second study we examined if target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion for a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival. 
These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY

SeminarNeuroscienceRecording

Sparse expansion in cerebellum favours learning speed and performance in the context of motor control

Adriana Perez Rotondo
University of Cambridge
Apr 14, 2021

The cerebellum contains more than half of the brain’s neurons and is essential for motor control. Its neural circuits have a distinctive architecture comprising a large, sparse expansion from the input mossy fibres to the granule cell layer. For years, theories have been formulated of how cerebellar architectural features relate to cerebellar function, and it has been shown that some of these features can facilitate pattern separation. However, these theories do not consider the need for the cerebellum to learn quickly in order to control smooth and accurate movements. Here, we confront this gap. This talk will show that the expansion to the granule cell layer in the cerebellar cortex improves learning speed and performance in the context of motor control, by considering a cerebellar-like network learning an internal model of a motor apparatus online. By expressing the general form of the learning rate for such a system, this talk will provide a calculation of how increasing the number of granule cells diminishes the effect of noise and increases the learning speed. The researchers propose that the particular architecture of cerebellar circuits modifies the geometry of the error function in a way favourable to faster learning. Their results illuminate a new link between cerebellar structure and function.

SeminarNeuroscience

Generalizing theories of cerebellum-like learning

Ashok Litwin Kumar
Columbia University
Mar 19, 2021

Since the theories of Marr, Ito, and Albus, the cerebellum has provided an attractive well-characterized model system to investigate biological mechanisms of learning. In recent years, theories have been developed that provide a normative account for many features of the anatomy and function of cerebellar cortex and cerebellum-like systems, including the distribution of parallel fiber-Purkinje cell synaptic weights, the expansion in neuron number of the granule cell layer and their synaptic in-degree, and sparse coding by granule cells. Typically, these theories focus on the learning of random mappings between uncorrelated inputs and binary outputs, an assumption that may be reasonable for certain forms of associative conditioning but is also quite far from accounting for the important role the cerebellum plays in the control of smooth movements. I will discuss in-progress work with Marjorie Xie, Samuel Muscinelli, and Kameron Decker Harris generalizing these learning theories to correlated inputs and general classes of smooth input-output mappings. Our studies build on earlier work in theoretical neuroscience as well as recent advances in the kernel theory of wide neural networks. They illuminate the role of pre-expansion structures in processing input stimuli and the significance of sparse granule cell activity. If there is time, I will also discuss preliminary work with Jack Lindsey extending these theories beyond cerebellum-like structures to recurrent networks.

ePosterNeuroscience

Decoding Upper Limb Movements

Marie D. Schmidt, Ioannis Iossifidis

Bernstein Conference 2024

ePosterNeuroscience

Deep Reinforcement Learning mimics Neural Strategies for Limb Movements

Muhammad Noman Almani, Shreya Saxena

COSYNE 2022

ePosterNeuroscience

Exceptionally large rewards lead to a collapse in neural information about upcoming movements

Adam Smoulder, Patrick Marino, Nicholas Pavlovsky, Emily Oby, Sam Snyder, William Bishop, Byron Yu, Steven Chase, Aaron Batista

COSYNE 2022

ePosterNeuroscience

Facial movements and their neural correlates reveal latent decision variables in mice

Fanny Cazettes, Alfonso Renart, Zachary Mainen

COSYNE 2022

ePosterNeuroscience

Orienting eye movements during REM sleep

Yuta Senzai, Massimo Scanziani

COSYNE 2022

ePosterNeuroscience

Beyond task-optimized neural models: constraints from eye movements during navigation

Akis Stavropoulos, Kaushik Lakshminarasimhan, Dora Angelaki

COSYNE 2023

ePosterNeuroscience

Neural Manifolds Underlying Naturalistic Human Movements in Electrocorticography

Zoe Steine-Hanson, Rajesh P. N. Rao, Bing Brunton

COSYNE 2023

ePosterNeuroscience

Differential computations across multiple brain regions underlying dexterous movements

Ahmet Arac, Sanjay Shukla, Erica Nagase, Alan Yao, Nicolas Jeong Lee, Kate Santoso, Emily Stenzler, Kasey Kim, David Lipkin, Angela Kan, Christina Abdishoo

COSYNE 2025

ePosterNeuroscience

A musculoskeletal simulation of Drosophila to study the biomechanics of limb movements

Pembe Gizem Ozdil, Chuanfang Ning, Jasper Phelps, Auke Ijspeert, Pavan Ramdya

COSYNE 2025

ePosterNeuroscience

Altered lateralized readiness potential in stroke patients during healthy and paretic hand movements

Aleksandra Medvedeva, Nikolay Syrov, Yana Alieva, Lev Yakovlev, Daria Petrova, Galina Ivanova, Alexander Kaplan, Mikhail Lebedev

FENS Forum 2024

ePosterNeuroscience

Anatomo-functional diversity of medullary V2a neurons for limb and cranial nerve-mediated movements

Alexis d'Humières, Mathilde Gonin, Guillaume Le Goc, Giovanni Usseglio, Edwin Gatier, Julien Bouvier

FENS Forum 2024

ePosterNeuroscience

A behavioral setup for capturing fine-grained coordinated 3D movements of zebrafish larvae

Katharina Koetter, Nathan van Beelen, Ruben Portugues

FENS Forum 2024

ePosterNeuroscience

Blurring the line between imagination and reality: Motor imagery influences performance of linked movements

Magdalena Gippert, Pei-Cheng Shih, Tobias Heed, Ian Howard, Mina Jamshidi, Arno Villringer, Bernhard Sehm, Vadim Nikulin

FENS Forum 2024

ePosterNeuroscience

Changes in the amplitude of the task-evoked hemodynamic response during grip movements; simultaneous fNIRS and fMRI measurements

Satoshi Yamamoto, Hiroshi Kawaguchi, Daisuke Ishii, Yutaka Kohno

FENS Forum 2024

ePosterNeuroscience

Differential effects of working memory load during motor decision-making on planning and execution of goal-directed pointing movements

Melanie Krüger, Alexander Pleger

FENS Forum 2024

ePosterNeuroscience

Exploring gaze movements in lampreys: Insights into vertebrate neural mechanisms for stabilizing and goal-oriented eye movements

Marta Barandela, Carmen Núñez-González, Cecilia Jiménez-López, Manuel A. Pombal, Juan Pérez-Fernández

FENS Forum 2024

ePosterNeuroscience

Non-linear and mixed encoding of body movements by individual Purkinje cells

Jorge Enrique Ramirez Buritica, Hugo Marques, Pedro Castelhanito, Diogo Duarte, Ana Gonçalves, Megan R. Carey

FENS Forum 2024

ePosterNeuroscience

Novel interdisciplinary intervention approach: Nurturing neuroplasticity through spontaneous unfolding movements

Erika Chovanec, Karl Garnitschnig

FENS Forum 2024

ePosterNeuroscience

In vivo widefield calcium imaging of cortical activity during reach-to-grasp movements in a mouse stroke model

Matteo Panzeri, Fritjof Helmchen, Anna Sophia Wahl

FENS Forum 2024

movements coverage

70 items (Seminar: 50, ePoster: 20)