Walking
Neural mechanisms of rhythmic motor control in Drosophila
All animal locomotion is rhythmic, whether it is achieved through undulatory movement of the whole body or the coordination of articulated limbs. Neurobiologists have long studied locomotor circuits that produce rhythmic activity from non-rhythmic input, also called central pattern generators (CPGs). However, the cellular and microcircuit implementation of a walking CPG has not been described for any limbed animal. New comprehensive connectomes of the fruit fly ventral nerve cord (VNC) provide an opportunity to study rhythmogenic walking circuits at a synaptic scale. We use a data-driven network modeling approach to identify and characterize a putative walking CPG in the Drosophila leg motor system.
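As a minimal illustration of rhythmogenesis from non-rhythmic input (a sketch only, not the connectome-derived circuit described in this abstract), the following Python snippet simulates a half-center oscillator: two mutually inhibitory rate units with slow adaptation that produce alternating bursts under constant drive.

```python
import numpy as np

# Minimal half-center oscillator: two mutually inhibitory rate units with
# slow adaptation generate alternating activity from constant drive.
# Illustrative only -- not the connectome-derived circuit described above.
def simulate(T=2000, dt=1.0, drive=1.0, w_inh=2.0, tau_r=20.0, tau_a=200.0, g_a=1.0):
    r = np.zeros((T, 2))          # firing rates of the two units
    a = np.zeros(2)               # slow adaptation variables
    r[0] = [0.6, 0.1]             # asymmetric start to break symmetry
    for t in range(1, T):
        inp = drive - w_inh * r[t - 1, ::-1] - g_a * a   # drive minus cross-inhibition and adaptation
        dr = (-r[t - 1] + np.maximum(inp, 0.0)) / tau_r  # rectified-linear rate dynamics
        da = (-a + r[t - 1]) / tau_a                     # adaptation tracks activity slowly
        r[t] = r[t - 1] + dt * dr
        a = a + dt * da
    return r

rates = simulate()
print("unit 1 active fraction:", (rates[:, 0] > rates[:, 1]).mean())  # near 0.5 when the units alternate
```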
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10 μm to 2 mm across a 325 square centimeter area for extended durations, ranging from hours to days, at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
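A minimal sketch of the closed-loop tracking idea (illustrative Python, not the released Trackoscope software): segment the organism, find its centroid, and command a proportional 2-axis stage correction toward the image centre. The gain and pixel-to-micron scale below are placeholder values.

```python
import numpy as np

# Minimal closed-loop tracking sketch (illustrative, not the Trackoscope firmware):
# threshold the frame, find the organism's centroid, and move the XY stage a
# fraction of the offset between the centroid and the image centre.
def centroid(frame, thresh=0.5):
    mask = frame > thresh                       # crude segmentation of a bright organism
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                             # organism lost; keep stage still
    return xs.mean(), ys.mean()

def tracking_step(frame, stage_xy, gain=0.3, um_per_px=1.0):
    h, w = frame.shape
    c = centroid(frame)
    if c is None:
        return stage_xy
    err_x = c[0] - w / 2.0                      # pixel offset from image centre
    err_y = c[1] - h / 2.0
    # Proportional correction; a real system would also rate-limit the stage.
    return (stage_xy[0] + gain * err_x * um_per_px,
            stage_xy[1] + gain * err_y * um_per_px)

frame = np.zeros((480, 640)); frame[100:110, 500:510] = 1.0   # fake bright organism
print(tracking_step(frame, (0.0, 0.0)))
```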
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
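A hedged sketch of what a connectome-constrained rate model can look like (not the published model [1]): the measured connectivity matrix fixes the wiring, with random synapse counts and assumed per-neuron signs standing in here, and only scalar gains are left as free parameters.

```python
import numpy as np

# Sketch of connectome-constrained rate dynamics (illustrative, not the published model):
# a synapse-count matrix W fixes who connects to whom; only scalar gains are free.
rng = np.random.default_rng(0)
N = 200
signs = rng.choice([1.0, -1.0], size=(1, N))            # assumed sign per presynaptic neuron
W = rng.poisson(0.5, size=(N, N)) * signs               # synapse counts x signs (stand-in data)

def simulate(W, stim, T=500, dt=0.1, tau=10.0, gain=0.01):
    r = np.zeros(W.shape[0])
    for _ in range(T):
        r += dt / tau * (-r + np.maximum(gain * W @ r + stim, 0.0))  # rectified-linear rate model
    return r

stim = np.zeros(N); stim[:20] = 1.0      # drive a small set of "input" neurons
print(simulate(W, stim)[:5])
```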
Sampling the environment with body-brain rhythms
Since Darwin, comparative research has shown that most animals share basic timing capacities, such as the ability to process temporal regularities and produce rhythmic behaviors. What seem to be more exclusive, however, are the capacities to generate temporal predictions and to display anticipatory behavior at salient time points. These abilities are associated with subcortical structures like the basal ganglia (BG) and cerebellum (CE), which are more developed in humans than in nonhuman animals. In the first research line, we investigated the basic capacities to extract temporal regularities from the acoustic environment and produce temporal predictions. We did so by adopting a comparative and translational approach, making use of a unique EEG dataset including 2 macaque monkeys, 20 healthy young participants, 11 healthy older participants, and 22 stroke patients, 11 with focal lesions in the BG and 11 in the CE. In the second research line, we holistically explore the functional relevance of body-brain physiological interactions in human behavior. A series of planned studies investigates the functional mechanisms by which body signals (e.g., respiratory and cardiac rhythms) interact with and modulate neurocognitive functions, from rest and sleep states to action and perception. This project supports the effort towards individual profiling: are individuals’ timing capacities (e.g., rhythm perception and production) and general behavior (e.g., individual walking and speaking rates) influenced or shaped by body-brain interactions?
Trial by trial predictions of subjective time from human brain activity
Our perception of time isn’t like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life, the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people’s duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer: the biases in time perception that reflect how natural experience of time deviates from clock time.
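A rough sketch of the underlying idea, under the assumption that duration estimates track accumulated salient changes in activity patterns rather than elapsed clock time (this is not the fitted fMRI model, and the threshold values below are arbitrary):

```python
import numpy as np

# Sketch of the change-accumulation idea (illustrative only): duration estimates
# grow with the number of salient changes in activity patterns across
# hierarchical sensory regions, not with elapsed clock time.
def estimate_duration(activity, thresholds):
    # activity: (time, regions, features) array of activity patterns
    # thresholds: per-region criterion for what counts as a "salient" change
    T, R, F = activity.shape
    counts = np.zeros(R)
    prev = activity[0]
    for t in range(1, T):
        change = np.linalg.norm(activity[t] - prev, axis=1)   # change per region
        counts += change > thresholds                         # accumulate salient events
        prev = activity[t]
    return counts.sum()        # mapped to seconds via a fitted regression in practice

rng = np.random.default_rng(1)
busy = rng.normal(0, 1.0, (300, 3, 50))     # rapidly changing scene
calm = rng.normal(0, 0.2, (300, 3, 50))     # slowly changing scene
th = np.array([8.0, 8.0, 8.0])
print(estimate_duration(busy, th), ">", estimate_duration(calm, th))
```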
Connecting structure and function in early visual circuits
How does the brain interpret signals from the outside world? Walking through a park, you might take for granted the ease with which you can understand what you see. Rather than seeing a series of still snapshots, you are able to see simple, fluid movement — of dogs running, squirrels foraging, or kids playing basketball. You can track their paths and know where they are headed without much thought. “How does this process take place?” asks Rudy Behnia, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “For most of us, it’s hard to imagine a world where we can’t see motion, shapes, and color; where we can’t have a representation of the physical world in our head.” And yet this representation does not happen automatically — our brain has no direct connection with the outside world. Instead, it interprets information taken in by our senses. Dr. Behnia is studying how the brain builds these representations. As a starting point, she focuses on how we see motion.
Visual Decisions in Natural Action
Natural behavior reveals the way that gaze serves the needs of the current task, and the complex cognitive control mechanisms that are involved. It has become increasingly clear that even the simplest actions involve complex decision processes that depend on an interaction of visual information, knowledge of the current environment, and the intrinsic costs and benefits of action choices. I will explore these ideas in the context of walking in natural terrain, where we are able to recover the 3D structure of the visual environment. We show that subjects choose flexible paths that depend on the flatness of the terrain over the next few steps. Subjects trade off flatness with straightness of their paths towards the goal, indicating a nuanced trade-off between stability and energetic costs on both the time scale of the next step and longer-range constraints.
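The flatness-straightness trade-off can be sketched as a weighted cost over candidate footholds; the weights and candidate steps below are hypothetical, not the authors' fitted values.

```python
import numpy as np

# Sketch of the flatness/straightness trade-off as a weighted cost (illustrative):
# each candidate step is scored by terrain roughness at the foothold plus
# angular deviation from the goal direction, and the cheapest step is chosen.
def choose_step(candidates, roughness, goal_dir, w_flat=1.0, w_straight=0.5):
    # candidates: (n, 2) step vectors; roughness: (n,) terrain height variance at each foothold
    headings = np.arctan2(candidates[:, 1], candidates[:, 0])
    deviation = np.abs(np.angle(np.exp(1j * (headings - goal_dir))))  # wrapped angular error
    cost = w_flat * roughness + w_straight * deviation
    return candidates[np.argmin(cost)]

cands = np.array([[1.0, 0.0], [0.9, 0.4], [0.9, -0.4]])
rough = np.array([0.8, 0.1, 0.6])                 # the straight-ahead foothold is rocky
print(choose_step(cands, rough, goal_dir=0.0))    # picks the flatter, slightly off-goal step
```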
An optimal population code for global motion estimation in local direction-selective cells
Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal’s own behavior. One prominent example is the encoding of optic flow fields that are generated during self-motion of the animal and will therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code that is matched to represent optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cells encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). However, in flies we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly. The four uniformly tuned T4/T5 subtypes described previously represent a local snapshot (Maisak et al. 2013). The encoding of six types of optic flow in the fly, as compared to four types of optic flow in mice, might be matched to the higher degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matching the individual ethological constraints of the animal.
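A toy version of a matched-filter population code for self-motion, assuming six subtypes tuned to opposing pairs of self-motion axes (orthogonal cardinal axes are used below for simplicity; the measured tunings differ):

```python
import numpy as np

# Sketch of a matched-filter population code for self-motion (illustrative):
# six subtypes, each tuned to one self-motion axis, respond with the rectified
# projection of the animal's motion onto their preferred axis; the motion is
# then read out by an opponent (difference) decoder across opposite pairs.
axes = np.array([[ 1, 0, 0], [-1, 0, 0],
                 [ 0, 1, 0], [ 0,-1, 0],
                 [ 0, 0, 1], [ 0, 0,-1]], dtype=float)   # assumed preferred axes

def encode(v):
    return np.maximum(axes @ v, 0.0)          # rectified tuning of the six subtypes

def decode(responses):
    # opponent readout: difference of opposite-axis subtypes recovers each component
    return responses[0::2] - responses[1::2]

v_true = np.array([0.3, -0.8, 0.5])           # e.g. a translation during flight
print(decode(encode(v_true)))                 # recovers v_true in this noiseless sketch
```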
“Wasn’t there food around here?”: An Agent-based Model for Local Search in Drosophila
The ability to keep track of one’s location in space is a critical behavior for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies’ decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, retaining this memory even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
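A minimal agent sketch of path integration by combining odometry and compass signals in a one-dimensional ring (illustrative only; the search bound and noise level are made-up parameters, not the fitted state-transition models):

```python
import numpy as np

# Sketch of path integration in a ring channel (illustrative): the agent keeps a
# running estimate of its position relative to the food by summing step lengths
# along its current heading, and turns back once it has wandered past a bound.
def simulate_search(steps, bound=50.0, step_len=1.0, seed=2):
    rng = np.random.default_rng(seed)
    pos = 0.0                      # integrated position along the channel; fictive food at 0
    heading = 1.0                  # +1 or -1 along the 1-D channel (compass signal)
    trajectory = []
    for _ in range(steps):
        pos += heading * step_len * (1 + 0.1 * rng.standard_normal())  # noisy odometry
        if abs(pos) > bound:       # integrator says we are too far from the food
            heading = -np.sign(pos)
        trajectory.append(pos)
    return np.asarray(trajectory)

traj = simulate_search(2000)
print("mean distance from fictive food:", np.abs(traj).mean())  # search stays centered on the food
```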
A balancing act: goal-oriented control of stability reflexes by visual feedback
During the course of an animal’s interaction with its environment, activity within central neural circuits is orchestrated exquisitely to structure goal-oriented movement. During walking, for example, the head, body and limbs are coordinated in distinctive ways that are guided by the task at hand, and also by posture and balance requirements. Hence, the overall performance of goal-oriented walking depends on the interplay between task-specific motor plans and stability reflexes. Copies of motor plans, typically described by the term efference copy, modulate stability reflexes in a predictive manner. However, the highly uncertain nature of natural environments indicates that the effect of efference copies on movement control is insufficient; additional mechanisms must exist to regulate stability reflexes and coordinate motor programs flexibly under non-predictable conditions. In this talk, I will discuss our recent work examining how self-generated visual signals orchestrate the interplay between task-specific motor plans and stability reflexes during a self-paced, goal-oriented walking behavior.
Rapid State Changes Account for Apparent Brain and Behavior Variability
Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arises from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.
Reward foraging task and model-based analysis reveal how fruit flies learn the value of available options
Understanding what drives foraging decisions in animals requires careful manipulation of the value of available options while monitoring animal choices. Value-based decision-making tasks, in combination with formal learning models, have provided both an experimental and theoretical framework to study foraging decisions in lab settings. While these approaches were successfully used in the past to understand what drives choices in mammals, very little work has been done on fruit flies, despite the fact that fruit flies have served as a model organism for many complex behavioural paradigms. To fill this gap we developed a single-animal, trial-based decision-making task, where freely walking flies experienced optogenetic sugar-receptor neuron stimulation. We controlled the value of available options by manipulating the probabilities of optogenetic stimulation. We show that flies integrate a reward history of chosen options and forget the value of unchosen options. We further discover that flies assign higher values to rewards experienced early in the behavioural session, consistent with formal reinforcement learning models. Finally, we show that the probabilistic rewards affect walking trajectories of flies, suggesting that accumulated value controls the navigation vector of flies in a graded fashion. These findings establish the fruit fly as a model organism to explore the genetic and circuit basis of value-based decisions.
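A sketch of the kind of learning rule the abstract describes, with hypothetical parameters (learning rate, forgetting rate, softmax temperature) rather than the authors' fitted values: the chosen option's value is updated from reward, the unchosen option's value decays, and choices follow a softmax.

```python
import numpy as np

# Illustrative value-learning-with-forgetting model (not the authors' fitted model):
# the value of the chosen option is updated from reward history, while the value
# of the unchosen option decays; choices are drawn from a softmax over the values.
def softmax_choice(q, beta, rng):
    p = np.exp(beta * q); p /= p.sum()
    return rng.choice(len(q), p=p)

def run_session(reward_probs, trials=200, alpha=0.2, forget=0.1, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    choices = []
    for _ in range(trials):
        c = softmax_choice(q, beta, rng)
        r = float(rng.random() < reward_probs[c])          # probabilistic optogenetic reward
        q[c]     += alpha * (r - q[c])                      # learn the chosen option
        q[1 - c] *= (1.0 - forget)                          # forget the unchosen option
        choices.append(c)
    return np.asarray(choices), q

choices, q = run_session(reward_probs=[0.8, 0.2])
print("fraction choosing richer option:", (choices == 0).mean(), "final values:", q)
```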
The complexity of the ordinary – neural control of locomotion
Today, considerable information is available on the organization and operation of the neural networks that generate the motor output for animal locomotion, such as swimming, walking, or flying. In recent years, the question of which neural mechanisms are responsible for task-specific and flexible adaptations of locomotor patterns has gained increased attention in the field of motor control. I will report on advances we made with respect to this topic for walking in insects, i.e. the leg muscle control system of phasmids and fruit flies. I will present insights into the neural basis of speed control, heading, walking direction, and the role of ground contact in insect walking, both for local control and intersegmental coordination. For these changes in motor activity, modifications in the processing of sensory feedback signals play a pivotal role, for instance for movement and load signals in heading and curve walking, or for movement signals that contribute to intersegmental coordination. Our recent findings prompt future investigations that aim to elucidate the mechanisms by which descending and intersegmental signals interact with local networks in the generation of motor flexibility during walking in animals.
Neural mechanisms of proprioception and motor control in Drosophila
Animals rely on an internal sense of body position and movement to effectively control motor behaviour. This sense of proprioception is mediated by diverse populations of internal mechanosensory neurons distributed throughout the body. My lab is trying to understand how proprioceptive stimuli are detected by sensory neurons, integrated and transformed in central circuits, and used to guide motor output. We approach these questions using genetic tools, in vivo two-photon imaging, and patch-clamp electrophysiology in Drosophila. We recently found that the axons of fly leg proprioceptors are organized into distinct functional projections that contain topographic representations of specific kinematic features: one group of axons encodes tibia position, another encodes movement direction, and a third encodes bidirectional movement and vibration frequency. Whole-cell recordings from downstream neurons reveal that position, movement, and directional information remain segregated in central circuits. These feedback signals then converge upon motor neurons that control leg muscles during walking. Overall, our findings reveal how a low-dimensional stimulus – the angle of a single leg joint – is encoded by a diverse population of mechanosensory neurons. Specific proprioceptive parameters are initially processed by parallel pathways, but are ultimately integrated to influence motor output. This architecture may help to maximize information transmission, processing speed, and robustness, which are critical for feedback control of the limbs during adaptive locomotion.
Algorithms and circuits for olfactory navigation in walking Drosophila
Olfactory navigation provides a tractable model for studying the circuit basis of sensori-motor transformations and goal-directed behaviour. Macroscopic organisms typically navigate in odor plumes that provide a noisy and uncertain signal about the location of an odor source. Work in many species has suggested that animals accomplish this task by combining temporal processing of dynamic odor information with an estimate of wind direction. Our lab has been using adult walking Drosophila to understand both the computational algorithms and the neural circuits that support navigation in a plume of attractive food odor. We developed a high-throughput paradigm to study behavioural responses to temporally-controlled odor and wind stimuli. Using this paradigm we found that flies respond to a food odor (apple cider vinegar) with two behaviours: during the odor they run upwind, while after odor loss they perform a local search. A simple computational model based on these two responses is sufficient to replicate many aspects of fly behaviour in a natural turbulent plume. In ongoing work, we are seeking to identify the neural circuits and biophysical mechanisms that perform the computations delineated by our model. Using electrophysiology, we have identified mechanosensory neurons that compute wind direction from movements of the two antennae, and central mechanosensory neurons that encode wind direction and are involved in generating a stable downwind orientation. Using optogenetic activation, we have traced olfactory circuits capable of evoking upwind orientation and offset search from the periphery, through the mushroom body and lateral horn, to the central complex. Finally, we have used optogenetic activation, in combination with molecular manipulation of specific synapses, to localize temporal computations performed on the odor signal to olfactory transduction and transmission at specific synapses. Our work illustrates how the tools available in the fruit fly can be applied to dissect the mechanisms underlying a complex goal-directed behaviour.
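A minimal agent sketch of the two-response strategy (run upwind during odor, local search after odor loss); the turn statistics and wind convention below are assumptions, not the published model:

```python
import numpy as np

# Sketch of the two-response navigation strategy described above (illustrative):
# while odor is present the agent orients upwind; after odor loss it performs a
# local search by making large random turns.
def navigate(odor, wind_dir=np.pi, speed=1.0, seed=3):
    # wind_dir: direction the wind blows toward, in radians
    rng = np.random.default_rng(seed)
    pos = np.zeros(2)
    heading = rng.uniform(0, 2 * np.pi)
    path = [pos.copy()]
    for odor_on in odor:
        if odor_on:
            heading = wind_dir + np.pi        # ON response: head upwind
        else:
            heading += rng.normal(0, 1.0)     # OFF response: local search via large random turns
        pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
        path.append(pos.copy())
    return np.asarray(path)

odor = np.r_[np.ones(50), np.zeros(50)]       # odor present, then lost
path = navigate(odor)
print("net displacement during odor:", path[50] - path[0])
print("net displacement after loss:", path[-1] - path[50])
```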
A feedback model for predicting targeted perturbations of proprioceptors during fly walking
COSYNE 2022
Optimal Multimodal Integration Supports Course Control Under Uncertainty in Walking Drosophila
COSYNE 2022
Walking elicits global brain activity in adult Drosophila
COSYNE 2022
Descending control of turning during walking
COSYNE 2023
Connectome simulations reveal a putative central pattern generator microcircuit for fly walking
COSYNE 2025
Deep imitation learning for neuromechanical control: realistic walking in an embodied fly
COSYNE 2025
Human Anterior Cingulate Dynamics During Effortful Walking
COSYNE 2025
Walking fruit flies use directional memory in olfactory navigation
COSYNE 2025
Dissecting the requirements for biological repair and restoration of walking following increasingly severe spinal cord injuries at different timepoints
FENS Forum 2024
Frequency tagging in the sensorimotor cortex is enhanced by footstep sounds compared to visual movement information in a walking movement integration task
FENS Forum 2024