song
Latest
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviors, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will describe recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
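As a point of reference for the kind of interpretable program such a search can return, the sketch below implements a standard reward-prediction-error (Q-learning-style) account of two-choice behavior. It is a generic baseline written for illustration, not the program discovered by CogFunSearch; all parameter values and names are assumptions.

```python
import numpy as np

def rpe_learner(choices, rewards, alpha=0.1, beta=3.0):
    """Toy reward-prediction-error model of two-alternative choice data.

    choices: array of 0/1 actions; rewards: array of 0/1 outcomes.
    Returns the per-trial probability the model assigns to each observed
    choice. Illustrative only; not the CogFunSearch-discovered model.
    """
    q = np.zeros(2)                        # action values
    probs = np.empty(len(choices))
    for t, (c, r) in enumerate(zip(choices, rewards)):
        logits = beta * q
        p = np.exp(logits - logits.max())  # softmax choice policy
        p /= p.sum()
        probs[t] = p[c]
        q[c] += alpha * (r - q[c])         # reward prediction error update
    return probs

# Candidate programs could be scored by negative log-likelihood, e.g.:
# nll = -np.log(rpe_learner(choices, rewards)).sum()
```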
Basal Ganglia in Songbirds
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory-driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli, as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form (spike-time coincidences) amenable to biophysical mechanisms of neural information propagation.
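To make the comparison between intrinsic and extrinsic geometry concrete, here is a toy construction, not the authors' pipeline: stimuli are placed on a hypersphere, each model "cell" responds within a convex cap around a preferred direction, and the correlation between angular stimulus distances and codeword Hamming distances is measured. All numbers and the response rule are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy stimuli: points on the surface of an n-dimensional hypersphere
n_stim, n_dim, n_cells = 40, 8, 60
stimuli = rng.normal(size=(n_stim, n_dim))
stimuli /= np.linalg.norm(stimuli, axis=1, keepdims=True)

# Toy "receptive fields": each cell fires when a stimulus falls in a
# convex cap around its preferred direction (an assumed response model)
prefs = rng.normal(size=(n_cells, n_dim))
prefs /= np.linalg.norm(prefs, axis=1, keepdims=True)
codewords = (stimuli @ prefs.T > 0.3).astype(int)   # binary population code

# Extrinsic geometry: angular distance between stimuli
stim_dist = np.arccos(np.clip(stimuli @ stimuli.T, -1, 1))
# Intrinsic geometry: Hamming distance between codewords
code_dist = (codewords[:, None, :] != codewords[None, :, :]).sum(-1)

iu = np.triu_indices(n_stim, k=1)
rho, _ = spearmanr(stim_dist[iu], code_dist[iu])
print(f"Spearman correlation, stimulus vs. codeword geometry: {rho:.2f}")
```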
Recurrent brainstem-forebrain loops in the control of vocal production in songbirds
The neural mechanisms for song evaluation in fruit flies
How does the brain decode the meaning of sound signals, such as music and courtship songs? We believe that the fruit fly Drosophila melanogaster is an ideal model for answering this question, as it offers a comprehensive range of tools and assays that allow us to dissect the mechanisms underlying sound perception and evaluation in the brain. During courtship, male fruit flies emit “courtship songs” by vibrating their wings. Interestingly, the fly song has a species-specific rhythm, which increases both the female’s receptivity to copulation and the male’s courtship behavior itself. How are song signals, especially the species-specific rhythm, evaluated in the fly brain? To tackle this question, we are systematically exploring the features of the fly auditory system. In this lecture, I will present our recent findings on the neural basis of song evaluation in fruit flies.
Measuring behavior to measure the brain
Animals produce behavior by responding to a mixture of cues that arise both externally (sensory) and internally (neural dynamics and states). These cues are continuously produced and can be combined in different ways depending on the needs of the animal. However, the integration of these external and internal cues remains difficult to understand in natural behaviors. To address this gap, we have developed an unsupervised method to identify internal states from behavioral data, and have applied it to the study of a dynamic social interaction. During courtship, Drosophila melanogaster males pattern their songs using cues from their partner. This sensory-driven behavior dynamically modulates courtship directed at their partner. We use our unsupervised method to identify how the animal integrates sensory information into distinct underlying states. We then use this to identify the role of courtship neurons in either integrating incoming information or directing the production of the song, roles that were previously hidden. Our results reveal how animals compose behavior from previously unidentified internal states, a necessary step for quantitative descriptions of animal behavior that link environmental cues, internal needs, neuronal activity, and motor outputs.
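For readers unfamiliar with this class of analysis, below is a minimal state-segmentation sketch using a plain Gaussian hidden Markov model from hmmlearn. This is a generic stand-in, not the authors' method: their approach additionally captures how sensory cues from the partner drive the underlying states, which a plain HMM does not. The feature array here is a placeholder.

```python
import numpy as np
from hmmlearn import hmm

# Assumed input: a (n_timepoints, n_features) array of per-frame behavioral
# measurements (e.g., song features, distance to the partner).
rng = np.random.default_rng(1)
features = rng.normal(size=(5000, 4))          # placeholder data

# Generic 3-state Gaussian HMM; the number of states is an assumption.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
model.fit(features)
states = model.predict(features)               # inferred internal state per frame
```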
Cellular homologies of birdsong control circuits
Variability, maintenance and learning in birdsong
The songbird zebra finch is an exemplary model system in which to study trial-and-error learning, as the bird learns its single song gradually through the production of many noisy renditions. It is also a good system in which to study the maintenance of motor skills, as the adult bird actively maintains its song and retains some residual plasticity. Motor learning occurs through the association of timing within the song, represented by sparse firing in nucleus HVC, with motor output, driven by nucleus RA. Here we show through modeling that the small level of variability observed in HVC can yield a network that adapts more readily to change and is more robust to cell damage or death than an unperturbed network. In collaboration with Carlos Lois’ lab, we also consider the effect of directly perturbing HVC through viral injection of toxins that affect the firing of projection neurons. Following these perturbations, the song is profoundly affected but recovers almost perfectly. We characterize the changes in song acoustics and syntax, and propose models for HVC architecture and plasticity that can account for some of the observed effects. Finally, we suggest a potential role for inputs from nucleus Uva in helping to control timing precision in HVC.
CrossTalk Event with Susan Rogers & Ed Robertson
Join us for a conversation on the neuroscience of music, composition, and songwriting!
Brain Awareness Week by IIT Gandhinagar
Brain Awareness Week, hosted by the Centre for Cognitive and Brain Sciences, IIT Gandhinagar, spans seven days and invites you to a series of talks, panel discussions, competitions, and workshops on topics ranging from 'Using songbirds to understand how the brain initiates movements' to 'Cognitive Science and UX in Game Design', with speakers from prestigious Indian and international institutes. Explore the marvels of the brain by joining us on 15th March. Registration is free.
Assembly of the neocortex
The symposium will start with Prof Song-Hai Shi, who will present “Assembly of the neocortex”. Then, Dr Lynette Lim will talk about “Shared and Unique Developmental Trajectories of Cortical Inhibitory Neurons”. Dr Alfredo Molina will discuss “Tuneable progenitor cells to build the cerebral cortex”, and Prof Tomasz Nowakowski will present “Charting the molecular 'protomap' of the human cerebral cortex using single-cell genomics”.
Low dimensional models and electrophysiological experiments to study neural dynamics in songbirds
Birdsong emerges when a set of highly interconnected brain areas manages to generate a complex output. The similarities between birdsong production and human speech have positioned songbirds as unique animal models for studying the learning and production of this complex motor skill. In this work, we developed a low-dimensional model of a neural network in which the variables are the average activities of different neural populations within the nuclei of the song system. This neural network is active during the production, perception, and learning of birdsong. We performed electrophysiological experiments to record neural activity from one of these nuclei and found that the low-dimensional model could reproduce the neural dynamics observed during the experiments. The model could also reproduce the respiratory motor patterns used to generate song. We showed that sparse activity in one of the neural nuclei could drive more complex activity downstream in the neural network. This interdisciplinary work shows how low-dimensional neural models can be a valuable tool for studying the emergence of complex motor tasks.
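A minimal sketch of this kind of low-dimensional population-rate model is shown below, assuming a generic sigmoidal rate equation, placeholder connectivity, and Euler integration; the published model couples the actual song-system nuclei with fitted parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rate_model(T=2.0, dt=1e-3, tau=0.02):
    """Two-population rate model (average excitatory/inhibitory activities).

    Illustrative only: connectivity, drive, and time constants are
    placeholders, not the parameters of the published song-system model.
    """
    n_steps = int(T / dt)
    x = np.zeros(2)                         # population-average activities
    W = np.array([[10.0, -8.0],             # E->E, I->E
                  [ 9.0,  0.0]])            # E->I, I->I
    drive = np.array([1.5, 0.0])            # external (e.g., sparse) input
    trace = np.empty((n_steps, 2))
    for t in range(n_steps):
        x += dt / tau * (-x + sigmoid(W @ x + drive))   # Euler step
        trace[t] = x
    return trace

activity = rate_model()
```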
A journey through connectomics: from manual tracing to the first fully automated basal ganglia connectomes
The "mind of the worm", the first electron microscopy-based connectome of C. elegans, was an early sign of where connectomics is headed, followed by a long time of little progress in a field held back by the immense manual effort required for data acquisition and analysis. This changed over the last few years with several technological breakthroughs, which allowed increases in data set sizes by several orders of magnitude. Brain tissue can now be imaged in 3D up to a millimeter in size at nanometer resolution, revealing tissue features from synapses to the mitochondria of all contained cells. These breakthroughs in acquisition technology were paralleled by a revolution in deep-learning segmentation techniques, that equally reduced manual analysis times by several orders of magnitude, to the point where fully automated reconstructions are becoming useful. Taken together, this gives neuroscientists now access to the first wiring diagrams of thousands of automatically reconstructed neurons connected by millions of synapses, just one line of program code away. In this talk, I will cover these developments by describing the past few years' technological breakthroughs and discuss remaining challenges. Finally, I will show the potential of automated connectomics for neuroscience by demonstrating how hypotheses in reinforcement learning can now be tackled through virtual experiments in synaptic wiring diagrams of the songbird basal ganglia.
Male songbirds turn off their self-evaluation systems when they sing to females
Attending to mistakes while practicing alone provides opportunities for learning, but self-evaluation during audience-directed performance could distract from ongoing execution. It remains unknown how animals switch between practice and performance modes, and how evaluation systems process errors across distinct performance contexts. We recorded from striatal-projecting dopamine (DA) neurons as male songbirds transitioned from singing alone to singing female-directed courtship song. In the presence of the female, singing-related performance error signals were reduced or gated off, and DA neurons were instead phasically activated by female vocalizations. Mesostriatal DA neurons can thus dynamically change their tuning with changes in social context.
Motor Cortical Control of Vocal Interactions in a Neotropical Singing Mouse
Using sounds for social interactions is common across many taxa. Humans engaged in conversation, for example, take rapid turns to go back and forth. This ability to act upon sensory information to generate a desired motor output is a fundamental feature of animal behavior. How the brain enables such flexible sensorimotor transformations, for example during vocal interactions, is a central question in neuroscience. Seeking a rodent model to fill this niche, we are investigating the neural mechanisms of vocal interaction in Alston’s singing mouse (Scotinomys teguina), a neotropical rodent native to the cloud forests of Central America. We discovered sub-second temporal coordination of advertisement songs (counter-singing) between males of this species, a behavior that requires the rapid modification of motor outputs in response to auditory cues. We leveraged this natural behavior to probe the neural mechanisms that enable fast and flexible vocal communication. Using causal manipulations, we recently showed that an orofacial motor cortical area (OMC) in this rodent is required for vocal interactions (Okobi*, Banerjee*, et al., 2019). Subsequently, in electrophysiological recordings, I find neurons in OMC that track the initiation, termination, and relative timing of songs. Interestingly, the persistent neural dynamics during song progression stretch or compress on every trial to match the total song duration (Banerjee et al., in preparation). These results demonstrate robust cortical control of vocal timing in a rodent and upend the current dogma that motor cortical control of vocal output is evolutionarily restricted to the primate lineage.
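One simple way to quantify the trial-by-trial stretching and compressing of trajectories described above is to resample each trial onto a normalized (0 to 1) song-time axis and compare variability in normalized versus absolute time. The sketch below is an assumed analysis written for illustration, not the analysis in the cited work; all names and parameters are hypothetical.

```python
import numpy as np

def warp_to_song_fraction(rate, duration, fs, n_points=100):
    """Resample one trial's firing-rate trace onto normalized song time.

    rate: 1-D array sampled at fs (Hz); duration: song length in seconds.
    Illustrative sketch only; inputs and names are assumptions.
    """
    t = np.arange(len(rate)) / fs / duration          # 0..1 song fraction
    return np.interp(np.linspace(0, 1, n_points), t, rate)

# If dynamics scale with song duration, trial-to-trial variance should be
# lower in normalized time than in absolute time, e.g.:
# warped = np.stack([warp_to_song_fraction(r, d, fs)
#                    for r, d in zip(rates, durations)])
# print(warped.var(axis=0).mean())
```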
Neural coding in the auditory cortex - "Emergent Scientists Seminar Series"
Dr Jennifer Lawlor
Title: Tracking changes in complex auditory scenes along the cortical pathway
Complex acoustic environments, such as a busy street, are characterised by their ever-changing dynamics. Despite this complexity, listeners can readily tease apart relevant changes from irrelevant variations. This requires continuously tracking the appropriate sensory evidence while discarding noisy acoustic variations. Despite the apparent simplicity of this perceptual phenomenon, the neural basis of the extraction of relevant information from complex continuous streams for goal-directed behavior is currently not well understood. As a minimalistic model for change detection in complex auditory environments, we designed broad-range tone clouds whose first-order statistics change at a random time. Subjects (humans or ferrets) were trained to detect these changes. They were faced with the dual task of estimating the baseline statistics and detecting a potential change in those statistics at any moment. To characterize the extraction and encoding of relevant sensory information along the cortical hierarchy, we first recorded the brain electrical activity of human subjects engaged in this task using electroencephalography. Human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found at a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. To further this investigation, we performed a series of electrophysiological recordings in the primary auditory cortex (A1), secondary auditory cortex (PEG), and frontal cortex (FC) of the fully trained, behaving ferret. A1 neurons exhibited strong onset responses and change-related discharges specific to neuronal tuning. The PEG population showed reduced onset-related responses but more categorical change-related modulations. Finally, a subset of FC neurons (dlPFC/premotor) presented a generalized response to all change-related events only during behavior. Using a generalized linear model (GLM), we show that the same subpopulation in FC encodes sensory and decision signals, suggesting that FC neurons could perform the conversion of sensory evidence into a perceptual decision. Altogether, these area-specific responses suggest a behavior-dependent mechanism of sensory extraction and generalization of task-relevant events.
Aleksandar Ivanov
Title: How does the auditory system adapt to different environments: A song of echoes and adaptation
Neural control of vocal interactions in songbirds
During conversations, we rapidly switch between listening and speaking, which often requires withholding or delaying our speech in order to hear others and avoid overlap. This capacity for vocal turn-taking is exhibited by non-linguistic species as well; however, the neural circuit mechanisms that enable us to regulate the precise timing of our vocalizations during interactions are unknown. To identify the neural mechanisms underlying the coordination of vocal interactions, we paired zebra finches with a vocal robot (1 Hz call playback) and measured each bird’s call response times. We found that individual birds called with a stereotyped delay with respect to the robot call. Pharmacological inactivation of the premotor nucleus HVC revealed its necessity for the temporal coordination of calls. We further investigated the contributing neural activity within HVC by performing intracellular recordings from premotor neurons and inhibitory interneurons in calling zebra finches. We found that inhibition precedes excitation before and during call onset. To test whether inhibition guides call timing, we pharmacologically limited the impact of inhibition on premotor neurons. As a result, zebra finches converged on a similar delay time, i.e. birds called more rapidly after the vocal robot call, suggesting that HVC inhibitory interneurons regulate the coordination of social contact calls. In addition, we aim to investigate the vocal turn-taking capabilities of the common nightingale. Male nightingales learn over 100 different song motifs, which they use to attract mates or defend territories. It has previously been shown that nightingales counter-sing with each other, following a temporal structure similar to human vocal turn-taking. These animals are also able to spontaneously imitate a motif of another nightingale. The neural mechanisms underlying this behaviour are not yet understood. In my lab, we further probe the capabilities of these animals in order to assess the dynamic range of their vocal turn-taking flexibility.
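A minimal sketch of the latency measure implied above (the delay from each robot call to the bird's next call) is given below. The event-time variable names are assumptions, and this is not necessarily the authors' exact analysis.

```python
import numpy as np

def call_latencies(robot_call_times, bird_call_times):
    """Delay from each robot call to the bird's next call (seconds).

    Assumes event times are in seconds; robot calls with no subsequent
    bird call are ignored. Illustrative only.
    """
    bird = np.sort(np.asarray(bird_call_times))
    latencies = []
    for t in robot_call_times:
        idx = np.searchsorted(bird, t)     # first bird call at or after t
        if idx < len(bird):
            latencies.append(bird[idx] - t)
    return np.array(latencies)

# A stereotyped delay would appear as a tight latency distribution, e.g.:
# print(np.median(call_latencies(robot_times, bird_times)))
```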
Neural Decoding of Temporal Features of Zebra Finch Song
Bernstein Conference 2024
A robust machine learning pipeline for the analysis of complex nightingale songs
Bernstein Conference 2024
Contextual motor learning in birdsong reflects two distinct neural processes
COSYNE 2022
Dual pathway architecture in songbirds boosts sensorimotor learning
COSYNE 2022
Engagement of the respiratory CPG for songbird vocalizations
COSYNE 2022
Flexible circuit mechanisms for context-dependent song sequencing
COSYNE 2022
Modeling tutor-directed dynamics in zebra finch song learning
COSYNE 2022
Temporal Dynamics in an Attractor Model of the Songbird’s Premotor Nucleus.
COSYNE 2022
A new tool for automated annotation of complex birdsong reveals dynamics of canary syntax rules
COSYNE 2022
Basal ganglia-dependent expression of recent song learning in the juvenile finch
COSYNE 2023
Thalamic maintenance of a complex sequential learned behavior: birdsong
COSYNE 2023
Variable syllable context depth in Bengalese finch songs: A Bayesian sequence model
COSYNE 2023
Canary song syntax moves between order and disorder
COSYNE 2025
Decoding Temporal Features of Birdsong Through Neural Activity Analysis
COSYNE 2025
Hacking vocal learning with deep learning: flexible real-time perturbation of zebra finch song
COSYNE 2025
Non-stereotyped Neural States in Canary HVC indicate song syntax plasticity
COSYNE 2025
TweetyBERT, a self-supervised vision transformer to automate birdsong annotation
COSYNE 2025
Dynamic cortical auditory-motor neuronal projections regulate developmental song learning in zebra finches
FENS Forum 2024
Neuronal activity in avian basal ganglia-cortical loop related to birdsong acoustic variation in zebra finches
FENS Forum 2024
A novel songbird model of autism
FENS Forum 2024
A novel view on basal ganglia pathways: Insights from synaptic-resolution connectomics in songbirds
FENS Forum 2024
Resonant song recognition in crickets
FENS Forum 2024