Topic spotlight
Topic · World Wide

NYU

Discover seminars, jobs, and research tagged with NYU across World Wide.
33 curated items · 31 seminars · 2 positions
33 results
Position · Neuroscience

Eero Simoncelli, Ph.D.

Flatiron Institute, NYU
New York City, USA
Dec 5, 2025

The Center for Neural Science at New York University (NYU), jointly with the Center for Computational Neuroscience (CCN) at the Flatiron Institute of the Simons Foundation, invites applications for an open-rank joint position, with a preference for junior or mid-career candidates. We seek exceptional candidates who use computational frameworks to develop concepts, models, and tools for understanding brain function. Areas of interest include sensory representation and perception, memory, decision-making, adaptation and learning, and motor control. A Ph.D. in a relevant field, such as neuroscience, engineering, physics, or applied mathematics, is required. Review of applications will begin 28 March 2021. Further information:
* Joint position: https://apply.interfolio.com/83845
* NYU Center for Neural Science: https://www.cns.nyu.edu/
* Flatiron Institute Center for Computational Neuroscience: https://www.simonsfoundation.org/flatiron/center-for-computational-neuroscience/

Seminar · Neuroscience

The multi-phase plasticity supporting the winner effect

Dayu Lin
NYU Neuroscience Institute, New York, USA
May 14, 2024

Aggression is an innate behavior across animal species. It is essential for competing for food, defending territory, securing mates, and protecting families and oneself. Since initiating an attack requires no explicit learning, the neural circuit underlying aggression is believed to be genetically and developmentally hardwired. Despite being innate, aggression is highly plastic. It is influenced by a wide variety of experiences, particularly winning and losing previous encounters. Numerous studies have shown that winning leads to an increased tendency to fight while losing leads to flight in future encounters. In the talk, I will present our recent findings regarding the neural mechanisms underlying the behavioral changes caused by winning.

Seminar · Neuroscience

A recurrent network model of planning predicts hippocampal replay and human behavior

Marcelo Mattar
NYU
Oct 19, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as "rollouts". Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by -- and in turn adaptively affect -- prefrontal dynamics.
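The core computational idea, sampling imagined action sequences from the agent's own policy and committing to an action only once an imagined trajectory looks good enough, can be illustrated with a toy example. The sketch below is an assumption-laden illustration in a 5x5 grid world (the grid, the hand-crafted goal-biased policy, and all parameter values are mine, not the authors' meta-reinforcement-learning agent).

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE, GOAL = 5, (4, 4)                         # toy 5x5 grid world, goal in one corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Move on the grid, clamping at the walls."""
    r, c = state[0] + action[0], state[1] + action[1]
    return (min(max(r, 0), SIZE - 1), min(max(c, 0), SIZE - 1))

def policy(state):
    """A noisy, goal-biased policy standing in for the agent's learned policy."""
    scores = np.array([-abs(GOAL[0] - step(state, a)[0]) - abs(GOAL[1] - step(state, a)[1])
                       for a in ACTIONS], dtype=float)
    expo = np.exp(scores - scores.max())
    return expo / expo.sum()

def rollout(state, horizon=10):
    """Imagine a trajectory by sampling actions from the agent's own policy."""
    first_action, current = None, state
    for _ in range(horizon):
        a = ACTIONS[rng.choice(len(ACTIONS), p=policy(current))]
        first_action = first_action or a
        current = step(current, a)
        if current == GOAL:
            return first_action, True
    return first_action, False

# "Planning": keep sampling rollouts (thinking) until one imagined trajectory
# reaches the goal, then execute its first action; repeat until the goal is reached.
state, n_rollouts = (0, 0), 0
while state != GOAL:
    action, success = rollout(state)
    n_rollouts += 1
    if success:
        state = step(state, action)
print(f"reached the goal after sampling {n_rollouts} imagined rollouts")
```

In the actual model the decision of whether to roll out at all is itself learned by the recurrent network; here the agent simply thinks until an imagined path succeeds, which is enough to show how a variable number of rollouts translates into variable thinking times.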

Seminar · Neuroscience

Learning through the eyes and ears of a child

Brenden Lake
NYU
Apr 20, 2023

Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.

Seminar · Neuroscience · Recording

Private oxytocin supply and its receptors in the hypothalamus for social avoidance learning

Takuya Osakada
NYU
Jan 30, 2023

Many animals live in complex social groups. To survive, it is essential to know who to avoid and who to interact with. Although naïve mice are naturally attracted to any adult conspecifics, a single defeat experience can elicit social avoidance towards the aggressor for days. The neural mechanisms underlying the behavioral switch from social approach to social avoidance remain incompletely understood. Here, we identify oxytocin neurons in the retrochiasmatic supraoptic nucleus (SOROXT) and oxytocin receptor (OXTR) expressing cells in the anterior subdivision of the ventromedial hypothalamus, ventrolateral part (aVMHvlOXTR) as a key circuit motif for defeat-induced social avoidance learning. After defeat, aVMHvlOXTR cells drastically increase their responses to aggressor cues. This response change is functionally important, as optogenetic activation of aVMHvlOXTR cells elicits time-locked social avoidance towards a benign social target, whereas inactivating the cells suppresses defeat-induced social avoidance. Furthermore, OXTR in the aVMHvl is itself essential for the behavior change. Knocking out OXTR in the aVMHvl or antagonizing the receptor during defeat, but not during post-defeat social interaction, impairs defeat-induced social avoidance. aVMHvlOXTR receives its private supply of oxytocin from SOROXT cells. SOROXT is highly activated by the noxious somatosensory inputs associated with defeat. Oxytocin released from SOROXT depolarizes aVMHvlOXTR cells and facilitates their synaptic potentiation, and hence increases aVMHvlOXTR cell responses to aggressor cues. Ablating SOROXT cells impairs defeat-induced social avoidance learning, whereas activating the cells promotes social avoidance after a subthreshold defeat experience. Altogether, our study reveals an essential role of the SOROXT-aVMHvlOXTR circuit in defeat-induced social learning and highlights the importance of the hypothalamic oxytocin system in social ranking and its plasticity.

Seminar · Neuroscience

Love, death, and oxytocin: the challenges of mouse maternal care

Robert C. Froemke
Departments of Otolaryngology, Neuroscience & Physiology, Neuroscience Institute, Pain Research Center, NYU Grossman School of Medicine, USA
Jan 25, 2023
Seminar · Neuroscience

Analyzing artificial neural networks to understand the brain

Grace Lindsay
NYU
Dec 15, 2022

In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.

Seminar · Neuroscience · Recording

Connecting performance benefits on visual tasks to neural mechanisms using convolutional neural networks

Grace Lindsay
New York University (NYU)
Dec 6, 2022

Behavioral studies have demonstrated that certain task features reliably enhance classification performance for challenging visual stimuli. These include extended image presentation time and the valid cueing of attention. Here, I will show how convolutional neural networks can be used as a model of the visual system that connects neural activity changes with such performance changes. Specifically, I will discuss how different anatomical forms of recurrence can account for better classification of noisy and degraded images with extended processing time. I will then show how experimentally-observed neural activity changes associated with feature attention lead to observed performance changes on detection tasks. I will also discuss the implications these results have for how we identify the neural mechanisms and architectures important for behavior.

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
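As a toy illustration of why trial-to-trial structure matters (my own sketch, not the lab's new framework), the snippet below simulates two "subjects" whose mean responses to each stimulus are identical but whose noise-correlation geometry differs; a comparison of trial-averaged responses cannot distinguish them, while the trial-by-trial covariance can. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_trials = 5, 200
means = rng.standard_normal((n_stim, 2))               # mean responses shared by both subjects

def simulate(noise_corr):
    """Responses = shared means + correlated trial-to-trial noise (2 neurons)."""
    cov = np.array([[1.0, noise_corr], [noise_corr, 1.0]])
    L = np.linalg.cholesky(cov)
    noise = rng.standard_normal((n_stim, n_trials, 2)) @ L.T
    return means[:, None, :] + 0.5 * noise              # shape: (stimuli, trials, neurons)

subj_a = simulate(+0.8)                                 # positively correlated noise
subj_b = simulate(-0.8)                                 # negatively correlated noise

# Trial-averaged responses look the same across subjects...
mean_similarity = np.corrcoef(subj_a.mean(axis=1).ravel(),
                              subj_b.mean(axis=1).ravel())[0, 1]
# ...but the trial-to-trial noise correlations reveal different geometry.
noise_corr_a = np.mean([np.corrcoef(s.T)[0, 1] for s in subj_a])
noise_corr_b = np.mean([np.corrcoef(s.T)[0, 1] for s in subj_b])

print(f"similarity of trial-averaged responses: {mean_similarity:.2f}")   # close to 1
print(f"noise correlation, subject A: {noise_corr_a:+.2f}")               # ~ +0.8
print(f"noise correlation, subject B: {noise_corr_b:+.2f}")               # ~ -0.8
```

A mean-based comparison such as representational similarity analysis would judge these two subjects identical; the trial-to-trial covariance structure is the kind of feature the framework described in the talk is designed to capture.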

Seminar · Neuroscience · Recording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 20, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
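To make the modeling framework concrete, here is a minimal reduced sketch (a generic two-population winner-take-all rate model with parameters I chose for illustration, not the fitted model from the talk): two units excite themselves and inhibit each other, the first to cross a threshold sets the choice and response time, and the activity gap between the two populations at that moment serves as a simple confidence signal.

```python
import numpy as np

def attractor_trial(coherence, rng, dt=1e-3, tau=0.02, w_self=2.0, w_inh=2.2,
                    noise=0.4, threshold=1.0, t_max=2.0):
    """One decision trial of a two-population mutual-inhibition rate model."""
    r = np.zeros(2)                                        # firing rates of the two pools
    inputs = np.array([0.5 + coherence, 0.5 - coherence])  # sensory drive biased by the stimulus
    for step in range(int(t_max / dt)):
        drive = inputs + w_self * r - w_inh * r[::-1]      # self-excitation and mutual inhibition
        r += dt / tau * (-r + np.maximum(drive, 0.0)) \
             + noise * np.sqrt(dt) * rng.standard_normal(2)
        r = np.maximum(r, 0.0)
        if r.max() > threshold:                            # threshold crossing = decision
            choice = int(np.argmax(r))
            rt = (step + 1) * dt
            confidence = r[choice] - r[1 - choice]         # simple neural confidence readout
            return choice, rt, confidence
    return None, t_max, r[0] - r[1]                        # no decision within the trial

rng = np.random.default_rng(1)
trials = [attractor_trial(coherence=0.1, rng=rng) for _ in range(500)]
decided = [(c, rt, conf) for c, rt, conf in trials if c is not None]
accuracy = np.mean([c == 0 for c, _, _ in decided])        # population 0 gets the stronger drive
print(f"accuracy ~ {accuracy:.2f}, mean RT ~ {np.mean([rt for _, rt, _ in decided]):.3f} s, "
      f"mean confidence ~ {np.mean([conf for _, _, conf in decided]):.2f}")
```

In the full attractor network, activity does not completely reset between trials, which is how this model class can produce the sequential effects described above; the per-trial sketch here resets the state and omits that ingredient.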

Seminar · Neuroscience

Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties

SueYeon Chung
NYU/Flatiron
Sep 15, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in the work of ours and others, ranging from the visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
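As a rough illustration of the capacity idea (my own brute-force proxy, not the analytic theory described in the talk): generate P point-cloud "manifolds" in an N-dimensional response space, assign random binary labels to whole manifolds, and ask how often a linear classifier can separate them. The fraction of separable random dichotomies falls as the load P/N grows, which is the quantity the manifold-capacity theory characterizes analytically from geometric properties such as manifold radius and dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def separable(manifolds, labels, epochs=100, lr=0.1):
    """Crude linear-separability check with a perceptron over all manifold points."""
    X = np.vstack(manifolds)
    y = np.repeat(labels, [m.shape[0] for m in manifolds])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:        # misclassified point: perceptron update
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:
            return True
    return False

N = 50                                        # ambient dimension ("number of neurons")
for P in (20, 60, 120):                       # number of object manifolds
    centers = rng.standard_normal((P, N))
    manifolds = [c + 0.2 * rng.standard_normal((5, N)) for c in centers]   # 5 points per manifold
    dichotomies = [separable(manifolds, rng.choice([-1.0, 1.0], size=P)) for _ in range(10)]
    print(f"load P/N = {P / N:.1f}: fraction of separable random dichotomies ~ {np.mean(dichotomies):.2f}")
```

The analytic theory replaces this brute-force counting with closed-form estimates derived from the manifolds' geometry, which is what makes the approach usable on high-dimensional experimental recordings.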

Seminar · Neuroscience · Recording

Extrinsic control and intrinsic computation in the hippocampal CA1 network

Ipshita Zutshi
Buzsáki Lab, NYU
Jul 5, 2022

A key issue in understanding circuit operations is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Several studies have lesioned or silenced inputs to area CA1 of the hippocampus - either area CA3 or the entorhinal cortex - and examined the effect on CA1 pyramidal cells. However, the types of the reported physiological impairments vary widely, primarily because simultaneous manipulations of these redundant inputs have never been performed. In this study, I combined optogenetic silencing of unilateral and bilateral mEC and of the local CA1 region with bilateral pharmacogenetic silencing of CA3, together with high spatial resolution extracellular recordings along the CA1-dentate axis. Silencing the medial entorhinal cortex largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet, the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields. In contrast to these results, unilateral mEC manipulations that were ineffective in impacting place cells during awake behavior were found to alter sharp-wave ripple sequences activated during sleep. Thus, intrinsic excitatory-inhibitory circuits within CA1 can generate neuronal assemblies in the absence of external inputs, although external synaptic inputs are critical to reconfigure (remap) neuronal assemblies in a brain-state dependent manner.

Seminar · Neuroscience

Attention in Psychology, Neuroscience, and Machine Learning

Grace Lindsay
NYU
Jun 14, 2022
Seminar · Neuroscience

Extrinsic control and autonomous computation in the hippocampal CA1 circuit

Ipshita Zutshi
NYU
Apr 26, 2022

In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs – thus, entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to prove this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflicts is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I used optogenetic silencing of unilateral and bilateral mEC and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high spatial resolution recording of local field potentials (LFP) along the CA1-dentate axis and simultaneously collected firing pattern data from thousands of single neurons. Each experimental animal had up to two of these manipulations performed simultaneously. Silencing the medial entorhinal cortex (mEC) largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet, the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and assist in maintaining stability of the CA1 map.

Seminar · Neuroscience

How Attention Shapes Perception

Marisa Carrasco
NYU
Apr 25, 2022
Seminar · Neuroscience · Recording

The Learning Salon

György Buzsáki
NYU
Jan 27, 2022

In the Learning Salon, we will discuss the similarities and differences between biological and machine learning, bringing together individuals with diverse perspectives and backgrounds so we can all learn from one another.

Seminar · Neuroscience · Recording

Structure, Function, and Learning in Distributed Neuronal Networks

SueYeon Chung
Flatiron Institute/NYU
Jan 25, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.

Seminar · Neuroscience

The bounded rationality of probability distortion

Laurence T Maloney
NYU
Nov 9, 2021

In decision-making under risk (DMR), participants' choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving judgments of relative frequency (JRF). These distortions limit performance in a wide variety of tasks, and an evident question is why we systematically fail in our use of probability and relative frequency information. We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale; (2) boundedness: the range of representations of probability and relative frequency is bounded, and the bounds change dynamically with task; and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved to be superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants’ data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants’ choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
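The log-odds machinery behind assumptions (1) and (2) can be sketched in a few lines. The snippet below is an illustration with made-up parameter values, not the fitted BLO model, and it omits the dynamic bound-setting and variance-compensation components: an objective probability is mapped to log-odds, linearly transformed toward an anchor, clipped to a bounded internal range, and mapped back, producing the familiar inverse-S-shaped distortion curve.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def inv_log_odds(lo):
    return 1.0 / (1.0 + np.exp(-lo))

def distorted_probability(p, gamma=0.6, p0=0.4, bound=2.5):
    """Linear-in-log-odds distortion with a hard bound on the internal scale (toy parameters)."""
    lo = gamma * log_odds(p) + (1.0 - gamma) * log_odds(p0)   # compress toward the anchor p0
    lo = np.clip(lo, -bound, bound)                           # bounded internal representation
    return inv_log_odds(lo)

p = np.linspace(0.01, 0.99, 9)
for pi, wi in zip(p, distorted_probability(p)):
    print(f"objective p = {pi:.2f}  ->  subjective w(p) = {wi:.2f}")
```

Assumption (3), variance compensation, would additionally adjust the slope and bounds according to the reliability of the probability or frequency estimate; it is left out of this sketch.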

Seminar · Neuroscience

The generation of neural diversity

Claude Desplan
New York University
May 6, 2021

Claude Desplan is a Silver Professor of Biology and Neuroscience at NYU. He was born in Algeria and was trained at Ecole Normale Supérieure St. Cloud, France. He received his DSc at INSERM in Paris in 1983 and joined Pat O’Farrell at UCSF as a postdoc. There he demonstrated that the homeodomain, a conserved signature of many developmental genes, is a DNA binding motif. Currently, Dr. Desplan works at NYU where he investigates the generation of neural diversity using the Drosophila visual system.

Seminar · Neuroscience · Recording

A metabolic function of the hippocampal sharp wave-ripple

David Tingley
Buzsaki lab, NYU Neuroscience Institute
Apr 20, 2021

The hippocampal formation has been implicated in both cognitive functions and the sensing and control of endocrine states. To identify a candidate activity pattern that may link such disparate functions, we simultaneously measured electrophysiological activity from the hippocampus and interstitial glucose concentrations in the body of freely behaving rats. We found that clusters of sharp wave-ripples (SPW-Rs) recorded from both dorsal and ventral hippocampus reliably predicted a decrease in peripheral glucose concentrations within ~10 minutes. This correlation was less dependent on circadian, ultradian, and meal-triggered fluctuations; it could be mimicked with optogenetically induced ripples and was attenuated by pharmacogenetically suppressing activity of the lateral septum, the major conduit between the hippocampus and subcortical structures. Our findings demonstrate that a novel function of the SPW-R is to modulate peripheral glucose homeostasis, and they offer a mechanism for the link between sleep disruption and blood glucose dysregulation seen in type 2 diabetes and obesity.
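The analysis logic, relating a discrete hippocampal event rate to a slow peripheral signal at a lag of several minutes, can be sketched on synthetic data. This is my illustration of the idea, not the authors' pipeline; the event rates, the glucose trace, and the built-in 10-minute lag are all simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt_min, hours = 1.0, 12                                   # 1-minute bins, 12-hour session
n = int(hours * 60 / dt_min)
ripple_rate = rng.poisson(2.0, n).astype(float)           # synthetic SPW-R events per minute

lag_min, dip = 10, -0.05                                  # glucose dips ~10 min after ripples
glucose = 100 + np.cumsum(0.05 * rng.standard_normal(n))  # slowly drifting baseline (mg/dL)
glucose[lag_min:] += dip * ripple_rate[:-lag_min]         # each ripple nudges glucose down later

# cross-correlate ripple rate with subsequent minute-to-minute glucose changes
d_glucose = np.diff(glucose, prepend=glucose[0])
lags = np.arange(0, 31)
xcorr = [np.corrcoef(ripple_rate[: n - L], d_glucose[L:])[0, 1] for L in lags]
best = lags[int(np.argmin(xcorr))]
print(f"most negative ripple-glucose correlation at lag ~ {best} min")   # expect ~10 min
```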

Seminar · Neuroscience · Recording

Hebbian learning, its inference, and brain oscillation

Sukbin Lim
NYU Shanghai
Mar 23, 2021

Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered to be computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in monkey data and provided strong validation of the asymmetric changes of feedforward and recurrent synaptic strengths inferred from monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
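For reference, the sketch below shows the kind of local, unsupervised weight update at issue (a generic textbook-style Hebbian rule with arbitrary parameters, not the specific asymmetric rule inferred from the monkey data): repeated presentation of one stimulus pattern reshapes the recurrent weights along that pattern, so the circuit's response to the now-familiar stimulus differs from its response to a novel one, without any labels or error signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
W = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)    # recurrent weights (toy initialization)
familiar = rng.standard_normal(n)
familiar /= np.linalg.norm(familiar)

eta, decay = 0.05, 0.01
for _ in range(50):                                    # repeated exposures to one stimulus
    pre = familiar
    post = np.tanh(W @ pre)                            # circuit response to the stimulus
    W += eta * np.outer(post, pre) - decay * W         # Hebbian update with weight decay

novel = rng.standard_normal(n)
novel /= np.linalg.norm(novel)
print("response norm, familiar stimulus:", np.linalg.norm(np.tanh(W @ familiar)).round(3))
print("response norm, novel stimulus:   ", np.linalg.norm(np.tanh(W @ novel)).round(3))
```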

Seminar · Neuroscience · Recording

A domain-general dynamic framework for social perception

Jon Freeman
NYU
Mar 11, 2021

Initial social perceptions are often thought to reflect direct “read outs” of facial features. Instead, we outline a perspective whereby initial perceptions emerge from an automatic yet gradual process of negotiation between the perceptual cues inherent to a person (e.g., facial cues) and top-down social cognitive processes harbored within perceivers. This perspective argues that perceivers’ social-conceptual knowledge in particular can have a fundamental structuring role in perceptions, and thus how we think about social groups, emotions, or personality traits helps determine how we visually perceive them in other people. Integrative evidence from real-time behavioral paradigms (e.g., mouse-tracking), multivariate fMRI, and computational modeling will be discussed. Together, this work shows that the way we use facial cues to categorize other people into social groups (e.g., gender, race), perceive their emotion (e.g., anger), or infer their personality (e.g., trustworthiness) are all fundamentally shaped by prior social-conceptual knowledge and stereotypical assumptions. We find that these top-down impacts on initial perceptions are driven by the interplay of higher-order prefrontal regions involved in top-down predictions and lower-level fusiform regions involved in face processing. We argue that the perception of social categories, emotions, and traits from faces can all be conceived as resulting from an integrated system relying on domain-general cognitive properties. In this system, both visual and social cognitive processes are in a close exchange, and initial social perceptions emerge in part out of the structure of social-conceptual knowledge.

Seminar · Neuroscience · Recording

Human color perception and double-opponent cells in V1 cortex

Bob Shapley
NYU
Feb 8, 2021
Seminar · Physics of Life · Recording

Untitled Seminar

Christine Vogel
NYU
Jan 18, 2021
Seminar · Neuroscience · Recording

Ways to think about the brain

Gyorgy Buzsaki
NYU Neuroscience Institute, Langone Medical Center
Dec 16, 2020

Historically, research on the brain has been working its way in from the outside world, hoping that such systematic exploration will take us some day to the middle and on through the middle to the output. Ever since the time of Aristotle, philosophers and scientists have assumed that the brain (or, more precisely, the mind) is initially a blank slate filled up gradually with experience in an outside-in manner. An alternative, brain-centric view, the one I am promoting, is that self-organized brain networks induce a vast repertoire of preformed neuronal patterns. While interacting with the world, some of these initially ‘nonsensical’ patterns acquire behavioral significance or meaning. Thus, experience is primarily a process of matching preexisting neuronal dynamics to events in the world. I suggest that this perpetually active internal dynamic is the source of cognition, a neuronal operation disengaged from the immediate senses.

Seminar · Neuroscience · Recording

Retrieval spikes: a dendritic mechanism for retrieval-dependent memory consolidation

Erez Geron
NYU
Dec 15, 2020
Seminar · Neuroscience

Neural mechanisms of aggression

Dayu Lin
NYU
Dec 1, 2020

Aggression is an innate social behavior essential for competing for resources, securing mates, defending territory and protecting the safety of oneself and family. In the last decade, significant progress has been made towards an understanding of the neural circuit underlying aggression using a set of modern neuroscience tools. Here, I will talk about the history and recent progress in the study of aggression.

Seminar · Neuroscience · Recording

The 3 Cs: Collaborating to Crack Consciousness

Lucia Melloni
NYU
Nov 12, 2020

Every day when we fall asleep we lose consciousness; we are not there. And then, every morning, when we wake up, we regain it. What mechanisms give rise to consciousness, and how can we explain consciousness in the realm of the physical world of atoms and matter? For centuries, philosophers and scientists have aimed to crack this mystery. Much progress has been made in the past decades to understand how consciousness is instantiated in the brain, yet critical questions remain: can we develop a consciousness meter? Are computers conscious? What about other animals and babies? We have embarked on a large-scale, multicenter project to test, in the context of an open-science adversarial collaboration, two of the most prominent theories: Integrated Information Theory (IIT) and Global Neuronal Workspace (GNW) theory. We are collecting over 500 datasets, including invasive and non-invasive recordings of the human brain, i.e., fMRI, MEG, and ECoG. We hope this project will enable theory-driven discoveries and further explorations that will help us better understand how consciousness fits inside the human brain.

Seminar · Neuroscience

Plasticity in hypothalamic circuits for oxytocin release

Silvana Valtcheva
NYU
Oct 20, 2020

Mammalian babies are “sensory traps” for parents. Various sensory cues from the newborn are tremendously efficient in triggering parental responses in caregivers. We recently showed that core aspects of maternal behavior, such as pup retrieval in response to infant vocalizations, rely on active learning of auditory cues from pups facilitated by the neurohormone oxytocin (OT). Release of OT from the hypothalamus might thus help induce recognition of different infant cues, but it is unknown what sensory stimuli can activate OT neurons. I performed unprecedented in vivo whole-cell and cell-attached recordings from optically identified OT neurons in awake dams. I found that OT neurons, but not other hypothalamic cells, increased their firing rate after playback of pup distress vocalizations. Using anatomical tracing approaches and channelrhodopsin-assisted circuit mapping, I identified the projections and brain areas (including the inferior colliculus, auditory cortex, and posterior intralaminar thalamus) relaying auditory information about social sounds to OT neurons. In hypothalamic brain slices, when optogenetically stimulating thalamic afferents to mimic the high-frequency thalamic discharge observed in vivo during pup call playback, I found that thalamic activity led to long-term depression of synaptic inhibition in OT neurons. This was mediated by postsynaptic NMDAR-induced internalization of GABAA receptors. Therefore, persistent activation of OT neurons following pup calls in vivo is likely mediated by disinhibition. This gain modulation of OT neurons by infant cries may be important for sustaining motivation. Using a genetically encoded OT sensor, I demonstrated that pup calls were efficient in triggering OT release in downstream motivational areas. When thalamic projections to the hypothalamus were inhibited with chemogenetics, dams exhibited longer latencies to retrieve crying pups, suggesting that the thalamus-hypothalamus noncanonical auditory pathway may be a specific circuit for the detection of social sounds, important for disinhibiting OT neurons, gating OT release in downstream brain areas, and speeding up maternal behavior.

Seminar · Neuroscience

Motor Cortical Control of Vocal Interactions in a Neotropical Singing Mouse

Arkarup Banerjee
NYU Langone Medical Center
Sep 8, 2020

Using sounds for social interactions is common across many taxa. Humans engaged in conversation, for example, take rapid turns to go back and forth. This ability to act upon sensory information to generate a desired motor output is a fundamental feature of animal behavior. How the brain enables such flexible sensorimotor transformations, for example during vocal interactions, is a central question in neuroscience. Seeking a rodent model to fill this niche, we are investigating neural mechanisms of vocal interaction in Alston’s singing mouse (Scotinomys teguina) – a neotropical rodent native to the cloud forests of Central America. We discovered sub-second temporal coordination of advertisement songs (counter-singing) between males of this species – a behavior that requires the rapid modification of motor outputs in response to auditory cues. We leveraged this natural behavior to probe the neural mechanisms that generate and allow fast and flexible vocal communication. Using causal manipulations, we recently showed that an orofacial motor cortical area (OMC) in this rodent is required for vocal interactions (Okobi*, Banerjee* et al., 2019). Subsequently, in electrophysiological recordings, I find neurons in OMC that track the initiation, termination, and relative timing of songs. Interestingly, persistent neural dynamics during song progression stretch or compress on every trial to match the total song duration (Banerjee et al., in preparation). These results demonstrate robust cortical control of vocal timing in a rodent and upend the current dogma that motor cortical control of vocal output is evolutionarily restricted to the primate lineage.

Seminar · Neuroscience · Recording

Brain mechanisms of visual form perception

Tony Movshon
NYU
Aug 31, 2020
Seminar · Physics of Life · Recording

Untitled Seminar

Daniel Cohen, Yelena Bernadskaya
Princeton, NYU
Aug 17, 2020