Behavioral Studies
Dr. Rudy Behnia
The Behnia Lab is seeking to hire a Postdoctoral Research Scientist to assist with studies on how our brains see the world around us. The overarching goal of the lab is to define the processing steps that transform light signals in photoreceptors into features of a visual scene, such as color or direction of motion. For a given visual feature, we aim to describe not only the underlying mathematical operations (algorithms) that govern a transformation, but also the neural circuits that implement them. We make extensive use of connectomics data, as well as the abundant genetic tools available in fruit flies, and collaborate extensively with theorists to build biologically constrained models of perception. We are also interested in understanding how different internal/environmental states or other sensory systems influence visual perception, and how multisensory representations are used for higher cognitive functions such as learning and navigation. We invite you to review our website for more details about our work: http://behnialab.neuroscience.columbia.edu. Example projects include: 1/ a multidisciplinary collaboration with the laboratory of Ashok Litwin-Kumar at the Center for Theoretical Neuroscience, aimed at defining circuit mechanisms underlying multisensory learning; 2/ investigating the role of neuromodulatory systems in color processing. Please contact Rudy Behnia directly at rb3161@columbia.edu for more details. The Behnia lab is part of Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute (the Zuckerman Institute), which brings together world-class researchers from varied scientific disciplines to explore aspects of mind and brain through the exchange of ideas and active collaboration. The Zuckerman Institute’s home, the Jerome L. Greene Science Center, is a state-of-the-art facility on Columbia’s Manhattanville campus.
Situated in the heart of Manhattan in New York City, the Zuckerman Institute houses over 50 laboratories employing a broad range of interdisciplinary approaches to transform our understanding of the mind and brain. In this highly collaborative environment, experimental, computational, and theoretical labs work together to gain critical insights into how the brain develops, performs, endures, and recovers. The Zuckerman Institute provides multiple levels of support for postdoctoral researchers (https://zuckermaninstitute.columbia.edu/postdocs). The Postdoc Program provides postdocs with an enriched research environment to advance their scientific training and support their professional growth. This includes frameworks for building a professional network of mentors and peers through the personal board of advisors, as well as leadership opportunities, workshops, and opportunities for public engagement. The Behnia lab strives to provide a supportive environment where creativity, independence, and work/life balance are valued. We are strong advocates of diversity, equality, inclusion, and belonging, and we encourage applications from candidates of diverse backgrounds. Prior experience in quantitative analysis of neuronal recordings and/or behavior, and related programming skills (Python, version control, databases), are required. Prior expertise with in vivo imaging, electrophysiological recordings, or behavioral studies, as well as strong motivation, drive, and a demonstrated aptitude for carrying out independent research, are highly desirable qualifications. In addition, the ideal candidate would seek to work in a highly diverse and collaborative environment.
Connecting performance benefits on visual tasks to neural mechanisms using convolutional neural networks
Behavioral studies have demonstrated that certain task features reliably enhance classification performance for challenging visual stimuli. These include extended image presentation time and the valid cueing of attention. Here, I will show how convolutional neural networks can be used as a model of the visual system that connects neural activity changes with such performance changes. Specifically, I will discuss how different anatomical forms of recurrence can account for better classification of noisy and degraded images with extended processing time. I will then show how experimentally observed neural activity changes associated with feature attention lead to observed performance changes on detection tasks. I will also discuss the implications these results have for how we identify the neural mechanisms and architectures important for behavior.
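The benefit of extended processing time can be caricatured with a toy simulation (my own sketch, not the talk's networks): a recurrent unit that integrates successive noisy frames of a stimulus classifies more accurately the longer the stimulus is presented. The bar templates, noise level, and nearest-template readout below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy 8x8 stimulus templates (hypothetical stand-ins for image classes).
t0 = np.zeros((8, 8)); t0[:, 3:5] = 1.0   # vertical bar
t1 = np.zeros((8, 8)); t1[3:5, :] = 1.0   # horizontal bar
templates = [t0, t1]

def accuracy(timesteps, trials=300, noise=4.0):
    """Classify by nearest template after a recurrent unit has averaged
    `timesteps` noisy frames of the same stimulus: longer presentation
    means more accumulated evidence and a higher signal-to-noise ratio."""
    correct = 0
    for _ in range(trials):
        label = int(rng.integers(2))
        state = np.zeros((8, 8))
        for t in range(1, timesteps + 1):
            frame = templates[label] + noise * rng.standard_normal((8, 8))
            state += (frame - state) / t   # running average via recurrence
        pred = int(np.argmin([np.sum((state - tpl) ** 2) for tpl in templates]))
        correct += (pred == label)
    return correct / trials

acc_short, acc_long = accuracy(1), accuracy(20)
print(acc_short, acc_long)   # longer presentation yields higher accuracy
```

This captures only the evidence-accumulation role of recurrence; the talk compares richer, anatomically distinct recurrent architectures.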
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured in physiological conditions.
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
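The shot-noise picture can be illustrated with a Monte-Carlo sketch (my own, with invented rates and coincidence window, not the fitted model from the work): for independent pre- and postsynaptic Poisson trains, one can estimate how often the two neurons coactivate within a short window, the events that dominate the nonlinear calcium transients.

```python
import numpy as np

rng = np.random.default_rng(1)

def coincident_event_rate(rate_pre, rate_post, window=0.02, duration=500.0):
    """Monte-Carlo estimate of the rate (Hz) of post spikes that fall
    within +/- `window` seconds of a pre spike, for independent Poisson
    trains. Illustrative parameters; rates in Hz, times in seconds."""
    pre = np.sort(rng.uniform(0, duration, rng.poisson(rate_pre * duration)))
    post = rng.uniform(0, duration, rng.poisson(rate_post * duration))
    if len(pre) == 0:
        return 0.0
    # distance from each post spike to its nearest pre spike
    idx = np.clip(np.searchsorted(pre, post), 1, len(pre) - 1)
    nearest = np.minimum(np.abs(post - pre[idx - 1]), np.abs(post - pre[idx]))
    return float(np.sum(nearest < window) / duration)

# Coincidences grow roughly with the product of the two firing rates,
# so calcium transients (and hence plasticity) are driven by epochs of
# high joint activity. Analytically: r_post * (1 - exp(-2*window*r_pre)).
low = coincident_event_rate(2.0, 2.0)     # approx. 0.15 Hz
high = coincident_event_rate(20.0, 20.0)  # approx. 11 Hz
```

Temporal correlations between the trains would add coincidences beyond this independent-Poisson baseline, which is the other route to efficacy changes highlighted in the abstract.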
Computational Principles of Event Memory
Our ability to understand ongoing events depends critically on general knowledge about how different kinds of situations work (schemas), and also on recollection of specific instances of these situations that we have previously experienced (episodic memory). The consensus around this general view masks deep questions about how these two memory systems interact to support event understanding: How do we build our library of schemas, and how exactly do we use episodic memory in the service of event understanding? Given rich, continuous inputs, when do we store and retrieve episodic memory “snapshots,” and how are they organized so as to ensure that we can retrieve the right snapshots at the right time? I will develop predictions about how these processes work using memory-augmented neural networks (i.e., neural networks that learn how to use episodic memory in the service of task performance), and I will present results from relevant fMRI and behavioral studies.
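The snapshot idea can be sketched in a few lines (a hypothetical toy of my own, not the talk's fMRI-fitted models): situations are stored as key vectors, and a later noisy re-encounter retrieves the best-matching snapshot by similarity.

```python
import numpy as np

rng = np.random.default_rng(4)

class EpisodicMemory:
    """Minimal key-value episodic store: situations are saved as unit
    vectors and retrieved by cosine similarity to the current input, a
    toy stand-in for the memory module of a memory-augmented network."""
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, situation, outcome):
        self.keys.append(situation / np.linalg.norm(situation))
        self.values.append(outcome)

    def recall(self, situation):
        q = situation / np.linalg.norm(situation)
        sims = np.array([k @ q for k in self.keys])
        return self.values[int(np.argmax(sims))], float(sims.max())

mem = EpisodicMemory()
breakfast = rng.standard_normal(32)   # hypothetical situation encodings
airport = rng.standard_normal(32)
mem.store(breakfast, "kitchen script")
mem.store(airport, "security-line script")

# A noisy re-encounter with the airport situation retrieves its snapshot.
value, sim = mem.recall(airport + 0.3 * rng.standard_normal(32))
```

The open questions in the abstract are precisely about what a learned system should put in `store` and when to call `recall`, which this fixed policy glosses over.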
The neuroscience of color and what makes primates special
Among mammals, excellent color vision has evolved only in certain non-human primates. And yet, color is often assumed to be just a low-level stimulus feature with a modest role in encoding and recognizing objects. The rationale for this dogma is compelling: object recognition is excellent in grayscale images (consider black-and-white movies, where faces, places, objects, and story are readily apparent). In my talk I will discuss experiments in which we used color as a tool to uncover an organizational plan in inferior temporal cortex (parallel, multistage processing for places, faces, colors, and objects) and a visual-stimulus functional representation in prefrontal cortex (PFC). The discovery of an extensive network of color-biased domains within IT and PFC, regions implicated in high-level object vision and executive functions, compels a re-evaluation of the role of color in behavior. I will discuss behavioral studies prompted by the neurobiology that uncover a universal principle for color categorization across languages, the first systematic study of the color statistics of objects, a chromatic mechanism by which the brain may compute animacy, and a surprising, paradoxical impact of memory on face color. Taken together, my talk will put forward the argument that color is not primarily for object recognition, but rather for the assessment of the likely behavioral relevance, or meaning, of the stuff we see.
A neuronal model for learning to keep a rhythmic beat
When listening to music, we typically lock onto and move to a beat (1-6 Hz). Behavioral studies on such synchronization (Repp 2005) abound, yet the neural mechanisms remain poorly understood. Some models hypothesize an array of self-sustaining entrainable neural oscillators that resonate when forced with rhythmic stimuli (Large et al. 2010). In contrast, our formulation focuses on event time estimation and plasticity: a neuronal beat generator that adapts its intrinsic frequency and phase to match the external rhythm. The model learns new rhythms quickly, within a few cycles, as found in human behavior. When the stimulus is removed, the beat generator continues to produce the learned rhythm, in accordance with synchronization-continuation tasks.
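The frequency-and-phase adaptation idea can be sketched as a simple event-based error-correction loop (an illustrative caricature with invented gain constants, not the neuronal model itself): after each tone, the beat keeper corrects its phase by a fraction of the asynchrony and nudges its intrinsic period by another fraction.

```python
def learn_beat(tone_times, period0, alpha=0.5, beta=0.3, n_continue=4):
    """Toy beat generator: phase correction by alpha * asynchrony and
    period (frequency) adaptation by beta * asynchrony on every tone.
    Gains alpha and beta are illustrative, not fitted values."""
    period = period0
    next_beat = tone_times[0]
    asynchronies = []
    for tone in tone_times:
        asyn = next_beat - tone            # timing error to the tone
        asynchronies.append(asyn)
        period -= beta * asyn              # adapt intrinsic frequency
        next_beat += period - alpha * asyn # correct phase of next beat
    # stimulus removed: keep producing the learned rhythm (continuation)
    continuation = [next_beat + k * period for k in range(n_continue)]
    return period, asynchronies, continuation

tones = [0.5 * k for k in range(16)]       # a 2 Hz metronome
period, asyns, cont = learn_beat(tones, period0=0.65)
```

Starting 30% too slow, the loop pulls the period to the stimulus period and the asynchronies toward zero within a handful of cycles, and the continuation beats then run at the learned rhythm without input.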
Emergence of long time scales in data-driven network models of zebrafish activity
How can neural networks exhibit persistent activity on time scales much larger than allowed by cellular properties? We address this question in the context of larval zebrafish, a model vertebrate that is accessible to brain-scale neuronal recording and high-throughput behavioral studies. We study in particular the dynamics of a bilaterally distributed circuit, the so-called ARTR, comprising hundreds of neurons. The ARTR exhibits slow antiphasic alternations between its left and right subpopulations, which can be modulated by the water temperature and which drive the coordinated orientation of swim bouts, thus organizing the fish's spatial exploration. To elucidate the mechanism leading to this slow self-oscillation, we train a network graphical model (Ising) on neural recordings. Sampling the inferred model allows us to generate synthetic oscillatory activity whose features correctly capture the observed dynamics. A mean-field analysis of the inferred model reveals the existence of several phases; activated crossing of the barriers between these phases controls the long time scales present in the network oscillations. We show in particular how the barrier heights and the nature of the phases vary with the water temperature.
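The left/right structure can be illustrated with a toy two-block Ising model sampled by Gibbs dynamics (couplings invented for illustration, not values inferred from zebrafish recordings): excitatory coupling within each block and inhibitory coupling across blocks make the left-dominant and right-dominant states the stable phases, separated by a barrier.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_artr(n=20, j_intra=0.2, j_cross=-0.25, sweeps=500):
    """Gibbs-sample a two-block Ising model: n 'left' and n 'right'
    spins with ferromagnetic intra-block and antiferromagnetic
    cross-block couplings. Returns the left-right activity difference
    per sweep (illustrative parameters)."""
    s = rng.choice([-1.0, 1.0], size=2 * n)
    trace = []
    for _ in range(sweeps):
        for _ in range(2 * n):
            i = int(rng.integers(2 * n))
            same = s[:n] if i < n else s[n:]
            other = s[n:] if i < n else s[:n]
            # local field from own block (excluding self) and the other block
            field = j_intra * (same.sum() - s[i]) + j_cross * other.sum()
            s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1.0
        trace.append(s[:n].mean() - s[n:].mean())
    return np.array(trace)

trace = sample_artr()   # settles near +2 or -2: one side active, one silent
```

In this caricature, the dwell time in each side-dominant state is set by activated barrier crossing: weakening `j_intra` lowers the barrier and makes the alternations faster, the mean-field mechanism the abstract describes.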
Cholinergic regulation of learning in the olfactory system
In the olfactory system, cholinergic modulation has been associated with contrast modulation and changes in receptive fields in the olfactory bulb, as well as with the learning of odor associations in the olfactory cortex. Computational modeling and behavioral studies suggest that cholinergic modulation could improve sensory processing and learning while preventing proactive interference when task demands are high. However, how sensory inputs and/or learning regulate incoming modulation has not yet been elucidated. Here we use a computational model of the olfactory bulb, piriform cortex (PC), and horizontal limb of the diagonal band of Broca (HDB) to explore how olfactory learning could regulate cholinergic inputs to the system in a closed feedback loop. In our model, the novelty of an odor is reflected in the firing rates and sparseness of cortical neurons responding to that odor, and these firing rates can directly regulate learning in the system by modifying cholinergic inputs.
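The novelty-gated loop can be caricatured in a few lines (a sketch with invented dimensions and constants, not the study's bulb/PC/HDB model): cortical novelty of an odor drives cholinergic input, which in turn gates how fast the cortex learns a template for that odor.

```python
import numpy as np

rng = np.random.default_rng(3)

odor = rng.random(50)            # hypothetical odor input pattern
template = np.zeros(50)          # naive cortical representation
novelties = []
for trial in range(30):
    # novelty: mismatch between the odor and what cortex has learned
    novelty = float(np.linalg.norm(odor - template)) / np.sqrt(50)
    ach = novelty / (novelty + 0.1)            # cholinergic drive grows with novelty
    template += 0.3 * ach * (odor - template)  # ACh-gated learning step
    novelties.append(novelty)
# As the odor becomes familiar, novelty falls, ACh drops, and learning
# shuts itself off: a closed novelty -> ACh -> learning feedback loop.
```

Repeated presentations drive novelty monotonically toward zero, so the loop protects established representations from further overwriting, consistent with the interference-prevention role described above.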