
Timescales

Topic spotlight · Neuro

Discover seminars, jobs, and research tagged with timescales across Neuro.
34 curated items · 34 seminars
Updated about 1 year ago

Latest

34 results

Seminar · Neuroscience

Intrinsic timescales in the visual cortex change with selective attention and reflect spatial connectivity

Attempto Prize Awardee | Roxana Zeraati
IMPRS-MMFD, MPI-BC & University of Tübingen
Oct 31, 2024
Seminar · Neuroscience · Recording

Feedback control in the nervous system: from cells and circuits to behaviour

Timothy O'Leary
Department of Engineering, University of Cambridge
May 16, 2023

The nervous system is fundamentally a closed-loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single-cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher-level information about goals and intended actions is continually updated on the basis of current and past actions. Studying the brain in a closed-loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed-loop brain-machine interfaces.
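
The homeostatic feedback loops mentioned above can be illustrated with a textbook integral controller; this is a generic sketch, not the speaker's model, and every quantity in it is an illustrative stand-in.

```python
# Integral-feedback homeostasis: a sensed activity level feeds back onto a
# gain-like variable g so that activity returns to its set point after a
# transient perturbation in drive (all values illustrative).
dt, tau_g, target = 0.01, 5.0, 1.0
g = 1.0
for step in range(5000):
    drive = 1.5 if 1000 <= step < 3000 else 1.0   # transient perturbation
    activity = g * drive                          # sensed output
    g += dt / tau_g * (target - activity)         # integrate the error signal
print(f"activity after recovery: {activity:.3f} (set point {target})")
```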

Seminar · Neuroscience

Dynamic endocrine modulation of the nervous system

Emily Jacobs
UC Santa Barbara Neuroscience
Apr 18, 2023

Sex hormones are powerful neuromodulators of learning and memory. In rodents and nonhuman primates, estrogen and progesterone influence the central nervous system across a range of spatiotemporal scales. Yet, their influence on the structural and functional architecture of the human brain is largely unknown. Here, I highlight findings from a series of dense-sampling neuroimaging studies from my laboratory designed to probe the dynamic interplay between the nervous and endocrine systems. Individuals underwent brain imaging and venipuncture every 12-24 hours for 30 consecutive days. These procedures were carried out under freely cycling conditions and again under a pharmacological regimen that chronically suppresses sex hormone production. First, resting state fMRI evidence suggests that transient increases in estrogen drive robust increases in functional connectivity across the brain. Time-lagged methods from dynamical systems analysis further reveal that these transient changes in estrogen enhance within-network integration (i.e., global efficiency) in several large-scale brain networks, particularly Default Mode and Dorsal Attention Networks. Next, using high-resolution hippocampal subfield imaging, we found that intrinsic hormone fluctuations and exogenous hormone manipulations can rapidly and dynamically shape medial temporal lobe morphology. Together, these findings suggest that neuroendocrine factors influence the brain over short and protracted timescales.
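
The within-network integration metric mentioned above, global efficiency, can be computed with networkx; the thresholding pipeline below is an assumed simplification, and the connectivity matrix is a random stand-in for real fMRI data.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 30
fc = np.corrcoef(rng.normal(size=(n_regions, 200)))        # stand-in connectivity
adj = (np.abs(fc) > 0.1) & ~np.eye(n_regions, dtype=bool)  # threshold to a graph
G = nx.from_numpy_array(adj.astype(int))
print(f"global efficiency: {nx.global_efficiency(G):.3f}")
```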

Seminar · Neuroscience

Intrinsic timescales in the cortex and how to find them

Anna Levina
Mar 10, 2023

Seminar · Neuroscience

The evolution of computation in the brain: Insights from studying the retina

Tom Baden
University of Sussex (UK)
Jun 2, 2022

The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of light as the natural input to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make-up in extant species. Animals as diverse as eagles and lampreys share the same retinal make-up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retinal neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?

Seminar · Neuroscience

Synthetic and natural images unlock the power of recurrency in primary visual cortex

Andreea Lazar
Ernst Strüngmann Institute (ESI) for Neuroscience
May 20, 2022

During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images lead to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses already at the earliest stages of cortical processing.

Seminar · Neuroscience · Recording

Timescales of neural activity: their inference, control, and relevance

Anna Levina
Universität Tübingen
May 4, 2022

Timescales characterize how fast observables change in time. In neuroscience, they can be estimated from measured activity and used, for example, as a signature of the memory trace in the network. I will first discuss the inference of timescales from neuroscience data comprising short trials and introduce a new unbiased method. Then, I will apply the method to data recorded from a local population of cortical neurons in visual area V4. I will demonstrate that the ongoing spiking activity unfolds across at least two distinct timescales, fast and slow, and that the slow timescale increases when monkeys attend to the location of the receptive field. Which models can give rise to such behavior? Random balanced networks are known for their fast timescales; thus, a change in the neuron or network properties is required to mimic the data. I will propose a set of models that can control effective timescales and demonstrate that only the model with strong recurrent interactions fits the neural data. Finally, I will discuss the timescales' relevance for behavior and cortical computations.
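
The conventional estimator that such work improves on can be sketched in a few lines: compute the autocorrelation of the activity and fit an exponential decay. The talk's point is that this direct fit is biased for short trials, so treat the sketch below (all parameters illustrative, synthetic data) as the baseline approach rather than the proposed method.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_timescale(signal, dt, max_lag=300):
    """Naive timescale estimate: exponential fit to the autocorrelation."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                                   # normalize so AC(0) = 1
    lags = np.arange(len(ac)) * dt
    (tau,), _ = curve_fit(lambda t, tau: np.exp(-t / tau),
                          lags[:max_lag], ac[:max_lag], p0=[0.05])
    return tau

# Ornstein-Uhlenbeck-like activity with a known 100 ms timescale
rng = np.random.default_rng(0)
dt, tau_true, n = 0.001, 0.1, 5000
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i-1] - dt * x[i-1] / tau_true + np.sqrt(dt) * rng.normal()
print(f"estimated tau ≈ {estimate_timescale(x, dt) * 1000:.0f} ms")
```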

Seminar · Neuroscience · Recording

Transcriptional adaptation couples past experience and future sensory responses

Tatsuya Tsukahara
Datta lab, Harvard Medical School
Apr 27, 2022

Animals traversing different environments encounter both stable background stimuli and novel cues, which are generally thought to be detected by primary sensory neurons and then distinguished by downstream brain circuits. Sensory adaptation, a fundamental feature of sensory systems, is a neural mechanism that filters out background by minimizing responses to stable sensory stimuli. Adaptation over relatively fast timescales (milliseconds to minutes) has been reported in many sensory systems. However, adaptation to persistent environmental stimuli over longer timescales (hours to days) has been largely unexplored, even though those timescales are ethologically important, since animals typically stay in one environment for hours. I showed that each of the ~1,000 olfactory sensory neuron (OSN) subtypes in the mouse harbors a distinct transcriptome whose content is precisely determined by interactions between its odorant receptor and the environment. This transcriptional variation is systematically organized to support sensory adaptation: expression levels of many genes relevant to transforming odors into spikes continuously vary across OSN subtypes, dynamically adjust to new environments over hours, and accurately predict acute OSN-specific odor responses. The sensory periphery therefore separates salient signals from predictable background via a transcriptional mechanism whose moment-to-moment state reflects the past and constrains the future; these findings suggest a general model in which structured transcriptional variation within a cell type reflects individual experience.

Seminar · Neuroscience · Recording

Population coding in the cerebellum: a machine learning perspective

Reza Shadmehr
Johns Hopkins School of Medicine
Apr 6, 2022

The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus, complex spikes not only act as a teaching signal for a P-cell; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output layer neuron (DCN) that was responsible for the error, as well as the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.

Seminar · Neuroscience · Recording

Flexible motor sequence generation by thalamic control of cortical dynamics through low-rank connectivity perturbations

Laureline Logiaco
Center for Theoretical Neuroscience, Columbia University
Mar 9, 2022

One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales to efficiently shape structured behaviors. I will present a model that clarifies how these complex computations could be performed in the mammalian brain, with an emphasis on the learning of an extendable library of autonomous motor motifs and the flexible stringing of these motifs into motor sequences. To build this model, we took advantage of the fact that the anatomy of the circuits involved is well known. Our results show how these architectural constraints lead to a principled understanding of how strategically positioned plastic connections located within motif-specific thalamocortical loops can interact with cortical dynamics that are shared across motifs to create an efficient form of modularity. This occurs because the cortical dynamics can be controlled by the activation of as few as one thalamic unit, which induces a low-rank perturbation of the cortical connectivity and significantly expands the range of outputs that the network can produce. Finally, our results show that transitions between any two motifs can be facilitated by a specific thalamic population that participates in preparing cortex for the execution of the next motif. Taken together, our model sheds light on the neural network mechanisms that can generate flexible sequencing of varied motor motifs.
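
The core mechanism, one thalamic unit inducing a low-rank perturbation of cortical connectivity, can be sketched with a linear rate network; u, v, and the gain k below are illustrative stand-ins for learned loop weights, not the model's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
W = rng.normal(scale=0.9 / np.sqrt(N), size=(N, N))  # shared cortical recurrence

# Engaging one thalamic unit in a motif-specific loop adds a rank-one term
# k * u @ v.T to the effective cortical connectivity.
u = rng.normal(size=(N, 1)); u /= np.linalg.norm(u)  # thalamocortical weights
v = rng.normal(size=(N, 1)); v /= np.linalg.norm(v)  # corticothalamic weights
k = 2.0                                              # loop gain (assumed)
W_motif = W + k * u @ v.T

def simulate(W_eff, steps=200, dt=0.05):
    """Linear rate dynamics dx/dt = -x + W_eff @ x from a fixed initial state."""
    x = np.full(N, 0.1)
    for _ in range(steps):
        x = x + dt * (-x + W_eff @ x)
    return x

# Same cortical network, different output trajectory, switched by one loop.
print(np.linalg.norm(simulate(W) - simulate(W_motif)))
```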

Seminar · Neuroscience · Recording

Theory of recurrent neural networks – from parameter inference to intrinsic timescales in spiking networks

Alexander van Meegen
Forschungszentrum Jülich
Jan 13, 2022

Seminar · Neuroscience · Recording

NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks

Abdullahi Ali
Donders Institute for Brain
Dec 2, 2021

Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine-learning-powered neuroscience research.
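
The training objective can be illustrated with a small PyTorch sketch; the architecture, input sequence, and loss weighting here are assumptions for illustration, not the authors' exact setup. An RNN predicts its next input while an L2 penalty on unit activities stands in for energy consumption.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
T, hidden = 100, 64
rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
readout = nn.Linear(hidden, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()),
                       lr=1e-3)

# A predictable sine sequence: the network must output its next input.
t = torch.linspace(0, 8 * math.pi, T + 1)
seq = torch.sin(t).reshape(1, T + 1, 1)
x, target = seq[:, :-1], seq[:, 1:]

energy_weight = 1e-3    # assumed trade-off between prediction and energy
for step in range(500):
    h, _ = rnn(x)                                  # (1, T, hidden) activities
    pred = readout(h)
    loss = ((pred - target) ** 2).mean() + energy_weight * (h ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.4f}")
```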

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 2, 2021

Recently, the field of computational neuroscience has seen an explosion in the use of trained recurrent neural network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain in accounting for complex temporal dependencies in the biological single-neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - which have been shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.

Seminar · Neuroscience · Recording

NMC4 Keynote: Latent variable modeling of neural population dynamics - where do we go from here?

Chethan Pandarinath
Georgia Tech & Emory University
Dec 1, 2021

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present machine learning frameworks that enable inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales, from diverse brain areas, and without regard to behavior. I will then demonstrate extensions that allow recovery of dynamics from two-photon calcium imaging data with surprising precision. Finally, I will discuss our efforts to facilitate comparisons within our field by curating datasets and standardizing model evaluation, including a currently active modeling challenge, the 2021 Neural Latents Benchmark [neurallatents.github.io].

Seminar · Neuroscience · Recording

Noise-induced properties of active dendrites

Farzad Farkhooi
Humboldt University Berlin
Nov 17, 2021

Neuronal dendritic trees display a wide range of nonlinear input integration due to their voltage-dependent active calcium channels. We reveal that in vivo-like fluctuating input substantially enhances nonlinearity in a single dendritic compartment and shifts the input-output relation toward nonmonotonic or bistable dynamics. In particular, with the slow activation of calcium dynamics, we analyze noise-induced bistability and its timescales. We show that bistability induces long-timescale fluctuations that can account for dendritic plateau potentials observed under in vivo conditions. In a multicompartmental model neuron with realistic synaptic input, we show that noise-induced bistability persists over a wide range of parameters. Using Fredholm's theory to calculate the spiking rate of multivariable neurons, we discuss how dendritic bistability shifts the spiking dynamics of single neurons and its implications for network phenomena in the processing of in vivo-like fluctuating input.
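
Noise-induced bistability of the kind described above can be illustrated with generic double-well dynamics (not the paper's calcium model): moderate noise produces rare transitions between wells, i.e., fluctuations on timescales far longer than the integration step.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, sigma, steps = 0.01, 0.35, 200_000
v = np.empty(steps); v[0] = -1.0
for i in range(1, steps):             # dv/dt = v - v^3 plus white noise
    v[i] = v[i-1] + dt * (v[i-1] - v[i-1]**3) + np.sqrt(dt) * sigma * rng.normal()

up, transitions = False, 0            # hysteresis-based event counting
for val in v:
    if not up and val > 0.5:
        up, transitions = True, transitions + 1
    elif up and val < -0.5:
        up, transitions = False, transitions + 1
print(f"well-to-well transitions in {steps * dt:.0f} time units: {transitions}")
```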

Seminar · Neuroscience

Uncertainty and Timescales of Learning and Decision Making

Daeyeol Lee
Johns Hopkins University, Baltimore, USA
Sep 6, 2021

Seminar · Neuroscience · Recording

Acetylcholine modulation of short-term plasticity is critical to reliable long-term plasticity in hippocampal synapses

Rohan Sharma
Suhita lab, Indian Institute of Science Education and Research Pune
Jul 28, 2021

CA3-CA1 synapses in the hippocampus are the initial locus of episodic memory. The action of acetylcholine alters cellular excitability, modifies neuronal networks, and triggers secondary signaling that directly affects long-term plasticity (LTP), the cellular underpinning of memory. It is therefore considered a critical regulator of learning and memory in the brain. Its action via M4 metabotropic receptors in the presynaptic terminal of CA3 neurons and M1 metabotropic receptors in the postsynaptic spines of CA1 neurons produces rich dynamics across multiple timescales. We developed a model to describe the activation of postsynaptic M1 receptors that leads to IP3 production from membrane PIP2 molecules. The binding of IP3 to IP3 receptors in the endoplasmic reticulum (ER) ultimately causes calcium release. This calcium release from the ER activates potassium channels such as the calcium-activated SK channels and alters different aspects of synaptic signaling. In an independent signaling cascade, M1 receptors also directly suppress SK channels and the voltage-activated KCNQ2/3 channels, enhancing postsynaptic excitability. In the CA3 presynaptic terminal, we model the reduction of the voltage sensitivity of voltage-gated calcium channels (VGCCs) and the resulting suppression of neurotransmitter release by the action of M4 receptors. Our results show that the reduced initial release probability caused by acetylcholine alters short-term plasticity (STP) dynamics. We characterize the dichotomy between suppressed neurotransmitter release from CA3 neurons and enhanced excitability of the postsynaptic CA1 spine. Mechanisms underlying STP operate over a few seconds, while those responsible for LTP last for hours, and the two forms of plasticity have been linked with very distinct functions in the brain. We show that the concurrent suppression of neurotransmitter release and increase in postsynaptic sensitivity conserves neurotransmitter vesicles and enhances the reliability of plasticity. Our work establishes a relationship between STP and LTP coordinated by neuromodulation with acetylcholine.
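
The effect of a lowered initial release probability on STP can be illustrated with a standard Tsodyks-Markram-style synapse model (a textbook sketch, not the authors' biophysical model); reducing U, as acetylcholine does via presynaptic M4 receptors, shifts the synapse from depressing toward facilitating and spreads vesicle use across the spike train.

```python
import numpy as np

def psc_train(U, n_spikes=10, isi=0.05, tau_rec=0.5, tau_fac=0.3):
    """Relative PSC amplitudes for a regular spike train (parameters assumed)."""
    R, u, amps = 1.0, U, []                        # resources, release prob.
    for _ in range(n_spikes):
        amps.append(u * R)                         # PSC amplitude at this spike
        R *= 1 - u                                 # vesicle depletion
        R = 1 - (1 - R) * np.exp(-isi / tau_rec)   # recovery during the ISI
        u = U + (u - U) * np.exp(-isi / tau_fac)   # facilitation decay
        u += U * (1 - u)                           # facilitation at next spike
    return np.array(amps)

for U in (0.5, 0.2):                               # control vs. ACh-reduced
    amps = psc_train(U)
    print(f"U={U}: PSC1={amps[0]:.2f}, PSC10={amps[-1]:.2f}")
```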

Seminar · Neuroscience

Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Neural Dynamics & Computation Lab, Stanford University
Jun 17, 2021

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to trade off very different experimental resources, like numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
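
Point (1) above refers to tensor component analysis; a minimal sketch with the tensorly package on a synthetic neurons × time × trials tensor is given below (dimensions, rank, and pipeline are illustrative assumptions, and the lab's own tooling may differ).

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
neurons, time, trials = 50, 80, 40
neuron_f = rng.random((neurons, 1))                        # who participates
time_f = np.exp(-((np.arange(time) - 30) ** 2) / 100).reshape(-1, 1)
trial_f = np.linspace(0.2, 1.0, trials).reshape(-1, 1)     # slow learning trend
data = np.einsum('ir,jr,kr->ijk', neuron_f, time_f, trial_f)
data += 0.01 * rng.random(data.shape)

# One CP component should recover the across-trial "learning" factor.
weights, factors = non_negative_parafac(tl.tensor(data), rank=1)
trial_factor = factors[2] / factors[2].max()
print("recovered trial factor (first 5 trials):", trial_factor.ravel()[:5])
```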

Seminar · Neuroscience · Recording

Structures in space and time - Hierarchical network dynamics in the amygdala

Yael Bitterman
Luethi lab, FMI for Biomedical Research
Jun 16, 2021

In addition to its role in the learning and expression of conditioned behavior, the amygdala has long been implicated in the regulation of persistent states, such as anxiety and drive. Yet it is not evident which projections of the neuronal activity capture the functional role of the network across such different timescales, specifically when behavior and neuronal space are complex and high-dimensional. We applied a data-driven dynamical approach to the analysis of calcium imaging data from the basolateral amygdala, collected while mice performed complex, self-paced behaviors, including spatial exploration, free social interaction, and goal-directed actions. The seemingly complex network dynamics were effectively described by a hierarchical, modular structure that corresponded to behavior on multiple timescales. Our results describe the response of the network activity to perturbations along different dimensions and the interplay between slow, state-like representations and the fast processing of specific events and action schemes. We suggest that hierarchical dynamical models offer a unified framework to capture the involvement of the amygdala in transitions between persistent states underlying such different functions as sensory associative learning, action selection and emotional processing. * Work done in collaboration with Jan Gründemann, Sol Fustinana, Alejandro Tsai and Julien Courtin (@theLüthiLab)

Seminar · Neuroscience

Causal coupling between neural activity, metabolism, and behavior across the Drosophila brain

Kevin Mann
Stanford School of Medicine
Jun 7, 2021

Coordinated activity across networks of neurons is a hallmark of both resting and active behavioral states in many species, including worms, flies, fish, mice and humans. These global patterns alter energy metabolism in the brain over seconds to hours, making oxygen consumption and glucose uptake widely used proxies of neural activity. However, whether changes in neural activity are causally related to changes in metabolic flux in intact circuits on the sub-second timescales associated with behavior is unclear. Moreover, it is unknown whether differences between rest and action are associated with spatiotemporally structured changes in neuronal energy metabolism at the subcellular level. My work combines two-photon microscopy across the fruit fly brain with sensors that allow simultaneous measurements of neural activity and metabolic flux, across both resting and active behavioral states. It demonstrates that neural activity drives changes in metabolic flux, creating a tight coupling between these signals that can be measured across large-scale brain networks. Further, using local optogenetic perturbation, I show that even transient increases in neural activity result in rapid and persistent increases in cytosolic ATP, suggesting that neuronal metabolism predictively allocates resources to meet the energy demands of future neural activity. Finally, these studies reveal that the initiation of even minimal behavioral movements causes large-scale changes in the pattern of neural activity and energy metabolism, revealing unexpectedly widespread engagement of the central brain.

Seminar · Neuroscience

Choosing, fast and slow: Implications of prioritized-sampling models for understanding automaticity and control

Cendri Hutcherson
University of Toronto
Apr 15, 2021

The idea that behavior results from a dynamic interplay between automatic and controlled processing underlies much of decision science, but has also generated considerable controversy. In this talk, I will highlight behavioral and neural data showing how recently developed computational models of decision making can be used to shed new light on whether, when, and how decisions result from distinct processes operating at different timescales. Across diverse domains ranging from altruism to risky choice biases and self-regulation, our work suggests that a model of prioritized attentional sampling and evidence accumulation may provide an alternative explanation for many phenomena previously interpreted as supporting dual process models of choice. However, I also show how some features of the model might be taken as support for specific aspects of dual-process models, providing a way to reconcile conflicting accounts and generating new predictions and insights along the way.
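
A prioritized-sampling account of choice can be sketched with an attention-weighted accumulator in the spirit of attentional drift-diffusion models; all parameter values and the gaze process below are illustrative assumptions, not the speaker's fitted model.

```python
import numpy as np

def addm_trial(v_left, v_right, theta=0.3, drift=0.002, noise=0.02,
               bound=1.0, p_switch=0.01, rng=np.random.default_rng(0)):
    """One trial: evidence for the unattended option is discounted by theta,
    so momentary attention prioritizes which value is sampled."""
    x, t, look_left = 0.0, 0, True
    while abs(x) < bound:
        if rng.random() < p_switch:                # occasional gaze switch
            look_left = not look_left
        w_l, w_r = (1.0, theta) if look_left else (theta, 1.0)
        x += drift * (w_l * v_left - w_r * v_right) + noise * rng.normal()
        t += 1
    return ("left" if x > 0 else "right"), t

choice, rt = addm_trial(v_left=3.0, v_right=2.0)
print(f"choice: {choice}, response time: {rt} steps")
```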

Seminar · Neuroscience

Neural control of motor actions: from whole-brain landscape to millisecond dynamics

Takashi Kawashima
Weizmann Institute
Apr 8, 2021

Animals control motor actions at multiple timescales. We use larval zebrafish and advanced optical microscopy to understand the underlying neural mechanisms. First, we examined the mechanisms of short-term motor learning using whole-brain neural activity imaging. We found that the 5-HT system integrates the sensory outcomes of actions and determines future motor patterns. Second, we established a method for recording spiking activity and membrane potential from a population of neurons during behavior. We identified putative motor command signals and internal copy signals that encode millisecond-scale details of swimming dynamics. These results demonstrate that zebrafish enable a holistic and mechanistic understanding of the neural basis of motor control in vertebrate brains.

Seminar · Neuroscience

Firing Homeostasis in Neural Circuits: From Basic Principles to Malfunctions

Inna Slutsky
Tel Aviv University
Feb 19, 2021

Neural circuit functions are stabilized by homeostatic mechanisms at long timescales in response to changes in experience and learning. However, we still do not know which specific physiological variables are being stabilized, nor which cellular or neural-network components comprise the homeostatic machinery. At this point, most evidence suggests that the distribution of firing rates amongst neurons in a brain circuit is the key variable that is maintained around a circuit-specific set-point value in a process called firing rate homeostasis. Here, I will discuss our recent findings that implicate mitochondria as a central player in mediating firing rate homeostasis and its impairments. While mitochondria are known to regulate neuronal variables such as synaptic vesicle release or intracellular calcium concentration, we searched for the mitochondrial signaling pathways that are essential for homeostatic regulation of firing rates. We utilize basic concepts of control theory to build a framework for classifying possible components of the homeostatic machinery in neural networks. This framework may facilitate the identification of new homeostatic pathways whose malfunctions drive instability of neural circuits in distinct brain disorders.

Seminar · Neuroscience · Recording

The shared predictive roots of motor control and beat-based timing

Jonathan Cannon
MIT, USA
Feb 17, 2021

fMRI results have shown that the supplementary motor area (SMA) and the basal ganglia, most often discussed in their roles in generating action, are engaged by beat-based timing even in the absence of movement. Some have argued that the motor system is “recruited” by beat-based timing tasks due to the presence of motor-like timescales, but a deeper understanding of the roles of these motor structures is lacking. Reviewing a body of motor neurophysiology literature and drawing on the “active inference” framework, I argue that we can see the motor and timing functions of these brain areas as examples of dynamic sub-second prediction informed by sensory event timing. I hypothesize that in both cases, sub-second dynamics in SMA predict the progress of a temporal process outside the brain, and direct pathway activation in basal ganglia selects temporal and sensory predictions for the upcoming interval; the only difference is that in motor processes, these predictions are made manifest through motor effectors. If we can unify our understanding of beat-based timing and motor control, we can draw on the substantial motor neuroscience literature to make conceptual leaps forward in the study of predictive timing and musical rhythm.

Seminar · Neuroscience · Recording

The emergence and modulation of time in neural circuits and behavior

Luca Mazzucato
University of Oregon
Jan 22, 2021

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior and cortical activity. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times can be accelerated or slowed down via gain modulation, flexibly induced by neuromodulation or perturbations; and how gain modulation may control response timing in the visual cortex. Finally, we will present a new biologically plausible way to generate a reservoir of multiple timescales in cortical circuits.
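
The gain-modulation mechanism can be illustrated with a toy self-coupled rate unit (not the circuit models from the talk): increasing the gain strengthens the effective recurrence and slows relaxation, so the same unit responds on a longer timescale.

```python
import numpy as np

def relax_time(gain, w=0.95, dt=0.01, steps=5000):
    """Time for x(t) to decay to 1/e under dx/dt = -x + tanh(gain * w * x),
    an illustrative single-unit stand-in for a gain-modulated circuit."""
    x = 1.0
    for i in range(steps):
        x = x + dt * (-x + np.tanh(gain * w * x))
        if abs(x) < np.exp(-1):
            return i * dt
    return np.inf

for g in (0.5, 0.9, 1.0):        # near gain * w = 1 the dynamics slow down
    print(f"gain={g}: relaxation time ≈ {relax_time(g):.2f}")
```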

Seminar · Neuroscience

From oscillations to laminar responses - characterising the neural circuitry of autobiographical memories

Eleanor Maguire
Wellcome Centre for Human Neuroimaging at UCL
Dec 1, 2020

Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These often decades-old personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age. I will end by describing our progress on leveraging an exciting new technology, optically pumped MEG (OP-MEG), which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.

Seminar · Neuroscience · Recording

The emergence and modulation of time in neural circuits and behavior

Luca Mazzucato
University of Oregon
Nov 25, 2020

Spontaneous behavior in animals and humans shows a striking amount of variability both in the spatial domain (which actions to choose) and temporal domain (when to act). Concatenating actions into sequences and behavioral plans reveals the existence of a hierarchy of timescales ranging from hundreds of milliseconds to minutes. How do multiple timescales emerge from neural circuit dynamics? How do circuits modulate temporal responses to flexibly adapt to changing demands? In this talk, we will present recent results from experiments and theory suggesting a new computational mechanism generating the temporal variability underlying naturalistic behavior. We will show how neural activity from premotor areas unfolds through temporal sequences of attractors, which predict the intention to act. These sequences naturally emerge from recurrent cortical networks, where correlated neural variability plays a crucial role in explaining the observed variability in action timing. We will then discuss how reaction times in these recurrent circuits can be accelerated or slowed down via gain modulation, induced by neuromodulation or perturbations. Finally, we will present a general mechanism producing a reservoir of multiple timescales in recurrent networks.

Seminar · Neuroscience · Recording

Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Stanford University
Oct 16, 2020

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to trade off very different experimental resources, like numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.

Seminar · Neuroscience · Recording

Schemas: events, spaces, semantics, and development

Chris Baldassano
Columbia University
Jul 1, 2020

Understanding and remembering realistic experiences in our everyday lives requires activating many kinds of structured knowledge about the world, including spatial maps, temporal event scripts, and semantic relationships. My recent projects have explored the ways in which we build up this schematic knowledge (during a single experiment and across developmental timescales) and can strategically deploy it to construct event representations that we can store in memory or use to make predictions. I will describe my lab's ongoing work developing new experimental and analysis techniques for conducting functional MRI experiments using narratives, movies, poetry, virtual reality, and "memory experts" to study complex naturalistic schemas.

Seminar · Neuroscience · Recording

Recurrent network models of adaptive and maladaptive learning

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Apr 8, 2020

During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within the Hb activity that captured communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: the early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
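
The current-decomposition step, splitting each Hb neuron's input into per-region contributions via blocks of the trained weight matrix, can be sketched as follows; region sizes, weights, and rates are random stand-ins, but the block-indexing logic is the general idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n = {"Hb": 40, "raphe": 30, "other": 50}     # assumed region sizes
idx, start = {}, 0
for region, size in n.items():               # contiguous index blocks
    idx[region] = slice(start, start + size)
    start += size

N = start
J = rng.normal(scale=1 / np.sqrt(N), size=(N, N))  # stand-in trained weights
r = np.tanh(rng.normal(size=(N, 500)))             # rates over 500 time steps

# Input current into Hb from each source region: J[Hb, src] @ r[src]
currents = {src: J[idx["Hb"], idx[src]] @ r[idx[src]] for src in n}
for src, c in currents.items():
    print(f"{src} -> Hb mean |current|: {np.abs(c).mean():.3f}")
```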
