Latest

SeminarNeuroscience

SYNGAP1 Natural History Study / Multidisciplinary Clinic at Children’s Hospital Colorado

Megan Abbott, MD
Children's Hospital Colorado
Jul 17, 2024
SeminarNeuroscienceRecording

Understanding Machine Learning via Exactly Solvable Statistical Physics Models

Lenka Zdeborová
EPFL
Feb 8, 2023

The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.

SeminarNeuroscience

Don't forget the gametes: Neurodevelopmental pathogenesis starts in the sperm and egg

Jill Escher
Jill Escher is founder of the Escher Fund for Autism, which funds research on non-genetic inheritance, as well as autism-related programs. She is a member of the governing council of the Environmental Mutagenesis and Genomics Society, where she is past chair of the Germ Cell and Heritable Effects special interest group. She also serves as president of the National Council on Severe Autism and past president of Autism Society San Francisco Bay Area. A former lawyer, she and her husband are the pa
Jul 6, 2022

Proper development of the nervous system depends not only on the inherited DNA sequence, but also on proper regulation of gene expression, as controlled in part by epigenetic mechanisms present in the parental gametes. In this presentation, an internationally recognized research advocate explains why researchers concerned about increasingly prevalent neurodevelopmental disorders such as autism and attention deficit hyperactivity disorder should look beyond genetics when probing the origins of dysregulated transcription of brain-related genes. The culprit for a subset of cases, she contends, may lie in the exposure history of the parents, and thus their germ cells. To illustrate how environmentally informed, nongenetic dysfunction may occur, she focuses on the example of parents' histories of exposure to common agents of modern inhalational anesthesia, a highly toxic exposure that in mammalian models has been seen to induce heritable neurodevelopmental abnormality in offspring born of the exposed germline.

SeminarNeuroscienceRecording

Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems

Horacio Rotstein
New Jersey Institute of Technology
Apr 13, 2022

Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history-dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and interactions) and we argue that these are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and we show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
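As a minimal illustration of the kind of band-pass behaviour described above (not a model from the talk), the sketch below computes the impedance of a linear membrane coupled to a slow negative-feedback current; all parameters are invented, and the peak of |Z| at an intermediate frequency is the resonance.

```python
# Minimal sketch: resonance from a low-pass membrane interacting with a slow
# negative-feedback (high-pass) current. Parameters are illustrative only.
import numpy as np

C, g_L = 1.0, 0.1        # capacitance (uF/cm^2) and leak conductance (mS/cm^2)
g_h, tau_w = 0.3, 100.0  # feedback conductance and its time constant (ms)

freqs = np.linspace(0.1, 50, 500)      # input frequencies (Hz)
omega = 2 * np.pi * freqs / 1000.0     # rad/ms, to match ms-based constants

# Linear model: C dV/dt = -g_L*V - g_h*w + I,  tau_w dw/dt = V - w
# Frequency-domain impedance Z(w) = V/I:
Z = 1.0 / (g_L + 1j * omega * C + g_h / (1.0 + 1j * omega * tau_w))

f_res = freqs[np.argmax(np.abs(Z))]
print(f"resonant frequency ~ {f_res:.1f} Hz")   # |Z| peaks at an intermediate frequency
```

Setting g_h = 0 removes the feedback and leaves a pure low-pass profile, which is one way to see how the interaction of low- and high-pass components produces the band-pass filter described above.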

SeminarNeuroscience

Mapping Individual Trajectories of Structural and Cognitive Decline in Mild Cognitive Impairment

Shreya Rajagopal
Psychology, University of Michigan
Mar 25, 2022

The US has an aging population: for the first time in US history, the number of older adults is projected to outnumber that of children by 2034. Combined with the fact that the prevalence of Alzheimer's disease increases exponentially with age, this makes for a worrying trend. Mild cognitive impairment (MCI) is an intermediate stage of cognitive decline between being cognitively normal and having full-blown dementia, with roughly one in three people with MCI progressing to dementia of the Alzheimer's type (DAT). While there is no known way to reverse symptoms once they begin, early prediction of disease can help stall its progression and support early financial planning. While grey matter volume loss in the hippocampus and entorhinal cortex (EC) is a characteristic biomarker of DAT, little is known about the rates at which these volumes decrease within individuals over time in the MCI state. We used longitudinal growth curve models to map individual trajectories of volume loss in subjects with MCI. We then asked whether these rates of volume decrease could predict progression to DAT while individuals were still in the MCI stage. Finally, we evaluated whether these rates of hippocampal and EC volume loss were correlated with individual rates of decline in episodic memory, visuospatial ability, and executive function.
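A minimal sketch of this kind of two-stage analysis, assuming a long-format table with hypothetical column names (subject, years_since_baseline, hipp_volume, progressed_to_dat): a mixed-effects growth curve yields a per-subject slope of volume loss, which is then used to predict progression. This illustrates the general approach, not the study's actual pipeline.

```python
# Illustrative two-stage analysis: per-subject slopes from a longitudinal
# growth-curve (mixed-effects) model, then slope -> progression prediction.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

# Hypothetical long-format data: one row per (subject, visit).
df = pd.read_csv("mci_longitudinal.csv")

# Growth curve: fixed effect of time plus a random intercept and slope per subject.
model = smf.mixedlm("hipp_volume ~ years_since_baseline", df,
                    groups=df["subject"], re_formula="~years_since_baseline")
fit = model.fit()

# Individual rate of volume loss = fixed slope + subject-specific random slope.
slopes = {subj: fit.fe_params["years_since_baseline"] + re["years_since_baseline"]
          for subj, re in fit.random_effects.items()}

subj_df = (df.groupby("subject")["progressed_to_dat"].first()
             .to_frame()
             .assign(slope=lambda d: d.index.map(slopes)))

# Does the individual rate of decline predict later progression to DAT?
clf = LogisticRegression().fit(subj_df[["slope"]], subj_df["progressed_to_dat"])
print("slope coefficient:", clf.coef_[0][0])
```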

SeminarNeuroscience

Adaptive bottleneck to pallium for sequence memory, path integration and mixed selectivity representation

André Longtin
University of Ottawa
Nov 10, 2021

Spike-driven adaptation involves intracellular mechanisms that are initiated by neural firing and lead to the subsequent reduction of spiking rate followed by a recovery back to baseline. We report on long (>0.5 second) recovery times from adaptation in a thalamic-like structure in weakly electric fish. This adaptation process is shown via modeling and experiment to encode, in a spatially invariant manner, the time intervals between event encounters, e.g., with landmarks as the animal learns the location of food. These cells also come in two varieties: ones that care only about the time since the last encounter, and others that care about the history of encounters. We discuss how the two populations can share in the task of representing sequences of events, supporting path integration and converting from egocentric to allocentric representations. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and hilus neuron function in the hippocampus. Finally, we discuss how all the cells of this gateway to the pallium exhibit mixed selectivity for social features of their environment. The data and computational modeling further reveal that, in contrast to a long-held belief, these gymnotiform fish are endowed with a corollary discharge, albeit only for social signalling.
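As a toy illustration of how heterogeneous recovery from adaptation can encode the time since an event (not the authors' model; all parameters are invented), the sketch below builds a population of units whose responses recover exponentially with different time constants and decodes the elapsed interval by maximum likelihood over a grid.

```python
# Toy illustration: a population of adapting units with heterogeneous recovery
# time constants implicitly encodes the time since the last event.
import numpy as np

rng = np.random.default_rng(0)
taus = rng.uniform(0.2, 2.0, size=50)   # recovery time constants (s), heterogeneous
sigma = 0.05                            # response noise

def population_response(t_since_event):
    """Adapted responses recover exponentially toward 1 after an event at t = 0."""
    clean = 1.0 - np.exp(-t_since_event / taus)
    return clean + rng.normal(0.0, sigma, size=taus.size)

def decode_time(r, grid=np.linspace(0.01, 5.0, 500)):
    """Maximum-likelihood elapsed time from one noisy population vector."""
    templates = 1.0 - np.exp(-grid[:, None] / taus[None, :])
    sq_err = ((templates - r[None, :]) ** 2).sum(axis=1)  # Gaussian noise -> least squares
    return grid[np.argmin(sq_err)]

true_t = 1.3
r = population_response(true_t)
print(f"true interval: {true_t:.2f} s, decoded: {decode_time(r):.2f} s")
```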

SeminarNeuroscienceRecording

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard
Oct 26, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long-term memory). Put these pieces together and you have GS6.
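A toy sketch of the control flow described above, with all weights and parameters invented: five guidance maps are combined into a priority map, items are attended in priority order, each attended item is recognized by a diffusion process tuned to finish in roughly 150-200 ms, and an accumulating quitting signal terminates unsuccessful search. For brevity the sketch runs the diffusions strictly serially, whereas GS6 itself runs several recognition processes asynchronously.

```python
# Toy sketch of the GS6-style control flow: priority map -> attention -> per-item
# diffusion decision -> quitting signal. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_items, target_index = 20, 3

# Five guidance sources per item (top-down, bottom-up, history, reward, scene);
# random values stand in for real feature and scene computations.
guidance = rng.random((5, n_items))
weights = np.array([2.0, 1.0, 0.5, 0.5, 1.0])                 # invented relative weights
priority = weights @ guidance + rng.normal(0, 0.1, n_items)   # noisy spatial priority map

def recognize(is_target, drift=0.01, threshold=2.0, noise_sd=0.1, dt_ms=1.0):
    """Diffusion to a 'target' or 'distractor' bound; finishes in roughly 150-200 ms."""
    evidence, t = 0.0, 0.0
    mu = drift if is_target else -drift
    while abs(evidence) < threshold:
        evidence += mu * dt_ms + rng.normal(0.0, noise_sd) * np.sqrt(dt_ms)
        t += dt_ms
    return ("target" if evidence > 0 else "distractor"), t

quitting_signal, quit_threshold, elapsed = 0.0, 15.0, 0.0
for item in np.argsort(priority)[::-1]:        # deploy attention in priority order
    decision, rt = recognize(item == target_index)
    elapsed += rt
    if decision == "target":
        print(f"target found after {elapsed:.0f} ms")
        break
    quitting_signal += 1.0                     # each rejection pushes toward quitting
    if quitting_signal >= quit_threshold:
        print(f"search abandoned after {elapsed:.0f} ms")
        break
else:
    print(f"all items rejected after {elapsed:.0f} ms")
```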

SeminarNeuroscienceRecording

Chapter 1. Reconstructing history

Georg Striedter, Luis Puelles, Paul Cisek
Oct 6, 2021
SeminarNeuroscience

Understanding the role of prediction in sensory encoding

Jason Mattingley
Monash Biomedical Imaging
Jul 29, 2021

At any given moment the brain receives more sensory information than it can use to guide adaptive behaviour, creating the need for mechanisms that promote efficient processing of incoming sensory signals. One way in which the brain might reduce its sensory processing load is to encode successive presentations of the same stimulus in a more efficient form, a process known as neural adaptation. Conversely, when a stimulus violates an expected pattern, it should evoke an enhanced neural response. Such a scheme for sensory encoding has been formalised in predictive coding theories, which propose that recent experience establishes expectations in the brain that generate prediction errors when violated. In this webinar, Professor Jason Mattingley will discuss whether the encoding of elementary visual features is modulated when otherwise identical stimuli are expected or unexpected based upon the history of stimulus presentation. In humans, EEG was employed to measure neural activity evoked by gratings of different orientations, and multivariate forward modelling was used to determine how orientation selectivity is affected for expected versus unexpected stimuli. In mice, two-photon calcium imaging was used to quantify orientation tuning of individual neurons in the primary visual cortex to expected and unexpected gratings. Results revealed enhanced orientation tuning to unexpected visual stimuli, both at the level of whole-brain responses and for individual visual cortex neurons. Professor Mattingley will discuss the implications of these findings for predictive coding theories of sensory encoding. Professor Jason Mattingley is a Laureate Fellow and Foundation Chair in Cognitive Neuroscience at The University of Queensland. His research is directed toward understanding the brain processes that support perception, selective attention and decision-making, in health and disease.
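The multivariate forward-modelling step mentioned above is sketched below in a generic, hypothetical form: orientation-tuned channels form a basis, sensor weights are estimated on training trials by least squares, and the weights are then inverted to reconstruct channel responses on held-out trials (for example, expected versus unexpected gratings). The channel basis, data shapes, and synthetic data are assumptions, not the study's analysis code.

```python
# Generic forward (inverted) encoding model for orientation, with synthetic data.
import numpy as np

n_channels = 8
centers = np.arange(0, 180, 180 / n_channels)           # preferred orientations (deg)

def channel_basis(orientations_deg):
    """Half-wave-rectified cosine tuning curves (orientation is 180-deg periodic)."""
    diff = np.deg2rad(orientations_deg[:, None] - centers[None, :]) * 2
    return np.maximum(0.0, np.cos(diff)) ** 5            # trials x channels

def fit_forward_model(train_data, train_oris):
    """train_data: trials x sensors. Least-squares sensor weights per channel."""
    C = channel_basis(train_oris)                        # trials x channels
    W, *_ = np.linalg.lstsq(C, train_data, rcond=None)   # channels x sensors
    return W

def invert(W, test_data):
    """Reconstruct channel responses (trials x channels) from held-out sensor data."""
    est, *_ = np.linalg.lstsq(W.T, test_data.T, rcond=None)
    return est.T

# Usage with synthetic placeholders:
rng = np.random.default_rng(0)
train_oris = rng.uniform(0, 180, 200)
true_W = rng.normal(size=(n_channels, 64))               # synthetic channel-to-sensor map
train_data = channel_basis(train_oris) @ true_W + rng.normal(0, 0.5, (200, 64))
W = fit_forward_model(train_data, train_oris)
recon = invert(W, train_data[:5])   # in practice: held-out expected vs unexpected trials
print(recon.shape)                  # (5, n_channels)
```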

SeminarNeuroscienceRecording

“From the Sublime to the Stomatopod: the story from beginning to nowhere near the end.”

Justin Marshall
University of Queensland
Jul 12, 2021

“Call me a marine vision scientist. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see what animals see in the watery part of the world. It is a way I have of dividing off the spectrum, and regulating circular polarisation.” Sometimes I wish I had just set out to harpoon a white whale as it would have been easier than studying stomatopod (mantis shrimp) vision. Nowhere near as much fun of course and certainly less dangerous, so in this presentation I track the history of discovery and confusion that stomatopods deliver in trying to understand what they do actually see. The talk unashamedly borrows from that of Mike Bok a few weeks ago (April 13th 2021 “The Blurry Beginnings: etc” talk) as an introduction to the system (do go look at his talk again, it is beautiful!) and goes both backwards and forwards in time, trying to provide an explanation for the design of this visual system. The journey is again one of retinal anatomy and physiology, neuroanatomy, electrophysiology, behaviour and body ornaments but this time focusses more on polarisation vision (Mike covered the colour stuff well). There is a comparative section looking at the cephalopods too and by the end, I hope you will understand where we are at with trying to understand this extraordinary way of seeing the world and why we ‘pod-people’ wave our arms around so much when asked to explain: what do stomatopods see? Maybe, to butcher another quote: “mantis shrimp have been rendered visually beautiful for vision’s sake.”

SeminarNeuroscience

As soon as there was life there was danger

Joseph LeDoux
New York University
Jun 30, 2021

Organisms face challenges to survival throughout life. When we freeze or flee in danger, we often feel fear. Tracing the deep history of danger gives a different perspective. The first cells living billions of years ago had to detect and respond to danger in order to survive. Life is about not being dead, and behavior is a major way that organisms hold death off. Although behavior does not require a nervous system, complex organisms have brain circuits for detecting and responding to danger, the deep roots of which go back to the first cells. But these circuits do not make fear, and fear is not the reason we freeze or flee. Fear is a human invention; a construct we use to account for what happens in our minds when we become aware that we are in harm’s way. This requires a brain that can personally know that it existed in the past, that it is the entity that might be harmed in the present, and that it will cease to exist in the future. If other animals have conscious experiences, they cannot have the kinds of conscious experiences we have because they do not have the kinds of brains we have. This is not meant as a denial of animal consciousness; it is simply a statement about the fact that every species has a different brain. Nor is it a declaration about the wonders of the human brain, since we have done some wonderful, but also horrific, things with our brains. In fact, we are on the way to a climatic disaster that will not, as some suggest, destroy the Earth. But it will make it uninhabitable for our kind, and for other organisms with high energy demands. Bacteria have made it for billions of years and will likely be fine. The rest is up for grabs, and, in a very real sense, up to us.

SeminarNeuroscienceRecording

The history, future and ethics of self-experimentation

Matt Lennon, Zoltan Molnar
NeurotechEU
Jun 4, 2021

Modern day “neurohackers” are radically self-experimenting, attempting genomic modification with CRISPR-Cas9 constructs and electrode insertion into their cortex, amongst a host of other things. Institutions wanting to avoid the risks brought on by these procedures generally avoid involvement with self-experimentation research. But what is the ethical thing to do? Should researchers be allowed or encouraged to self-experiment? Should institutions support or hinder them? Where do you think that this process of self-experimentation could take us? This presentation by Dr Matt Lennon and Professor Zoltan Molnar of the University of Oxford will explore the history, future and ethics of self-experimentation. It will explore notable examples of self-experimenters, including Isaac Newton, Angelo Ruffini and Oliver Sacks, and how a number of these pivotal experiments created paradigm shifts in neuroscience. The presentation will open up a forum for all participants to be involved in asking key ethical questions around what should and should not be allowed in self-experimentation research.

SeminarNeuroscienceRecording

Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖

Grace Lindsay
Gatsby Unit for Computational Neuroscience
May 10, 2021

Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics? — The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories. — At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitory when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience. — Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more. These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.

SeminarNeuroscienceRecording

Recurrent network dynamics lead to interference in sequential learning

Friedrich Schuessler
Barak lab, Technion, Haifa, Israel
Apr 29, 2021

Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix. In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: The random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. New formation thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
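A minimal sketch (not the authors' code) of the simplified fixed-point setting described above: a small linear recurrent network is trained by gradient descent, first to place a fixed point on pattern A and then on an orthogonal pattern B, while the loss on task A is tracked to expose interference. The parameter g sets the initial random connectivity strength and can be varied to probe the effect described in the abstract.

```python
# Sequential training of a linear recurrent network on two orthogonal
# fixed-point tasks, tracking forgetting of task A after learning task B.
import torch

torch.manual_seed(0)
n = 20
p_a = torch.zeros(n); p_a[0] = 1.0          # task A pattern
p_b = torch.zeros(n); p_b[1] = 1.0          # task B pattern, orthogonal to A

g = 0.5                                      # initial random connectivity strength
W = (g / n**0.5) * torch.randn(n, n)
W.requires_grad_(True)
I = torch.eye(n)

def fixed_point(W, u):
    """Fixed point of the linear dynamics x_{t+1} = W x_t + u, i.e. (I - W)^{-1} u."""
    return torch.linalg.solve(I - W, u)

def loss(W, pattern):
    """Train the fixed point reached under input `pattern` to align with `pattern`."""
    return ((fixed_point(W, pattern) - pattern) ** 2).sum()

opt = torch.optim.SGD([W], lr=0.01)
for task_pattern, name in [(p_a, "A"), (p_b, "B")]:      # sequential training: A, then B
    for _ in range(2000):
        opt.zero_grad()
        loss(W, task_pattern).backward()
        opt.step()
    with torch.no_grad():
        print(f"after training {name}: "
              f"loss A = {loss(W, p_a).item():.4f}, loss B = {loss(W, p_b).item():.4f}")
```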

SeminarNeuroscience

The anterior insular cortex in the rat exerts an inhibitory influence over the loss of control of heroin intake and subsequent propensity to relapse

Dhaval Joshi
University of Cambridge, Department of Psychology
Mar 3, 2021

The anterior insular cortex (AIC) has been implicated in addictive behaviour, including the loss of control over drug intake, craving and the propensity to relapse. Evidence suggests that the influence of the AIC on drug-related behaviours is complex: in rats exposed to extended access to cocaine self-administration, the AIC was shown to exert a state-dependent, bidirectional influence on the development and expression of loss of control over drug intake, facilitating the latter but impairing the former. However, it is unclear whether this influence of the AIC is confined to stimulant drugs that have marked peripheral sympathomimetic and anxiogenic effects or whether it extends to other addictive drugs, such as opiates, that lack overt acute aversive peripheral effects. We investigated in outbred rats the effects of bilateral excitotoxic lesions of the AIC, induced either prior to or after long-term exposure to extended access heroin self-administration, on the development and maintenance of escalated heroin intake and the subsequent vulnerability to relapse following abstinence. Compared to sham surgeries, pre-exposure AIC lesions had no effect on the development of loss of control over heroin intake, but lesions made after a history of escalated heroin intake potentiated escalation and also enhanced responding at relapse. These data show that the AIC inhibits or limits the loss of control over heroin intake and the propensity to relapse, in marked contrast to its influence on the loss of control over cocaine intake.

SeminarNeuroscienceRecording

Rare Disease Natural History Studies: Experience from the GNAO1 Natural History study in a pre- and post-pandemic world

Amy R. Viehoever
Washington University, Saint Louis, USA
Feb 9, 2021
SeminarNeuroscience

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard Medical School
Feb 4, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual field” (FVFs): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.

SeminarNeuroscienceRecording

The idea of the brain

Matthew Cobb
University of Manchester
Dec 17, 2020
SeminarNeuroscienceRecording

More than mere association: Are some figure-ground organisation processes mediated by perceptual grouping mechanisms?

Joseph Brooks
Keele University
Dec 8, 2020

Figure-ground organisation and perceptual grouping are classic topics in Gestalt and perceptual psychology. They often appear alongside one another in introductory textbook chapters on perception and have a long history of investigation. However, they are typically discussed as separate processes of perceptual organisation with their own distinct phenomena and mechanisms. Here, I will propose that perceptual grouping and figure-ground organisation are strongly linked. In particular, perceptual grouping can provide a basis for, and may share mechanisms with, a wide range of figure-ground principles. To support this claim, I will describe a new class of figure-ground principles based on perceptual grouping between edges and demonstrate that this inter-edge grouping (IEG) is a powerful influence on figure-ground organisation. I will also draw support from our other results showing that grouping between edges and regions (i.e., edge-region grouping) can affect figure-ground organisation (Palmer & Brooks, 2008) and that contextual influences in figure-ground organisation can be gated by perceptual grouping between edges (Brooks & Driver, 2010). In addition to these modern observations, I will also argue that we can describe some classic figure-ground principles (e.g., symmetry, convexity, etc.) using perceptual grouping mechanisms. These results suggest that figure-ground organisation and perceptual grouping have more than a mere association under the umbrella topics of Gestalt psychology and perceptual organisation. Instead, perceptual grouping may provide a mechanism underlying a broad class of new and extant figure-ground principles.

SeminarNeuroscienceRecording

Cognition plus longevity equals culture: A new framework for understanding human brain evolution

Suzana Herculano-Houzel
Vanderbilt University
Dec 4, 2020

Narratives of human evolution have focused on cortical expansion and increases in brain size relative to body size, while treating changes in life history, such as the age at sexual maturity (and thus the extent of childhood and maternal dependence) or maximal longevity, as evolved features that appeared as consequences of selection for increased brain size, of increased cognitive abilities that decrease mortality rates, or of selection for grandmotherly contribution to feeding the young. Here I build on my recent finding that slower life histories universally accompany increased numbers of cortical neurons across warm-blooded species to propose a simpler framework for human evolution: slower development to sexual maturity and increased post-maturity longevity are features that do not require selection, but rather inevitably and immediately accompany evolutionary increases in numbers of cortical neurons, thus fostering human social interactions and cultural and technological evolution as generational overlap increases.

SeminarNeuroscienceRecording

A conversation with Gerald Westheimer about the history and future of visual neuroscience with a retinal perspective

Gerald Westheimer
UC Berkeley
Nov 6, 2020
SeminarNeuroscienceRecording

Childhood as a solution to explore-exploit tensions

Alison Gopnik
UC Berkeley
Oct 30, 2020

I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore-exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of new empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.

SeminarNeuroscienceRecording

Dynamic computation in the retina by retuning of neurons and synapses

Leon Lagnado
University of Sussex
Sep 16, 2020

How does a circuit of neurons process sensory information? And how are transformations of neural signals altered by changes in synaptic strength? We investigate these questions in the context of the visual system and the lateral line of fish. A distinguishing feature of our approach is the imaging of activity across populations of synapses – the fundamental elements of signal transfer within all brain circuits. A guiding hypothesis is that the plasticity of neurotransmission plays a major part in controlling the input-output relation of sensory circuits, regulating the tuning and sensitivity of neurons to allow adaptation or sensitization to particular features of the input. Sensory systems continuously adjust their input-output relation according to the recent history of the stimulus. A common alteration is a decrease in the gain of the response to a constant feature of the input, termed adaptation. For instance, in the retina, many of the ganglion cells (RGCs) providing the output produce their strongest responses just after the temporal contrast of the stimulus increases, but the response declines if this input is maintained. The advantage of adaptation is that it prevents saturation of the response to strong stimuli and allows for continued signaling of future increases in stimulus strength. But adaptation comes at a cost: a reduced sensitivity to a future decrease in stimulus strength. The retina compensates for this loss of information through an intriguing strategy: while some RGCs adapt following a strong stimulus, a second population gradually becomes sensitized. We found that the underlying circuit mechanisms involve two opposing forms of synaptic plasticity in bipolar cells: synaptic depression causes adaptation and facilitation causes sensitization. Facilitation is in turn caused by depression in inhibitory synapses providing negative feedback. These opposing forms of plasticity can cause simultaneous increases and decreases in contrast-sensitivity of different RGCs, which suggests a general framework for understanding the function of sensory circuits: plasticity of both excitatory and inhibitory synapses controls dynamic changes in tuning and gain.
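A toy, Tsodyks-Markram-style sketch of the logic described above (parameters invented, not the circuit model from the talk): a depressing synapse loses gain during sustained drive, the signature of adaptation, while a facilitating synapse gains efficacy, the signature of sensitization.

```python
# Short-term plasticity sketch: depression vs facilitation under sustained drive.
import numpy as np

def run_synapse(rate_hz, n_spikes=100, tau_rec=0.3, tau_facil=1.0,
                U=0.3, facilitating=False):
    """Efficacy (u*x) of a synapse driven by a regular spike train."""
    dt = 1.0 / rate_hz                 # inter-spike interval (s)
    x, u = 1.0, U                      # available resources, release probability
    efficacy = []
    for _ in range(n_spikes):
        efficacy.append(u * x)
        x -= u * x                                       # resource depletion (depression)
        if facilitating:
            u += U * (1.0 - u)                           # spike-driven facilitation
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)      # resource recovery between spikes
        if facilitating:
            u = U + (u - U) * np.exp(-dt / tau_facil)    # facilitation decays to baseline
    return np.array(efficacy)

dep = run_synapse(20.0, U=0.3, facilitating=False)
fac = run_synapse(20.0, U=0.05, facilitating=True)
print(f"depressing:   first spike {dep[0]:.3f} -> steady state {dep[-1]:.3f}")  # gain drops
print(f"facilitating: first spike {fac[0]:.3f} -> steady state {fac[-1]:.3f}")  # gain grows
```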

SeminarNeuroscience

Reward foraging task and model-based analysis reveal how fruit flies learn the value of available options

Duda Kvitsiani
Aarhus University
Jul 29, 2020

Understanding what drives foraging decisions in animals requires careful manipulation of the value of available options while monitoring animal choices. Value-based decision-making tasks, in combination with formal learning models, have provided both an experimental and theoretical framework to study foraging decisions in lab settings. While these approaches have been used successfully in the past to understand what drives choices in mammals, very little work has been done on fruit flies, even though fruit flies have served as a model organism for many complex behavioural paradigms. To fill this gap we developed a single-animal, trial-based decision-making task, where freely walking flies experienced optogenetic sugar-receptor neuron stimulation. We controlled the value of available options by manipulating the probabilities of optogenetic stimulation. We show that flies integrate a reward history of chosen options and forget the value of unchosen options. We further discover that flies assign higher values to rewards experienced early in the behavioural session, consistent with formal reinforcement learning models. Finally, we show that the probabilistic rewards affect the walking trajectories of flies, suggesting that accumulated value controls the navigation vector of flies in a graded fashion. These findings establish the fruit fly as a model organism to explore the genetic and circuit basis of value-based decisions.
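A minimal sketch (not the authors' model) of the value-learning scheme described: the chosen option's value is updated toward the received reward by a delta rule, the unchosen option's value decays toward zero (forgetting), and choices are drawn from a softmax over the two values. Reward probabilities, learning rate, forgetting rate, and temperature are invented for illustration.

```python
# Delta-rule value learning with forgetting of the unchosen option, softmax choice.
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma, beta = 0.3, 0.1, 3.0     # learning rate, forgetting rate, inverse temperature
reward_probs = [0.8, 0.2]              # probability of optogenetic "sugar" reward per option
values = np.zeros(2)
choices = []

for trial in range(200):
    p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (values[0] - values[1])))  # softmax, 2 options
    choice = 0 if rng.random() < p_choose_0 else 1
    reward = float(rng.random() < reward_probs[choice])

    values[choice] += alpha * (reward - values[choice])   # delta-rule update, chosen option
    values[1 - choice] *= (1.0 - gamma)                   # forgetting of the unchosen option
    choices.append(choice)

print("fraction choosing the richer option:", np.mean(np.array(choices) == 0))
print("final values:", values.round(2))
```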

SeminarNeuroscience

Exploring the Genetics of Parkinson's Disease: Past, Present, and Future

Andrew Singleton
National Institute on Aging
Jul 28, 2020

In this talk, Dr Singleton will discuss the progress made so far in understanding the genetic basis of Parkinson’s disease. He will cover the history of discovery, from the first identification of disease-causing mutations to the state of knowledge in the field today, more than 20 years after that initial discovery. He will then discuss current initiatives and their promise for informing the understanding and treatment of Parkinson’s disease. Lastly, Dr Singleton will talk about current gaps in research and knowledge, and about working together to fill them.

SeminarNeuroscienceRecording

Understanding machine learning via exactly solvable statistical physics models

Lenka Zdeborová
CNRS & CEA Saclay
Jun 24, 2020

The affinity between statistical physics and machine learning has a long history; this is reflected even in machine learning terminology, which is in part adopted from physics. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the learning algorithm.

ePosterNeuroscience

Sensory priors, and choice and outcome history in service of optimal behaviour in noisy environments

Elena Menichini, Victor Pedrosa, Quentin Pajot-Moric, Viktor Plattner, Liang Zhou, Peter Latham, Athena Akrami

COSYNE 2023

ePosterNeuroscience

Inverse reinforcement learning with switching rewards and history dependency for studying behaviors

Jingyang Ke, Feiyang Wu, Jiyi Wang, Jeffrey Markowitz, Anqi Wu

COSYNE 2025

ePosterNeuroscience

Kinematic data predict risk of future falls in patients with Parkinson’s disease without a history of falls: A five-year prospective study

Max Brzezicki, Charalampos Sotirakis, Niall Conway, James J FitzGerald, Chrystalina Antoniades

FENS Forum 2024

history coverage: 43 items (Seminar: 40, ePoster: 3)