Principles of Cognitive Control over Task Focus and Task
2024 BACN Mid-Career Prize Lecture. Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
SYNGAP1 Natural History Study/ Multidisciplinary Clinic at Children’s Hospital Colorado
The embodied brain
Understanding the brain is not only intrinsically fascinating, but also highly relevant to our well-being, since our brain exerts a power over the body that makes it capable of both provoking illness and facilitating the healing process. Bearing in mind this dark force, the brain sciences have undergone, and will continue to undergo, an important revolution, redefining their boundaries beyond the cranial cavity. During this presentation, we will discuss the communication between the brain and other systems that shapes how we feel the external world and how we think. We are starting to unravel how our organs talk to the brain and how the brain talks back. This two-way communication encompasses a complex, body-wide system of nerves, hormones and other signals that will be discussed. This presentation aims to challenge a long history of thinking of bodily regulation as separate from "higher" mental processes. Four centuries ago, René Descartes famously conceptualized the mind as separate from the body; it is now time to embody our mind.
Understanding Machine Learning via Exactly Solvable Statistical Physics Models
The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models, I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.
The brain: A coincidence detector between sensory experiences and internal milieu
Understanding the brain is not only intrinsically fascinating, but also highly relevant to our well-being, since our brain exerts a power over the body that makes it capable of both provoking illness and facilitating the healing process. Bearing in mind this dark force, the brain sciences have undergone, and will continue to undergo, an important revolution, redefining their boundaries beyond the cranial cavity. During this presentation, we will discuss the communication between the brain and other systems that shapes how we feel the external world and how we think. We are starting to unravel how our organs talk to the brain and how the brain talks back. This two-way communication encompasses a complex, body-wide system of nerves, hormones and other signals that will be discussed. This presentation aims to challenge a long history of thinking of bodily regulation as separate from "higher" mental processes. Four centuries ago, René Descartes famously conceptualized the mind as separate from the body; it is now time to embody our mind.
Don't forget the gametes: Neurodevelopmental pathogenesis starts in the sperm and egg
Proper development of the nervous system depends not only on the inherited DNA sequence, but also on proper regulation of gene expression, as controlled in part by epigenetic mechanisms present in the parental gametes. In this presentation, an internationally recognized research advocate explains why researchers concerned about the origins of increasingly prevalent neurodevelopmental disorders, such as autism and attention deficit hyperactivity disorder, should look beyond genetics in probing the origins of dysregulated transcription of brain-related genes. The culprit for a subset of cases, she contends, may lie in the exposure history of the parents, and thus of their germ cells. To illustrate how environmentally informed, nongenetic dysfunction may occur, she focuses on the example of parents' histories of exposure to common agents of modern inhalational anesthesia, a highly toxic exposure that in mammalian models has been seen to induce heritable neurodevelopmental abnormalities in offspring born of the exposed germline.
Untitled Seminar
G. Quattrocolo: Cajal-Retzius cells in the postnatal hippocampus; F. Garcia-Moreno: Mosaic evolutionary history of brain circuits through the lens of neurogenesis
Four questions about brain and behaviour
Tinbergen encouraged ethologists to address animal behaviour by answering four questions, covering physiology, adaptation, phylogeny, and development. This broad approach has implications for neuroscience and psychology, yet, questions about phylogeny are rarely considered in these fields. Here I describe how phylogeny can shed light on our understanding of brain structure and function. Further, I show that we now have or are developing the data and analytical methods necessary to study the natural history of the human mind.
Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history-dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated primarily by nonlinear interactions of low- and high-pass filters. We identify these filters (and their interactions), and we argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance by qualitatively different mechanisms.
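The interaction of low- and high-pass filters described in the abstract can be illustrated with a minimal sketch of the textbook linear case (subthreshold membrane resonance): the membrane capacitance acts as a low-pass filter while a slow negative-feedback current acts as a high-pass filter, and their combination yields a band-pass impedance with a peak at an intermediate frequency. All parameter values below are illustrative assumptions, not values from the talk, which focuses on richer, primarily nonlinear filter interactions.

```python
import numpy as np

def impedance(omega, C=1.0, g_L=0.5, g_h=1.5, tau_w=5.0):
    """Impedance amplitude |Z(omega)| of a linearized neuron model with a
    slow negative-feedback current:
        C dv/dt = -g_L*v - g_h*w + I(t),   tau_w dw/dt = v - w.
    The capacitive term (i*omega*C) is a low-pass element; the slow
    feedback (g_h attenuated at high frequency) is a high-pass element.
    Their combination produces a band-pass (resonant) response."""
    Z = 1.0 / (1j * omega * C + g_L + g_h / (1.0 + 1j * omega * tau_w))
    return np.abs(Z)

omega = np.linspace(0.01, 10.0, 1000)   # input frequencies (rad per time unit)
amp = impedance(omega)
peak = int(np.argmax(amp))              # resonance: maximum at an interior frequency
print(f"resonant frequency ~ {omega[peak]:.2f}, peak gain {amp[peak]:.2f}")
```

With these (assumed) parameters, the gain at the peak exceeds both the low-frequency (DC) and high-frequency responses, which is exactly the operational definition of resonance given in the abstract.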
Mapping Individual Trajectories of Structural and Cognitive Decline in Mild Cognitive Impairment
The US has an aging population: for the first time in US history, the number of older adults is projected to outnumber that of children by 2034. Combined with the fact that the prevalence of Alzheimer's disease increases exponentially with age, this makes for a worrying prospect. Mild cognitive impairment (MCI) is an intermediate stage of cognitive decline between being cognitively normal and having full-blown dementia, with one in every three people with MCI progressing to dementia of the Alzheimer's type (DAT). While there is no known way to reverse symptoms once they begin, early prediction of disease can help stall its progression and help with early financial planning. While grey matter volume loss in the hippocampus and entorhinal cortex (EC) is a characteristic biomarker of DAT, little is known about the rates of decrease of these volumes within individuals in the MCI state across time. We used longitudinal growth curve models to map individual trajectories of volume loss in subjects with MCI. We then looked at whether these rates of volume decrease could predict progression to DAT while still in the MCI stage. Finally, we evaluated whether these rates of hippocampal and EC volume loss were correlated with individual rates of decline in episodic memory, visuospatial ability, and executive function.
Brain Basics: A Peek into the Brain!
My talk will be a ’Neuro 101’ - also called ‘Basics of Neuroscience’. I hope to introduce the field of Neuroscience and give a brief glimpse into the function, history and evolution of the brain. I will guide you through questions such as - What is a brain? What are its basic building blocks and functions?
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e. without any direct instruction or explicit reward. For example, we effortlessly estimate the average tallness of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilised to shape working memory and decision-making behaviours.
Adaptive bottleneck to pallium for sequence memory, path integration and mixed selectivity representation
Spike-driven adaptation involves intracellular mechanisms that are initiated by neural firing and lead to the subsequent reduction of spiking rate followed by a recovery back to baseline. We report on long (>0.5 second) recovery times from adaptation in a thalamic-like structure in weakly electric fish. This adaptation process is shown via modeling and experiment to encode in a spatially invariant manner the time intervals between event encounters, e.g. with landmarks as the animal learns the location of food. These cells also come in two varieties, ones that care only about the time since the last encounter, and others that care about the history of encounters. We discuss how the two populations can share in the task of representing sequences of events, supporting path integration and converting from ego-to-allocentric representations. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events which may be put to good use in path integration and hilus neuron function in hippocampus. Finally we discuss how all the cells of this gateway to the pallium exhibit mixed selectivity of social features of their environment. The data and computational modeling further reveal that, in contrast to a long-held belief, these gymnotiform fish are endowed with a corollary discharge, albeit only for social signalling.
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
A Functional Approach to Analogical Reasoning in Scientific Practice
The talk argues for a new approach to analysing analogical reasoning in scientific practice. Traditionally, philosophers of science tend to analyse analogical reasoning in either a top-down way or a bottom-up way. Examples of top-down approaches include Mary Hesse’s seminal work (1963) and Paul Bartha’s articulation model (2010), while the most popular bottom-up approach is John Norton’s material approach (2018). I will address the problems of these traditional approaches and introduce an alternative approach, which is motivated by my exemplar-based approach to the history of science, defended in my recent book (2020).
Chapter 1. Reconstructing history
Understanding the role of prediction in sensory encoding
At any given moment the brain receives more sensory information than it can use to guide adaptive behaviour, creating the need for mechanisms that promote efficient processing of incoming sensory signals. One way in which the brain might reduce its sensory processing load is to encode successive presentations of the same stimulus in a more efficient form, a process known as neural adaptation. Conversely, when a stimulus violates an expected pattern, it should evoke an enhanced neural response. Such a scheme for sensory encoding has been formalised in predictive coding theories, which propose that recent experience establishes expectations in the brain that generate prediction errors when violated. In this webinar, Professor Jason Mattingley will discuss whether the encoding of elementary visual features is modulated when otherwise identical stimuli are expected or unexpected based upon the history of stimulus presentation. In humans, EEG was employed to measure neural activity evoked by gratings of different orientations, and multivariate forward modelling was used to determine how orientation selectivity is affected for expected versus unexpected stimuli. In mice, two-photon calcium imaging was used to quantify orientation tuning of individual neurons in the primary visual cortex to expected and unexpected gratings. Results revealed enhanced orientation tuning to unexpected visual stimuli, both at the level of whole-brain responses and for individual visual cortex neurons. Professor Mattingley will discuss the implications of these findings for predictive coding theories of sensory encoding. Professor Jason Mattingley is a Laureate Fellow and Foundation Chair in Cognitive Neuroscience at The University of Queensland. His research is directed toward understanding the brain processes that support perception, selective attention and decision-making, in health and disease.
“From the Sublime to the Stomatopod: the story from beginning to nowhere near the end.”
“Call me a marine vision scientist. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see what animals see in the watery part of the world. It is a way I have of dividing off the spectrum, and regulating circular polarisation.” Sometimes I wish I had just set out to harpoon a white whale, as it would have been easier than studying stomatopod (mantis shrimp) vision. Nowhere near as much fun, of course, and certainly less dangerous, so in this presentation I track the history of discovery and confusion that stomatopods deliver in trying to understand what they actually do see. The talk unashamedly borrows from that of Mike Bok a few weeks ago (April 13th 2021 “The Blurry Beginnings: etc” talk) as an introduction to the system (do go look at his talk again, it is beautiful!) and goes both backwards and forwards in time, trying to provide an explanation for the design of this visual system. The journey is again one of retinal anatomy and physiology, neuroanatomy, electrophysiology, behaviour and body ornaments, but this time focusses more on polarisation vision (Mike covered the colour stuff well). There is a comparative section looking at the cephalopods too, and by the end, I hope you will understand where we are at with trying to understand this extraordinary way of seeing the world, and why we ‘pod-people’ wave our arms around so much when asked to explain: what do stomatopods see? Maybe, to butcher another quote: “mantis shrimp have been rendered visually beautiful for vision’s sake.”
Making memories in mice
Understanding how the brain uses information is a fundamental goal of neuroscience. Several human disorders (ranging from autism spectrum disorder to PTSD to Alzheimer’s disease) may stem from disrupted information processing. Therefore, this basic knowledge is not only critical for understanding normal brain function, but also vital for the development of new treatment strategies for these disorders. Memory may be defined as the retention over time of internal representations gained through experience, and the capacity to reconstruct these representations at later times. Long-lasting physical brain changes (‘engrams’) are thought to encode these internal representations. The concept of a physical memory trace likely originated in ancient Greece, although it wasn’t until 1904 that Richard Semon first coined the term ‘engram’. Despite its long history, finding a specific engram has been challenging, likely because an engram is encoded at multiple levels (epigenetic, synaptic, cell assembly). My lab is interested in understanding how specific neurons are recruited or allocated to an engram, and how neuronal membership in an engram may change over time or with new experience. Here I will describe both older and new unpublished data in our efforts to understand memories in mice.
As soon as there was life there was danger
Organisms face challenges to survival throughout life. When we freeze or flee in danger, we often feel fear. Tracing the deep history of danger gives a different perspective. The first cells living billions of years ago had to detect and respond to danger in order to survive. Life is about not being dead, and behavior is a major way that organisms hold death off. Although behavior does not require a nervous system, complex organisms have brain circuits for detecting and responding to danger, the deep roots of which go back to the first cells. But these circuits do not make fear, and fear is not the cause of why we freeze or flee. Fear is a human invention; a construct we use to account for what happens in our minds when we become aware that we are in harm’s way. This requires a brain that can personally know that it existed in the past, that it is the entity that might be harmed in the present, and that it will cease to exist in the future. If other animals have conscious experiences, they cannot have the kinds of conscious experiences we have because they do not have the kinds of brains we have. This is not meant as a denial of animal consciousness; it is simply a statement about the fact that every species has a different brain. Nor is it a declaration about the wonders of the human brain, since we have done some wonderful, but also horrific, things with our brains. In fact, we are on the way to a climate disaster that will not, as some suggest, destroy the Earth. But it will make it uninhabitable for our kind, and for other organisms with high energy demands. Bacteria have made it for billions of years and will likely be fine. The rest is up for grabs, and, in a very real sense, up to us.
Evolving Neural Networks
Evolution has shaped neural circuits in a very specific manner, slowly and aimlessly incorporating computational innovations that increased newly arisen species' chances to survive and reproduce. The discoveries made by the Evolutionary Developmental (Evo-Devo) biology field during the last decades have been crucial for our understanding of the gradual emergence of such innovations. In turn, Computational Neuroscience practitioners modeling the brain are becoming increasingly aware of the need to build models that incorporate these innovations to replicate the computational strategies used by the brain to solve a given task. The goal of this workshop is to bring together experts from Systems and Computational Neuroscience, Machine Learning and the Evo-Devo field to discuss if and how knowing the evolutionary history of neural circuits can help us understand the way the brain works, as well as the relative importance of learned vs. innate neural mechanisms.
The history, future and ethics of self-experimentation
Modern-day “neurohackers” are radically self-experimenting, attempting genomic modification with CRISPR-Cas9 constructs and electrode insertion into their cortex, amongst a host of other things. Institutions wanting to avoid the risks brought on by these procedures generally avoid involvement with self-experimenting research. But what is the ethical thing to do? Should researchers be allowed or encouraged to self-experiment? Should institutions support or hinder them? Where do you think this process of self-experimentation could take us? This presentation by Dr Matt Lennon and Professor Zoltan Molnar of the University of Oxford will explore the history, future and ethics of self-experimentation. It will explore notable examples of self-experimenters, including Isaac Newton, Angelo Ruffini and Oliver Sacks, and how a number of these pivotal experiments created paradigm shifts in neuroscience. The presentation will open up a forum for all participants to be involved in asking key ethical questions around what should and should not be allowed in self-experimentation research.
On cognitive maps and reinforcement learning in large-scale animal behaviour
Bats are extreme aviators and amazing navigators. Many bat species nightly commute dozens of kilometres in search of food, and some bat species annually migrate over thousands of kilometres. Studying bats in their natural environment has always been extremely challenging because of their small size (mostly <50 g) and agile nature. We have recently developed novel miniature technology allowing us to GPS-tag small bats, thus opening a new window to document their behaviour in the wild. We have used this technology to track fruit-bat pups over 5 months from birth to adulthood. Following the bats’ full movement history allowed us to show that they use novel shortcuts, which are typical of cognitive-map-based navigation. In a second study, we examined how nectar-feeding bats make foraging decisions under competition. We show that by relying on a simple reinforcement learning strategy, the bats can divide the resource between them without aggression or communication. Together, these results demonstrate the power of the large-scale natural approach for studying animal behaviour.
Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖
Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics? The answer is we can’t, and that is why I wrote this book.

While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories.

At the same time, I know that both mathematics and brain science are on their own intimidating topics to the average reader and may seem downright prohibitive when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience.

Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more.
These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.
Recurrent network dynamics lead to interference in sequential learning
Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.
In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: The random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. The former thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
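The basic interference mechanism the abstract starts from, correlated weight updates traced back to input correlations, can be illustrated with a toy sketch (made-up patterns and hyperparameters, not the authors' tasks, networks, or code): a shared linear map is trained by gradient descent on task A, then on a task B whose input overlaps with A's, and the task-B updates partially overwrite the task-A solution.

```python
import numpy as np

# Shared 2x2 weight matrix, learned sequentially on two toy tasks.
W = np.zeros((2, 2))

# Each task: map one input pattern to one target. x_b is deliberately
# correlated with x_a (dot product 0.8), so gradient updates for task B
# move W along directions that task A also relies on.
x_a, y_a = np.array([1.0, 0.0]), np.array([1.0, 1.0])
x_b, y_b = np.array([0.8, 0.6]), np.array([-1.0, 0.5])

def loss(W, x, y):
    return float(np.sum((W @ x - y) ** 2))

def train(W, x, y, lr=0.1, steps=500):
    # Plain gradient descent on the squared error for a single pattern.
    for _ in range(steps):
        W -= lr * np.outer(W @ x - y, x)
    return W

W = train(W, x_a, y_a)
loss_A_before = loss(W, x_a, y_a)   # near zero after training on A
W = train(W, x_b, y_b)
loss_A_after = loss(W, x_a, y_a)    # clearly above zero: B overwrote part of A
print(loss_A_before, loss_A_after)
```

With orthogonal inputs this linear model shows no interference at all (the update `outer(error, x_b)` never moves `W` along `x_a`), which is why the abstract's observation of forgetting even for orthogonal tasks in RNNs points to the internally generated dynamics rather than the inputs.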
The anterior insular cortex in the rat exerts an inhibitory influence over the loss of control of heroin intake and subsequent propensity to relapse
The anterior insular cortex (AIC) has been implicated in addictive behaviour, including the loss of control over drug intake, craving and the propensity to relapse. Evidence suggests that the influence of the AIC on drug-related behaviours is complex: in rats given extended access to cocaine self-administration, the AIC was shown to exert a state-dependent, bidirectional influence on the development and expression of loss of control over drug intake, facilitating the latter but impairing the former. However, it is unclear whether this influence of the AIC is confined to stimulant drugs, which have marked peripheral sympathomimetic and anxiogenic effects, or whether it extends to other addictive drugs, such as opiates, that lack overt acute aversive peripheral effects. We investigated in outbred rats the effects of bilateral excitotoxic lesions of the AIC, induced either before or after long-term exposure to extended access heroin self-administration, on the development and maintenance of escalated heroin intake and the subsequent vulnerability to relapse following abstinence. Compared to sham surgeries, pre-exposure AIC lesions had no effect on the development of loss of control over heroin intake, but lesions made after a history of escalated heroin intake potentiated escalation and also enhanced responding at relapse. These data show that the AIC inhibits or limits the loss of control over heroin intake and the propensity to relapse, in marked contrast to its influence on the loss of control over cocaine intake.
Rare Disease Natural History Studies: Experience from the GNAO1 Natural History study in a pre- and post-pandemic world
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual field” (FVFs): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.
The idea of the brain
More than mere association: Are some figure-ground organisation processes mediated by perceptual grouping mechanisms?
Figure-ground organisation and perceptual grouping are classic topics in Gestalt and perceptual psychology. They often appear alongside one another in introductory textbook chapters on perception and have a long history of investigation. However, they are typically discussed as separate processes of perceptual organisation with their own distinct phenomena and mechanisms. Here, I will propose that perceptual grouping and figure-ground organisation are strongly linked. In particular, perceptual grouping can provide a basis for, and may share mechanisms with, a wide range of figure-ground principles. To support this claim, I will describe a new class of figure-ground principles based on perceptual grouping between edges and demonstrate that this inter-edge grouping (IEG) is a powerful influence on figure-ground organisation. I will also draw support from our other results showing that grouping between edges and regions (i.e., edge-region grouping) can affect figure-ground organisation (Palmer & Brooks, 2008) and that contextual influences in figure-ground organisation can be gated by perceptual grouping between edges (Brooks & Driver, 2010). In addition to these modern observations, I will also argue that we can describe some classic figure-ground principles (e.g., symmetry, convexity, etc.) using perceptual grouping mechanisms. These results suggest that figure-ground organisation and perceptual grouping have more than a mere association under the umbrella topics of Gestalt psychology and perceptual organisation. Instead, perceptual grouping may provide a mechanism underlying a broad class of new and extant figure-ground principles.
Cognition plus longevity equals culture: A new framework for understanding human brain evolution
Narratives of human evolution have focused on cortical expansion and increases in brain size relative to body size, treating changes in life history, such as age at sexual maturity (and thus the length of childhood and maternal dependence) or maximal longevity, as evolved features that appeared as consequences of selection for increased brain size, for increased cognitive abilities that decrease mortality rates, or for grandmotherly contribution to feeding the young. Here I build on my recent finding that slower life histories universally accompany increased numbers of cortical neurons across warm-blooded species to propose a simpler framework for human evolution: slower development to sexual maturity and increased post-maturity longevity do not require selection, but rather inevitably and immediately accompany evolutionary increases in numbers of cortical neurons, thus fostering human social interactions and cultural and technological evolution as generational overlap increases.
Neural mechanisms of aggression
Aggression is an innate social behavior essential for competing for resources, securing mates, defending territory and protecting the safety of oneself and family. In the last decade, significant progress has been made towards an understanding of the neural circuit underlying aggression using a set of modern neuroscience tools. Here, I will talk about the history and recent progress in the study of aggression.
On cognitive maps and reinforcement learning in large-scale animal behaviour
Bats are extreme aviators and amazing navigators. Many bat species nightly commute dozens of kilometres in search of food, and some bat species annually migrate over thousands of kilometres. Studying bats in their natural environment has always been extremely challenging because of their small size (mostly <50 g) and agile nature. We have recently developed novel miniature technology allowing us to GPS-tag small bats, thus opening a new window to document their behaviour in the wild. We have used this technology to track fruit-bat pups over 5 months from birth to adulthood. Following the bats’ full movement history allowed us to show that they use novel short-cuts which are typical of cognitive-map-based navigation. In a second study, we examined how nectar-feeding bats make foraging decisions under competition. We show that by relying on a simple reinforcement learning strategy, the bats can divide the resource between them without aggression or communication. Together, these results demonstrate the power of the large-scale natural approach for studying animal behaviour.
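The resource-division result can be illustrated with a toy sketch (hypothetical parameters, patch names, and update rule, not the study's fitted model): each simulated bat updates a value estimate only for the patch it visits, and bats that land on the same patch split its yield, so a small initial difference in estimates (standing in for exploration noise) is amplified into a stable division of patches with no communication between agents.

```python
import random

def greedy_choice(q, epsilon, rng):
    """Epsilon-greedy choice over patches (assumed strategy for illustration)."""
    if rng.random() < epsilon:
        return rng.choice(list(q))
    return max(q, key=q.get)

def forage_round(agents, yields, epsilon, alpha, rng):
    """One round: each bat picks a patch; bats sharing a patch split its yield."""
    picks = {name: greedy_choice(q, epsilon, rng) for name, q in agents.items()}
    for name, patch in picks.items():
        visitors = sum(1 for p in picks.values() if p == patch)
        reward = yields[patch] / visitors        # competition dilutes the payoff
        q = agents[name]
        q[patch] += alpha * (reward - q[patch])  # update only the visited patch
    return picks

# Two bats with slightly different initial estimates; two equally rich patches.
rng = random.Random(1)
agents = {"bat1": {"p0": 0.6, "p1": 0.5}, "bat2": {"p0": 0.5, "p1": 0.6}}
yields = {"p0": 1.0, "p1": 1.0}
for _ in range(50):
    picks = forage_round(agents, yields, epsilon=0.0, alpha=0.3, rng=rng)
```

With exploration switched off for clarity, the two bats settle on different patches and each one's estimate for its own patch climbs toward the undivided yield.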
A conversation with Gerald Westheimer about the history and future of visual neuroscience with a retinal perspective
Childhood as a solution to explore-exploit tensions
I argue that the evolution of our life history, with its distinctively long, protected human childhood allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore-exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of new empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.
Dynamic computation in the retina by retuning of neurons and synapses
How does a circuit of neurons process sensory information? And how are transformations of neural signals altered by changes in synaptic strength? We investigate these questions in the context of the visual system and the lateral line of fish. A distinguishing feature of our approach is the imaging of activity across populations of synapses – the fundamental elements of signal transfer within all brain circuits. A guiding hypothesis is that the plasticity of neurotransmission plays a major part in controlling the input-output relation of sensory circuits, regulating the tuning and sensitivity of neurons to allow adaptation or sensitization to particular features of the input. Sensory systems continuously adjust their input-output relation according to the recent history of the stimulus. A common alteration is a decrease in the gain of the response to a constant feature of the input, termed adaptation. For instance, in the retina, many of the ganglion cells (RGCs) providing the output produce their strongest responses just after the temporal contrast of the stimulus increases, but the response declines if this input is maintained. The advantage of adaptation is that it prevents saturation of the response to strong stimuli and allows for continued signaling of future increases in stimulus strength. But adaptation comes at a cost: a reduced sensitivity to a future decrease in stimulus strength. The retina compensates for this loss of information through an intriguing strategy: while some RGCs adapt following a strong stimulus, a second population gradually becomes sensitized. We found that the underlying circuit mechanisms involve two opposing forms of synaptic plasticity in bipolar cells: synaptic depression causes adaptation and facilitation causes sensitization. Facilitation is in turn caused by depression in inhibitory synapses providing negative feedback. 
These opposing forms of plasticity can cause simultaneous increases and decreases in the contrast-sensitivity of different RGCs, which suggests a general framework for understanding the function of sensory circuits: plasticity of both excitatory and inhibitory synapses controls dynamic changes in tuning and gain.
Reward foraging task, and model-based analysis reveal how fruit flies learn the value of available options
Understanding what drives foraging decisions in animals requires careful manipulation of the value of available options while monitoring animal choices. Value-based decision-making tasks, in combination with formal learning models, have provided both an experimental and a theoretical framework for studying foraging decisions in lab settings. While these approaches were successfully used in the past to understand what drives choices in mammals, very little work has been done on fruit flies, even though fruit flies have served as a model organism for many complex behavioural paradigms. To fill this gap, we developed a single-animal, trial-based decision-making task, where freely walking flies experienced optogenetic sugar-receptor neuron stimulation. We controlled the value of available options by manipulating the probabilities of optogenetic stimulation. We show that flies integrate the reward history of chosen options and forget the value of unchosen options. We further discover that flies assign higher values to rewards experienced early in the behavioural session, consistent with formal reinforcement learning models. Finally, we show that the probabilistic rewards affect the walking trajectories of flies, suggesting that accumulated value controls the navigation vector of flies in a graded fashion. These findings establish the fruit fly as a model organism to explore the genetic and circuit basis of value-based decisions.
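The learning rule described in the abstract can be sketched as a Rescorla-Wagner-style update (assumed parameter names and rates, not the authors' fitted model): the chosen option's value moves toward the received reward, while unchosen values decay toward zero, implementing forgetting.

```python
import random

def update_values(values, chosen, reward, alpha=0.3, decay=0.1):
    """One trial of value learning with forgetting.
    values: dict mapping option -> current value estimate
    chosen: the option the fly picked this trial
    reward: 1 if optogenetic stimulation was delivered, else 0
    alpha:  learning rate for the chosen option (assumed value)
    decay:  forgetting rate for unchosen options (assumed value)
    """
    for option in values:
        if option == chosen:
            values[option] += alpha * (reward - values[option])
        else:
            values[option] *= 1.0 - decay  # unchosen value is forgotten
    return values

# Simulated session: option 'A' is rewarded on 80% of choices, 'B' on 20%.
rng = random.Random(0)
values = {"A": 0.0, "B": 0.0}
for _ in range(200):
    chosen = rng.choice(["A", "B"])
    p_reward = 0.8 if chosen == "A" else 0.2
    values = update_values(values, chosen, int(rng.random() < p_reward))
```

After the simulated session, the estimate for the richer option ends up higher, while forgetting keeps both estimates tracking recent rather than lifetime reward rates.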
Exploring the Genetics of Parkinson's Disease: Past, Present, and Future
In this talk, Dr Singleton will discuss the progress made so far in understanding the genetic basis of Parkinson’s disease. He will cover the history of discovery, from the first identification of disease-causing mutations to the state of knowledge in the field today, more than 20 years after that initial discovery. He will then discuss current initiatives and their promise for informing the understanding and treatment of Parkinson’s disease. Lastly, Dr Singleton will talk about current gaps in research and knowledge and about working together to fill them.
Understanding machine learning via exactly solvable statistical physics models
The affinity between statistical physics and machine learning has a long history; this is reflected even in machine learning terminology, which is in part adopted from physics. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the learning algorithm.
Sensory priors, and choice and outcome history in service of optimal behaviour in noisy environments
COSYNE 2023
Inverse reinforcement learning with switching rewards and history dependency for studying behaviors
COSYNE 2025
Kinematic data predict risk of future falls in patients with Parkinson’s disease without a history of falls: A five-year prospective study
FENS Forum 2024