
Abstraction


Discover seminars, jobs, and research tagged with abstraction across World Wide.
32 curated items · 28 Seminars · 4 ePosters
Updated over 2 years ago
Seminar · Neuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 5, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanism, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based descriptions have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
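In practice, factors of the kind described above are typically recovered by dimensionality reduction on binned population activity. As a minimal sketch (not the speaker's actual pipeline; the sizes and noise level here are arbitrary), the following generates activity driven by a few latent factors and recovers their subspace with PCA:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 latent factors driving 100 neurons over 500 time bins
T, n_neurons, n_factors = 500, 100, 3
latents = rng.standard_normal((T, n_factors)).cumsum(axis=0)  # smooth-ish trajectories
loading = rng.standard_normal((n_factors, n_neurons))
rates = latents @ loading + 0.1 * rng.standard_normal((T, n_neurons))

# PCA via SVD on mean-centered activity
X = rates - rates.mean(axis=0)
_, S, _ = np.linalg.svd(X, full_matrices=False)
var_explained = S**2 / np.sum(S**2)

# Three components capture nearly all variance: the "factors" are recoverable
print(round(var_explained[:3].sum(), 3))  # close to 1.0
```

The point of the toy: although 100 neurons were recorded, the computation lives in a 3-dimensional factor space that simple linear methods can expose.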

Seminar · Neuroscience · Recording

Multimodal Blending

Seana Coulson
University of California, San Diego
Feb 8, 2023

In this talk, I’ll consider how new ideas emerge from old ones via the process of conceptual blending. I’ll start by considering analogical reasoning in problem solving and the role conceptual blending plays in these problem-solving contexts. Then I’ll consider blending in multi-modal contexts, including timelines, memes (viz. image macros), and, if time allows, Zoom meetings. I suggest that mappings analogy researchers have traditionally considered superficial are often important for the development of novel abstractions. Likewise, the analogue portion of multimodal blends anchors their generative capacity. Overall, these observations underscore the extent to which meaning is a socially distributed process whose intermediate products are stored in cognitive artifacts such as text and digital images.

Seminar · Neuroscience · Recording

A biologically plausible inhibitory plasticity rule for world-model learning in SNNs

Z. Liao
Columbia
Nov 9, 2022

Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay, in the hippocampus, of sequences observed during online exploration. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world rather than from verbatim previous experiences. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike-timing-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously mainly been studied at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potentially distinct conceptual role for inhibition in computing with SNNs.
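As a generic illustration of the kind of rule named in the abstract (the paper's actual window shape and parameters are not given here, so the constants below are assumptions), a symmetric Hebbian STDP rule at an inhibitory synapse strengthens inhibition whenever pre- and postsynaptic spikes are near-coincident, regardless of their order:

```python
import numpy as np

def inhibitory_stdp(pre_spikes, post_spikes, tau=20.0, lr=0.01):
    """Symmetric Hebbian STDP for an inhibitory synapse (illustrative).

    Every pre/post spike pair contributes lr * exp(-|dt| / tau), so
    near-coincident firing strengthens inhibition regardless of order.
    Spike times are in milliseconds; tau sets the STDP window width.
    """
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dw += lr * np.exp(-abs(t_post - t_pre) / tau)
    return dw

# Correlated firing drives a much larger inhibitory weight change
correlated = inhibitory_stdp([10, 50, 90], [12, 49, 91])
uncorrelated = inhibitory_stdp([10, 50, 90], [210, 350, 500])
print(correlated > uncorrelated)  # True
```

Under such a rule, inhibition grows fastest onto cells that fire together with their inputs, which is one way a network can learn to suppress replay of co-active but task-irrelevant patterns.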

Seminar · Neuroscience · Recording

Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings

Matthew Kmiecik
Evanston Hospital
Oct 19, 2022

Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task, with both minimal and relationally directed instruction, to observe changes in performance across three levels of relational complexity: perceptual, analogical, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks, and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher-performing participants (those above chance on the final block of system mappings) solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores than lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system-mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.

Seminar · Neuroscience · Recording

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta-FAIR & Meta AI
Oct 18, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.
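The abstract describes H-JEPA only at the architectural level. As a rough schematic of the joint-embedding predictive idea (toy linear encoders of my own choosing, not the paper's networks), the loss is computed between a predicted embedding and the target encoder's embedding, rather than between a reconstruction and the raw input:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear encoders and predictor (stand-ins for deep networks)
d_in, d_emb = 32, 8
enc_x = rng.standard_normal((d_in, d_emb)) / np.sqrt(d_in)  # context encoder
enc_y = rng.standard_normal((d_in, d_emb)) / np.sqrt(d_in)  # target encoder
predictor = rng.standard_normal((d_emb, d_emb)) / np.sqrt(d_emb)

x = rng.standard_normal(d_in)            # context, e.g. current observation
y = x + 0.1 * rng.standard_normal(d_in)  # target, e.g. a future observation

# JEPA-style objective: predict the target's embedding, not the target itself,
# so the model is free to discard unpredictable low-level detail
s_x, s_y = x @ enc_x, y @ enc_y
loss = np.mean((s_x @ predictor - s_y) ** 2)
print(loss >= 0.0)  # True
```

Predicting in embedding space is the design choice that distinguishes joint-embedding architectures from generative (pixel-reconstructing) world models; the hierarchical part of H-JEPA stacks such modules over increasing time scales.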

Seminar · Neuroscience

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta Fair
Oct 9, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf

Seminar · Neuroscience

Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex

Mohamady El-Gaby
University of Oxford
Sep 27, 2022

New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if the animals have learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.

Seminar · Neuroscience

Differences between beginning and advanced students using specific analogical stimuli during design-by-analogy

Han Lu
Tongji University
Mar 23, 2022

Studies have reported the effects on designers of different types and different levels of abstraction of analogical stimuli. However, the effects on designers of specific, single visual analogical stimuli have not been reported; we define this type of stimulus as a specific analogical stimulus. We used the extended linkography method to analyze the facilitating and limiting effects of specific analogical stimuli and free-association analogical stimuli (nonspecific analogical stimuli) on students' creativity at different design levels. Through an empirical study, we explored the differences in the effects of specific analogical stimuli on students at different design levels. This clarifies the use of analogical stimuli in design and the teaching of analogical design methods in design education.

Seminar · Neuroscience · Recording

Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach

Elisa Felsche
Max Planck Institute for Evolutionary Anthropology
Mar 9, 2022

Abstract concepts are a powerful tool to store information efficiently and to make wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms like the relational match-to-sample task often show a failure to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulties solving those tasks and often succeed only after long training regimes. Given the huge influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, well known in the infant and cognitive modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction, adapted to each species’ reward preferences.

Seminar · Neuroscience · Recording

Implementing structure mapping as a prior in deep learning models for abstract reasoning

Shashank Shekhar
University of Guelph
Mar 2, 2022

Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.

Seminar · Neuroscience

Reasoning Ability: Neural Mechanisms, Development, and Plasticity

Silvia A. Bunge, PhD
Professor, Department of Psychology & Helen Wills Neuroscience Institute, Un ...
Feb 15, 2022

Relational thinking, or the process of identifying and integrating relations between mental representations, is regularly invoked during reasoning. This mental capacity enables us to draw higher-order abstractions and generalize across situations and contexts, and we have argued that it should be included in the pantheon of executive functions. In this talk, I will briefly review our lab's work characterizing the roles of lateral prefrontal and parietal regions in relational thinking. I will then discuss structural and functional predictors of individual differences and developmental changes in reasoning.

Seminar · Neuroscience

The self-consistent nature of visual perception

Alan Stocker
University of Pennsylvania
Dec 7, 2021

Vision provides us with a holistic interpretation of the world that is, with very few exceptions, coherent and consistent across multiple levels of abstraction, from scene to objects to features. In this talk I will present results from past and ongoing work in my laboratory that investigates the role top-down signals play in establishing such coherent perceptual experience. Based on the results of several psychophysical experiments I will introduce a theory of “self-consistent inference” and show how it can account for human perceptual behavior. The talk will close with a discussion of how the theory can help us understand more cognitive processes.

Seminar · Neuroscience · Recording

Abstraction doesn't happen all at once (despite what some models of concept learning suggest)

Micah Goldwater
University of Sydney
Nov 17, 2021

In the past few years, there has been growing evidence that the basic ability for relational generalization starts in early infancy, with 3-month-olds seeming to learn relational abstractions with little training. Further, work with toddlers seems to suggest that relational generalizations are no more difficult than those based on objects, and that toddlers can readily consider both simultaneously. Likewise, causal learning research with adults suggests that people infer causal relationships at multiple levels of abstraction simultaneously as they learn about novel causal systems. These findings all appear counter to theories of concept learning which posit that when concepts are first learned they tend to be concrete (tied to specific contexts and features), and that abstraction proceeds incrementally as learners encounter more examples. The current talk will not question the veracity of any of these findings, but will present several others from my and others’ research on relational learning suggesting that when the perceptual or conceptual content becomes more complex, patterns of incremental abstraction re-emerge. Further, the specific contexts and task parameters that support or hinder abstraction reveal the underlying cognitive processes. I will then consider whether models that posit simultaneous, immediate learning at multiple levels of abstraction can accommodate these more complex patterns.

Seminar · Neuroscience · Recording

Norse: A library for gradient-based learning in Spiking Neural Networks

Jens Egholm Pedersen
KTH Royal Institute of Technology
Nov 2, 2021

We introduce Norse: an open-source library for gradient-based training of spiking neural networks. In contrast to neuron simulators, which mainly target computational neuroscientists, our library seamlessly integrates with the existing PyTorch ecosystem using abstractions familiar to the machine learning community. This has immediate benefits in that it provides a familiar interface, hardware accelerator support and, most importantly, the ability to use gradient-based optimization. While many parallel efforts in this direction exist, Norse emphasizes flexibility and usability in three ways. Users can conveniently specify feed-forward (convolutional) architectures, as well as arbitrarily connected recurrent networks. We strictly adhere to a functional and class-based API such that neuron primitives and, for example, plasticity rules compose. Finally, the functional core API ensures compatibility with the PyTorch JIT and ONNX infrastructure. We have made progress towards supporting network execution on the SpiNNaker platform and plan to support other neuromorphic architectures in the future. While the library is useful in its present state, it also has limitations we will address in ongoing work. In particular, we aim to implement event-based gradient computation using the EventProp algorithm, which will allow us to support sparse event-based data efficiently, as well as work towards support of more complex neuron models. With this library, we hope to contribute to a joint future of computational neuroscience and neuromorphic computing.
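Norse's own API is not reproduced here. As a library-agnostic sketch of what gradient-based SNN training must differentiate through, the code below steps a leaky integrate-and-fire neuron and evaluates a surrogate derivative in place of the spike function's undefined gradient (all constants are illustrative assumptions):

```python
import numpy as np

def lif_step(v, x, tau=0.9, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Returns the new membrane potential and a binary spike indicator."""
    v = tau * v + x
    spike = float(v >= v_th)
    return v * (1.0 - spike), spike  # reset to 0 on spike

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Fast-sigmoid surrogate for d(spike)/d(v): a smooth stand-in for the
    undefined derivative of the Heaviside spike function."""
    return 1.0 / (1.0 + beta * abs(v - v_th)) ** 2

# A constant drive makes the neuron fire regularly (here, every 4th step)
v, n_spikes = 0.0, 0
for _ in range(100):
    v, s = lif_step(v, 0.3)
    n_spikes += int(s)
print(n_spikes)  # 25
```

Libraries like Norse wrap dynamics of this shape in autograd-aware modules, so the surrogate derivative is substituted automatically during the backward pass while the forward pass emits genuine binary spikes.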

Seminar · Neuroscience · Recording

Achieving Abstraction: Early Competence & the Role of the Learning Context

Caren Walker
University of California, San Diego
Jul 14, 2021

Children's emerging ability to acquire and apply relational same-different concepts is often cited as a defining feature of human cognition, providing the foundation for abstract thought. Yet, young learners often struggle to ignore irrelevant surface features to attend to structural similarity instead. I will argue that young children have, and retain, genuine relational concepts from a young age, but tend to neglect abstract similarity due to a learned bias to attend to objects and their properties. Critically, this account predicts that differences in the structure of children's environmental input should lead to differences in the type of hypotheses they privilege and apply. I will review empirical support for this proposal that has (1) evaluated the robustness of early competence in relational reasoning, (2) identified cross-cultural differences in relational and object bias, and (3) provided evidence that contextual factors play a causal role in relational reasoning. Together, these studies suggest that the development of abstract thought may be more malleable and context-sensitive than initially believed.

Seminar · Neuroscience · Recording

Higher cognitive resources for efficient learning

Aurelio Cortese
ATR
Jun 17, 2021

A central issue in reinforcement learning (RL) is the ‘curse of dimensionality’, arising when the degrees of freedom are much larger than the number of training samples. In such circumstances, the learning process becomes too slow to be plausible. In the brain, higher cognitive functions (such as abstraction or metacognition) may be part of the solution by generating low-dimensional representations on which RL can operate. In this talk I will discuss a series of studies in which we used functional magnetic resonance imaging (fMRI) and computational modeling to investigate the neuro-computational basis of efficient RL. We found that people can learn remarkably complex task structures non-consciously, but also that, intriguingly, metacognition appears tightly coupled to this learning ability. Furthermore, when people use an explicit (conscious) policy to select relevant information, learning is accelerated by abstractions. At the neural level, prefrontal cortex subregions are differentially involved in separate aspects of learning: dorsolateral prefrontal cortex pairs with metacognitive processes, while ventromedial prefrontal cortex pairs with valuation and abstraction. I will discuss the implications of these findings, in particular new questions on the function of metacognition in adaptive behavior and its link with abstraction.
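As a toy illustration of the dimensionality argument (not the study's actual model; the task, abstraction, and learning rate are all invented for the example), value learning over an abstracted state space can solve a task whose raw state space is ten times larger:

```python
import numpy as np

# Raw states: positions on a 10x10 grid, but only the row determines reward.
# Abstraction maps each raw state to its row, shrinking the value table tenfold.
n_rows, n_cols, n_actions = 10, 10, 2

def abstract(row, col):
    return row  # hypothetical low-dimensional representation

Q = np.zeros((n_rows, n_actions))  # values over abstract states only
alpha = 0.5

for _ in range(5):  # a few sweeps over the whole raw state space
    for row in range(n_rows):
        for col in range(n_cols):
            s = abstract(row, col)
            for a in range(n_actions):
                # action 1 pays off only in the top half of the grid
                r = 1.0 if (a == 1 and row < 5) else 0.0
                Q[s, a] += alpha * (r - Q[s, a])  # one-step value update

# The learned values prefer action 1 exactly in the rewarded rows
print((Q[:5, 1] > Q[:5, 0]).all())  # True
```

Because experience from all ten columns of a row updates the same abstract entry, each value estimate receives ten times the data it would get in the raw space; that pooling is the sample-efficiency benefit the talk attributes to abstraction.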

Seminar · Neuroscience

Abstraction and inference in the prefrontal hippocampal circuitry

Tim Behrens
Oxford University, UK
May 2, 2021
Seminar · Neuroscience · Recording

Structure-mapping in Human Learning

Dedre Gentner
Northwestern University
Apr 1, 2021

Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.

Seminar · Neuroscience

Abstraction and Inference in the Prefrontal Hippocampal Circuitry

Tim Behrens
Oxford University
Mar 17, 2021

The cellular representations and computations that allow rodents to navigate in space have been described with beautiful precision. In this talk, I will show that some of these same computations can be found in humans doing tasks that appear very different from spatial navigation. I will describe some theory that allows us to think about spatial and non-spatial problems in the same framework, and I will try to use this theory to give a new perspective on the beautiful spatial computations that inspired it. The overall goal of this work is to find a framework where we can talk about complicated non-spatial inference problems with the same precision that is only currently available in space.

Seminar · Neuroscience · Recording

Evaluating different facets of category status for promoting spontaneous transfer

Sean Snoddy
Binghamton University
Nov 16, 2020

Existing accounts of analogical transfer highlight the importance of comparison-based schema abstraction in aiding retrieval of relevant prior knowledge from memory. In this talk, we discuss an alternative view, the category status hypothesis (which states that if knowledge of a target principle is represented as a relational category, it is easier to activate by categorizing than by cue-based reminding), and briefly review supporting evidence. We then further investigate this hypothesis by designing study tasks that promote different facets of category-level representations and assess their impact on spontaneous analogical transfer. A Baseline group compared two analogous cases; the remaining groups experienced comparison plus another task intended to affect the category status of the knowledge representation. The Intension group read an abstract statement of the principle with a supporting task of generating a new case. The Extension group read two more positive cases with the task of judging whether each exemplified the target principle. The Mapping group read a contrast case with the task of revising it into a positive example of the target principle (thereby providing practice moving in both directions between type and token, i.e., evaluating a given case relative to knowledge and using knowledge to generate a revised case). The results demonstrated that both the Intension and Extension groups showed transfer improvements over Baseline (with the former demonstrating both improved accessibility of prior knowledge and improved ability to apply relational concepts). Implications for theories of analogical transfer are discussed.

Seminar · Neuroscience · Recording

The geometry of abstraction in hippocampus and pre-frontal cortex

Stefano Fusi
Columbia University
Oct 15, 2020

The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. Here we characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
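The operational definition above can be sketched in a few lines: train a decoder for one variable on a subset of conditions and test it on conditions it never saw. The data below are synthetic (an idealized "abstract" geometry with arbitrary sizes and noise), not the recorded ensembles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two binary task variables encoded along fixed population axes
n_neurons, n_trials = 50, 40
axis_a = rng.standard_normal(n_neurons)
axis_b = rng.standard_normal(n_neurons)

def trials(a, b):
    """Population responses for one of the four conditions (a, b in {-1, +1})."""
    return a * axis_a + b * axis_b + 0.3 * rng.standard_normal((n_trials, n_neurons))

# Train a decoder for variable "a" using only the b = -1 conditions...
train_pos, train_neg = trials(+1, -1), trials(-1, -1)
w = train_pos.mean(axis=0) - train_neg.mean(axis=0)  # mean-difference readout

# ...and test it on the held-out b = +1 conditions (cross-condition generalization)
test_pos, test_neg = trials(+1, +1), trials(-1, +1)
acc = np.mean(np.concatenate([test_pos @ w > 0, test_neg @ w < 0]))
print(acc)  # near 1.0, because the two variables occupy separate axes
```

If the four conditions instead sat at arbitrary (entangled) positions, the same decoder would fall to chance on the held-out conditions; that gap is what the generalization measure quantifies.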

Seminar · Neuroscience · Recording

Abstraction and Analogy in Natural and Artificial Intelligence

Lindsey Richland
University of California, Irvine
Oct 7, 2020

Learning by analogy is a powerful tool in children’s developmental repertoire, as well as in educational contexts such as mathematics, where the key knowledge base involves building flexible schemas. However, noticing and learning from analogies develops over time and is cognitively resource intensive. I review studies that provide insight into the mechanisms driving children’s developing analogy skills, highlighting environmental inputs (parent talk and prior experiences priming attention to relations) and neuro-cognitive factors (executive functions (EFs) and brain injury). I then note implications for mathematics learning, reviewing experimental findings showing that analogy can improve learning, but also that both individual differences in EFs and environmental factors that reduce available EFs, such as performance pressure, can predict student learning.

Seminar · Neuroscience · Recording

Is Rule Learning Like Analogy?

Stella Christie
Tsinghua University
Jul 15, 2020

Humans’ ability to perceive and abstract relational structure is fundamental to our learning. It allows us to acquire knowledge all the way from linguistic grammar to spatial knowledge to social structures. How does a learner begin to perceive structure in the world? Why do we sometimes fail to see structural commonalities across events? To begin to answer these questions, I attempt to bridge two large, yet somewhat separate research traditions in understanding humans’ structural abstraction: rule learning (Marcus et al., 1999) and analogical learning (Gentner, 1989). On the one hand, rule learning research has shown that humans, as early as 7 months of age, readily abstract structure from limited experience in a domain-general way. On the other hand, work on analogical learning has shown robust constraints on structural abstraction: young learners prefer object similarity over relational similarity. To understand this seeming paradox between ease and difficulty, we conducted a series of studies using the classic rule learning paradigm (Marcus et al., 1999) but with an analogical (object vs. relation) twist. Adults were presented with 2-minute streams of sentences or events (syllables or shapes) containing a rule. At test, they had to choose between rule abstraction and object matches: the same syllable or shape they saw before. Surprisingly, while in the absence of object matches adults were perfectly capable of abstracting the rule, their ability to do so declined sharply when object matches were present. Our initial results suggest that rule learning may be subject to the usual constraints and signatures of analogical learning: a preference for object similarity can dampen rule generalization. Human abstraction is thus concrete at the same time.

Seminar · Neuroscience · Recording

Analogical Reasoning and Executive Functions - A Life Span Approach

Jean-Pierre Thibaut
University of Burgundy
Jul 8, 2020

From a developmental standpoint, it has been argued that two major complementary factors contribute to the development of analogy comprehension: world knowledge and executive functions. Here I will provide evidence in support of the second view. Beyond paradigms that manipulate task difficulty (e.g., number and types of distractors and semantic distance between domains) we will provide eye-tracking data that describes differences in the way children and adults compare the base and target domains in analogy problems. We will follow the same approach with ageing people. This latter population provides a unique opportunity to disentangle the contribution of knowledge and executive processes in analogy making since knowledge is (more than) preserved and executive control is decreasing. Using this paradigm, I will show the extent to which world knowledge (assessed through vocabulary) compensates for decreasing executive control in older populations. Our eye-tracking data suggests that, to a certain extent, differences between younger and older adults are analogous to the differences between younger adults and children in the way they compare the base and the target domains in analogy problems.

Seminar · Neuroscience · Recording

The geometry of abstraction in artificial and biological neural networks

Stefano Fusi
Columbia University
Jun 10, 2020

The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.

ePoster

AutSim: Principled, data driven model development and abstraction for signaling in synaptic protein synthesis in Fragile X Syndrome (FXS) and healthy control.

COSYNE 2022

ePoster

Learning parsimonious dynamics for state abstraction and navigation

Tankred Saanum & Eric Schulz

COSYNE 2023

ePoster

Visual representation of different levels of abstraction along the mouse visual hierarchy

Benjie Miao, Peng Jiang, Joshua H. Siegle, Shailaja Akella, Peter Ledochowitsch, Hannah Belski, Severine Durand, Shawn R. Olsen, Xiaoxuan Jia

COSYNE 2023

ePoster

The role of sleep in the abstraction of spatial context memory in toddlers

Lisa-Marie Bastian, Eva-Maria Kurz, Hannes Noack, Jan Born

FENS Forum 2024