
Adaptive Learning

Topic spotlight

adaptive learning

Discover seminars, jobs, and research tagged with adaptive learning across World Wide.
10 curated items: 6 Seminars · 3 ePosters · 1 Position
Updated 2 days ago
Position

Prof. Jason Corso

University of Michigan
Dec 5, 2025

The Corso Group (COG) at the University of Michigan is recruiting 2-3 talented, self-motivated, and creative PhD students for the 2024-2025 academic year. Led by Prof. Jason Corso, COG has been pioneering advances in physical AI and visual AI for the last two decades. We've contributed seminal work in areas such as machine learning foundations, video understanding (including the first paper on video captioning), human-in-the-loop computer vision, and interactive physical systems.

Seminar · Neuroscience · Recording

Do large language models solve verbal analogies like children do?

Claire Stevenson
University of Amsterdam
Nov 16, 2022

Analogical reasoning – learning about new things by relating them to previous knowledge – lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress as they move from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?”, they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, giving associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?”, they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children's responses and those of six large language models (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) on verbal analogies extracted from an online adaptive learning environment, in which more than 14,000 7- to 12-year-olds from the Netherlands each solved 20 or more items from a database of 900 Dutch-language verbal analogies.
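To make the task concrete, here is a minimal sketch of the vector-offset baseline that word-embedding models such as Word2Vec and FastText apply to items like “body is to feet as tree is to …?”. The embeddings, vocabulary, and helper function below are toy values invented for illustration, not the study's models or data.

```python
# Illustrative sketch (not the authors' code): the classic vector-offset method
# for "A is to B as C is to ?". Embeddings here are invented toy values.
import numpy as np

# Hypothetical toy embeddings; in the study these would come from pretrained
# Dutch/multilingual models such as Word2Vec or FastText.
emb = {
    "body":   np.array([0.9, 0.1, 0.0]),
    "feet":   np.array([0.7, 0.3, 0.1]),
    "tree":   np.array([0.1, 0.9, 0.0]),
    "roots":  np.array([0.0, 1.0, 0.2]),   # intended relational answer
    "leaves": np.array([0.2, 0.8, 0.6]),   # associative distractor
}

def solve_analogy(a, b, c, vocab):
    """Return the word whose vector is closest (by cosine) to  b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(solve_analogy("body", "feet", "tree", emb))  # -> "roots" with these toy vectors
```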

Seminar · Neuroscience

Lifelong Learning AI via neuro-inspired solutions

Hava Siegelmann
University of Massachusetts Amherst
Oct 26, 2022

AI embedded in real systems, such as satellites, robots, and other autonomous devices, must make fast, safe decisions even when the environment changes or the available power is limited; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity: the AI must be trained in advance, typically on a large dataset and with considerable computational power, and once fielded it is frozen. It cannot use its experience to operate when the environment falls outside its training, nor improve its expertise; worse, since datasets cannot cover all possible real-world situations, systems under such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn at runtime and apply that learning in new, unanticipated situations. Until recently, this sort of computation has been found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular to neuroscience, for its underlying principles and mechanisms, and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to achieving AI adaptive learning, including from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. One way to create adaptive systems that are both small and robust is to make them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.
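As a concrete illustration of the frozen-versus-adaptive contrast described above, the sketch below compares a pre-trained predictor left frozen after deployment with one that keeps taking online gradient steps while the environment drifts. It is a toy example under invented assumptions, not an approach from the L2M program.

```python
# Minimal illustrative sketch (not from the talk): a frozen predictor vs. one
# that keeps adapting online after deployment, on a stream whose statistics
# drift over time. All names and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def stream(t):
    """Hypothetical sensor stream whose input-output mapping shifts at t = 500."""
    x = rng.normal(size=2)
    true_w = np.array([1.0, -0.5]) if t < 500 else np.array([-0.2, 1.5])
    return x, true_w @ x + 0.05 * rng.normal()

w_frozen = np.array([1.0, -0.5])   # trained offline on the pre-shift regime, then frozen
w_online = w_frozen.copy()         # same start, but updated at runtime
lr = 0.05

err_frozen = err_online = 0.0
for t in range(1000):
    x, y = stream(t)
    err_frozen += (w_frozen @ x - y) ** 2
    pred = w_online @ x
    err_online += (pred - y) ** 2
    w_online -= lr * (pred - y) * x   # one online gradient step per sample

print(f"frozen MSE: {err_frozen / 1000:.3f}")   # degrades after the environment shift
print(f"online MSE: {err_online / 1000:.3f}")   # stays low by adapting in runtime
```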

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, yet they rely on very simple heuristics to choose which teaching material to present. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on models of the learner's inner characteristics. For teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application illustrates how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as part of the learner model.
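As a toy illustration of a learner memory model of the kind mentioned here, the sketch below scores each vocabulary item with an exponential forgetting curve and picks the item most likely to have been forgotten. The half-lives, words, and doubling-on-success rule are invented assumptions, not the speaker's model.

```python
# Illustrative sketch (not Murena's model): a simple exponential-forgetting
# learner model that a vocabulary tutor could use to pick the next item.

def recall_probability(hours_since_review, half_life_hours):
    """P(recall) under an exponential forgetting curve with the given half-life."""
    return 2 ** (-hours_since_review / half_life_hours)

# Hypothetical learner state: per-word memory half-life (hours) and recency.
words = {
    "huis":  {"half_life": 48.0, "last_seen_hours_ago": 20.0},
    "fiets": {"half_life": 6.0,  "last_seen_hours_ago": 12.0},
    "kaas":  {"half_life": 24.0, "last_seen_hours_ago": 30.0},
}

# Review the word the learner is most likely to have forgotten.
next_word = min(words, key=lambda w: recall_probability(
    words[w]["last_seen_hours_ago"], words[w]["half_life"]))
print("review next:", next_word)   # -> "fiets" with these toy values

# Invented update rule: after a successful review, the half-life doubles.
words[next_word]["half_life"] *= 2
words[next_word]["last_seen_hours_ago"] = 0.0
```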

Seminar · Neuroscience · Recording

Learning and updating structured knowledge

Oded Bein
Niv lab, Princeton University
Oct 5, 2021

During our everyday lives, much of what we experience is familiar and predictable. We typically follow the same morning routine, take the same route to work, and encounter the same colleagues. However, every once in a while, we encounter a surprising event that violates our expectations. When this happens, it is adaptive to update our internal model of the world in order to make better predictions in the future. The hippocampus is thought to support both the learning of the predictable structure of our environment and the detection and encoding of violations. However, the hippocampus is a complex and heterogeneous structure, composed of different subfields that are thought to subserve different functions. As such, it is not yet known how the hippocampus accomplishes the learning and updating of structured knowledge. Using behavioral methods and high-resolution fMRI, I'll show that during learning of repeated and predicted events, hippocampal subfields differentially integrate and separate event representations, thus learning the structure of ongoing experience. I will then discuss how, when events violate our predictions, there is a shift in communication between hippocampal subfields, potentially allowing for efficient encoding of the novel and surprising information. If time permits, I'll present an additional behavioral study showing that violations of predictions promote detailed memories. Together, these studies advance our understanding of how we adaptively learn and update our knowledge.

Seminar · Neuroscience · Recording

A role for dopamine in value-free learning

Luke Coddington
Dudman lab, HHMI Janelia
Jul 13, 2021

Recent success in training artificial agents and robots derives from a combination of direct learning of behavioral policies and indirect learning via value functions. Policy learning and value learning employ distinct algorithms that depend upon evaluation of errors in performance and reward prediction errors, respectively. In mammals, behavioral learning and the role of mesolimbic dopamine signaling have been extensively evaluated with respect to reward prediction errors; but there has been little consideration of how direct policy learning might inform our understanding. I’ll discuss our recent work on classical conditioning in naïve mice (https://www.biorxiv.org/content/10.1101/2021.05.31.446464v1) that provides multiple lines of evidence that phasic dopamine signaling regulates policy learning from performance errors in addition to its well-known roles in value learning. This work points towards new opportunities for unraveling the mechanisms of basal ganglia control over behavior under both adaptive and maladaptive learning conditions.
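For readers who want the two error signals side by side, the sketch below contrasts a value update driven by a reward prediction error with a policy update driven by a performance error (here, a hypothetical anticipatory-response latency). All quantities are invented for illustration and are not taken from the paper.

```python
# Minimal illustrative sketch (not the paper's model) of the two error signals
# contrasted in the abstract: value learning from a reward prediction error,
# and policy learning from a performance error. All numbers are invented.

alpha_v, alpha_p = 0.1, 0.1
V = 0.0            # learned value of the cue
latency = 2.0      # current policy: respond 2 s after the cue
reward_time = 1.0  # reward actually arrives 1 s after the cue

for trial in range(200):
    reward = 1.0                               # deterministic reward each trial
    rpe = reward - V                           # reward prediction error
    V += alpha_v * rpe                         # value learning

    performance_error = latency - reward_time  # responded too late / too early
    latency -= alpha_p * performance_error     # policy learning: shift response timing

print(f"learned cue value: {V:.2f}")           # approaches 1.0
print(f"learned latency:   {latency:.2f} s")   # approaches 1.0 s
```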

Seminar · Neuroscience · Recording

Recurrent network models of adaptive and maladaptive learning

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Apr 7, 2020

During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data at two levels simultaneously: brain-wide neural dynamics from 10-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within Hb activity that captured communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, the RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: the early response to stress is mediated by reshaping of neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
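As a rough illustration of the analysis described above, the sketch below takes hypothetical inferred interaction weights, computes the input current the raphe delivers to each Hb unit over time, and extracts a low-dimensional subspace of those currents. Shapes, rates, and names are invented assumptions, not the authors' data or pipeline.

```python
# Illustrative sketch (not the authors' pipeline): from multi-region RNN
# interaction weights to per-region input currents and a communication subspace.
import numpy as np

rng = np.random.default_rng(0)

n_hb, n_raphe, n_time = 50, 40, 200
W_raphe_to_hb = rng.normal(scale=0.1, size=(n_hb, n_raphe))  # inferred interaction weights
r_raphe = rng.normal(size=(n_raphe, n_time))                 # raphe firing rates over time

# Input current from the raphe into each Hb unit at each time point.
I_raphe_to_hb = W_raphe_to_hb @ r_raphe                      # shape (n_hb, n_time)

# Principal subspace of these currents across Hb units (top 3 components).
I_centered = I_raphe_to_hb - I_raphe_to_hb.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(I_centered, full_matrices=False)
subspace = U[:, :3]                                          # directions in Hb space driven by raphe

# Project Hb activity onto this subspace to track raphe-to-Hb communication.
r_hb = rng.normal(size=(n_hb, n_time))                       # placeholder Hb rates
projection = subspace.T @ r_hb                               # shape (3, n_time)
print(projection.shape)
```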

ePoster

Mood as an Extrapolation Engine for Adaptive Learning & Decision-Making

Veronica Chelu, Doina Precup

COSYNE 2025

ePoster

Raphe nucleus function in aversive valence processing between adaptive learning and social defeat in zebrafish

Hsi Chen, Ting-Yu Kan, Ming-Yi Chou

FENS Forum 2024

ePoster

The role of striatal neuromodulatory signals in adaptive learning of action value

Chiara Toschi, Matthias Fritsche, Olena Didenko, Carl Lindersson, Samuel Liebana-Garcia, Armin Lak

FENS Forum 2024