Learning Model

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with learning model across World Wide.
28 curated items: 18 Seminars, 10 ePosters
Updated 5 months ago
Seminar · Psychology

Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives

Lukasz Piwek
University of Bath & Cumulus Neuroscience Ltd
Jul 6, 2025

Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilisation. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.
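
As a rough illustration of the kind of pipeline such work implies (not the authors' code), a standard classifier can be trained on per-user digital-trace features to predict offline attendance; the feature names and data below are synthetic placeholders.

```python
# Hypothetical sketch: predict offline protest attendance from digital-trace
# features with a standard classifier. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 1000

# Illustrative per-user signals (e.g., posting rate, likes received,
# tribal-hashtag share); real studies would extract these from platform data.
X = rng.normal(size=(n_users, 3))
# Synthetic label: attended a protest, loosely driven by social validation.
logits = 1.5 * X[:, 1] + 0.5 * X[:, 0] - 0.2
y = (rng.random(n_users) < 1 / (1 + np.exp(-logits))).astype(int)

clf = LogisticRegression()
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```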

Seminar · Neuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 11, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
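
A loose numpy sketch of the PCL idea, not the published model: lateral inhibitory weights are learned so that each unit's most predictable drive is cancelled, and only the residual (unpredicted) activity produces spikes to pass upward. All parameters and the input statistics are illustrative.

```python
# Toy sketch: learned lateral inhibition suppresses predictable spikes.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 20, 5000
theta, lr = 1.0, 0.01

W_lat = np.zeros((n_units, n_units))      # learned lateral inhibition
prev_spikes = np.zeros(n_units)

for t in range(n_steps):
    shared = rng.normal()                 # predictable, shared input component
    drive = shared + 0.3 * rng.normal(size=n_units)
    inhibition = W_lat @ prev_spikes      # prediction from recent activity
    residual = drive - inhibition
    spikes = (residual > theta).astype(float)
    # Grow inhibition between co-active units, so the most predictable
    # spikes are progressively suppressed (only the residual is transmitted).
    W_lat += lr * np.outer(spikes, prev_spikes)
    np.fill_diagonal(W_lat, 0.0)
    prev_spikes = spikes
```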

Seminar · Psychology

Automated generation of face stimuli: Alignment, features and face spaces

Carl Gaspar
Zayed University (UAE)
Jan 31, 2023

I describe a well-tested Python module that does automated alignment and warping of face images, and discuss some of its advantages over existing solutions. An additional tool I’ve developed does automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
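
The alignment step such a module performs can be sketched generically (this is not the module's actual code): given landmarks already detected by a detector such as dlib or MediaPipe, estimate a similarity transform onto canonical template positions and warp the image with OpenCV.

```python
# Minimal landmark-based face alignment sketch with OpenCV.
import numpy as np
import cv2

img = np.zeros((256, 256, 3), dtype=np.uint8)    # placeholder face image

# Detected landmarks (left eye, right eye, nose tip); in practice these come
# from a landmark detector, omitted here.
src = np.float32([[90, 110], [170, 105], [130, 160]])
# Canonical template positions every face is aligned to.
dst = np.float32([[88, 100], [168, 100], [128, 155]])

M, _ = cv2.estimateAffinePartial2D(src, dst)     # rotation + scale + shift
aligned = cv2.warpAffine(img, M, (256, 256))
```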

Seminar · Neuroscience · Recording

No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

Rylan Schaeffer
Fiete lab, MIT
Nov 1, 2022

Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as tools but also interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.
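
The sweep methodology can be sketched schematically; the configuration axes below are examples of the kind of implementation choices at issue, and the training/scoring function is a placeholder, not the authors' code.

```python
# Skeleton of a large-scale hyperparameter sweep over implementation choices.
from itertools import product

sweep = {
    "readout": ["place_cells", "cartesian"],   # illustrative axes only
    "activation": ["relu", "tanh"],
    "tuning_width": [0.12, 0.5, 2.0],
    "weight_decay": [0.0, 1e-4],
}

def train_and_grid_score(config):
    # Placeholder: train a path-integrating RNN with this config and return
    # the fraction of units whose ratemaps pass a grid-score criterion.
    return 0.0

results = {
    tuple(values): train_and_grid_score(dict(zip(sweep, values)))
    for values in product(*sweep.values())
}
```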

Seminar · Neuroscience · Recording

Implementing structure mapping as a prior in deep learning models for abstract reasoning

Shashank Shekhar
University of Guelph
Mar 2, 2022

Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
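
A hedged PyTorch skeleton of the two-stage idea, with illustrative modules and dimensions rather than the paper's architecture: stage (1) encodes a relation vector from source-domain panels, stage (2) scores target-domain candidates against that relation.

```python
import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, d), nn.ReLU())
        self.relation_head = nn.Linear(2 * d, d)

    def forward(self, panel_a, panel_b):
        ha, hb = self.backbone(panel_a), self.backbone(panel_b)
        return self.relation_head(torch.cat([ha, hb], dim=-1))  # relation vector

class AnalogyInference(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.score = nn.Linear(2 * d, 1)

    def forward(self, relation, target_embedding):
        return self.score(torch.cat([relation, target_embedding], dim=-1))

enc, inf = RelationEncoder(), AnalogyInference()
a, b = torch.rand(8, 1, 32, 32), torch.rand(8, 1, 32, 32)
rel = enc(a, b)                        # structure isolated from source domain
score = inf(rel, torch.rand(8, 64))    # reused to score target-domain candidates
```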

Seminar · Neuroscience · Recording

A reward-learning framework of knowledge acquisition

Kou Murayama
Tübingen University
Jun 17, 2021

Recent years have seen a considerable surge of research on interest-based engagement, examining how and why people are engaged in activities without relying on extrinsic rewards. However, the field of inquiry has been somewhat segregated into three different research traditions which have been developed relatively independently --- research on curiosity, interest, and trait curiosity/interest. The current talk sets out an integrative perspective: the reward-learning framework of knowledge acquisition. This conceptual framework takes on the basic premise of existing reward-learning models of information seeking: that knowledge acquisition serves as an inherent reward, which reinforces people’s information-seeking behavior through a reward-learning process. However, the framework reveals how the knowledge-acquisition process is sustained and boosted over a long period of time in real-life settings, allowing us to integrate the different research traditions within reward-learning models. The framework also characterizes the knowledge-acquisition process with four distinct features that are not present in the reward-learning process with extrinsic rewards --- (1) cumulativeness, (2) selectivity, (3) vulnerability, and (4) under-appreciation. The talk describes some evidence from our lab supporting these claims.
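
The basic premise can be sketched in a few lines: treat knowledge gain itself as the reward in a standard reward-learning update, so information seeking is reinforced by what it teaches. The diminishing-gain function and parameters below are illustrative assumptions, not part of the framework.

```python
# Toy sketch: knowledge gain acts as the reward in a reward-learning update.
import numpy as np

rng = np.random.default_rng(2)
n_topics, alpha = 5, 0.1
value = np.zeros(n_topics)        # learned value of seeking each topic
knowledge = np.zeros(n_topics)    # accumulated knowledge per topic

for _ in range(500):
    topic = int(np.argmax(value + rng.normal(0, 0.1, n_topics)))  # noisy greedy
    gain = 1.0 / (1.0 + knowledge[topic])   # diminishing knowledge gain
    knowledge[topic] += gain                # cumulativeness
    value[topic] += alpha * (gain - value[topic])  # reward-learning update
```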

Seminar · Neuroscience

Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Neural Dynamics & Computation Lab, Stanford University
Jun 16, 2021

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
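
Point (1) can be illustrated with a minimal unsupervised tensor component analysis on a synthetic neurons × time × trials array, here via tensorly's CP decomposition; the data, rank, and sizes are made up for the example.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(3)
n_neurons, n_time, n_trials, rank = 40, 100, 60, 3

# Synthetic low-rank data: each component couples a neuron loading, a
# within-trial time course, and a slow across-trial amplitude (e.g., learning).
neuron_f, time_f, trial_f = (rng.random((d, rank)) for d in (n_neurons, n_time, n_trials))
data = np.einsum("ir,jr,kr->ijk", neuron_f, time_f, trial_f)
data += 0.01 * rng.normal(size=data.shape)

# Unsupervised CP decomposition; the recovered trial factors describe how each
# component's amplitude evolves over trials.
weights, factors = parafac(tl.tensor(data), rank=rank)
```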

Seminar · Neuroscience · Recording

A reward-learning framework of knowledge acquisition: How we can integrate the concepts of curiosity, interest, and intrinsic-extrinsic rewards

Kou Murayama
Tübingen University
Jun 10, 2021

Recent years have seen a considerable surge of research on interest-based engagement, examining how and why people are engaged in activities without relying on extrinsic rewards. However, the field of inquiry has been somewhat segregated into three different research traditions which have been developed relatively independently -- research on curiosity, interest, and trait curiosity/interest. The current talk sets out an integrative perspective: the reward-learning framework of knowledge acquisition. This conceptual framework takes on the basic premise of existing reward-learning models of information seeking: that knowledge acquisition serves as an inherent reward, which reinforces people’s information-seeking behavior through a reward-learning process. However, the framework reveals how the knowledge-acquisition process is sustained and boosted over a long period of time in real-life settings, allowing us to integrate the different research traditions within reward-learning models. The framework also characterizes the knowledge-acquisition process with four distinct features that are not present in the reward-learning process with extrinsic rewards -- (1) cumulativeness, (2) selectivity, (3) vulnerability, and (4) under-appreciation. The talk describes some evidence from our lab supporting these claims.

Seminar · Neuroscience · Recording

Transforming task representations

Andrew Lampinen
DeepMind
May 12, 2021

Humans can adapt to a novel task on their first try. By contrast, artificial intelligence systems often require immense amounts of data to adapt. In this talk, I will discuss my recent work (https://www.pnas.org/content/117/52/32970) on creating deep learning systems that can adapt on their first try by exploiting relationships between tasks. Specifically, the approach is based on transforming a representation for a known task to produce a representation for the novel task, by inferring and then using a higher order function that captures a relationship between the tasks. This approach can be interpreted as a type of analogical reasoning. I will show that task transformation can allow systems to adapt to novel tasks on their first try in domains ranging from card games, to mathematical objects, to image classification and reinforcement learning. I will discuss the analogical interpretation of this approach, an analogy between levels of abstraction within the model architecture that I refer to as homoiconicity, and what this work might suggest about using deep-learning models to infer analogies more generally.
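
A minimal PyTorch sketch of the transformation idea, with illustrative sizes rather than the paper's architecture: a small higher-order network maps the embedding of a known task to an embedding for a related novel task (e.g., "try to lose" from "try to win"), which the same task-conditioned model can then consume zero-shot.

```python
import torch
import torch.nn as nn

d = 128
task_rep = torch.rand(1, d)          # learned embedding of a known task

transform = nn.Sequential(           # higher-order function over task embeddings
    nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d)
)
novel_task_rep = transform(task_rep) # zero-shot representation of the novel
                                     # task, fed to the task-conditioned policy
```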

Seminar · Neuroscience

A machine learning way to analyse white matter tractography streamlines / Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI

Dr Shenjun Zhong and Dr Kamlesh Pawar
Monash Biomedical Imaging
Mar 10, 2021

1. Embedding is all you need: A machine learning way to analyse white matter tractography streamlines (Dr Shenjun Zhong, Monash Biomedical Imaging). Embedding white matter streamlines of various lengths into fixed-length latent vectors enables users to analyse them with general data mining techniques. However, finding a good embedding schema is still a challenging task, as existing methods based on spatial coordinates rely on manually engineered features and/or labelled datasets. In this webinar, Dr Shenjun Zhong will discuss his novel deep learning model that identifies a latent space and solves the problem of streamline clustering without needing labelled data. Dr Zhong is a Research Fellow and Informatics Officer at Monash Biomedical Imaging. His research interests are sequence modelling, reinforcement learning and federated learning in the general medical imaging domain.

2. Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI (Dr Kamlesh Pawar, Monash Biomedical Imaging). Magnetic Resonance Imaging (MRI) is a widely used imaging modality in clinics and research. Although MRI is useful, it comes with the overhead of longer scan times compared to other medical imaging modalities. Longer scan times make patients uncomfortable, and even subtle movements during the scan may result in severe motion artifacts in the images. In this seminar, Dr Kamlesh Pawar will discuss how artificial intelligence techniques can reduce scan time and correct motion artifacts. Dr Pawar is a Research Fellow at Monash Biomedical Imaging. His research interests include deep learning, MR physics, MR image reconstruction and computer vision.
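
For the first talk, the embedding step might look like the following illustrative PyTorch sketch (not Dr Zhong's model): a recurrent encoder maps a variable-length streamline (a sequence of 3D points) to a fixed-length latent vector that standard clustering can consume.

```python
import torch
import torch.nn as nn

class StreamlineEncoder(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=latent, batch_first=True)

    def forward(self, points):          # points: (batch, n_points, 3)
        _, h = self.rnn(points)
        return h[-1]                    # fixed-length embedding

enc = StreamlineEncoder()
streamline = torch.rand(1, 57, 3)       # one streamline with 57 points
z = enc(streamline)                     # (1, 32) latent vector
# A matching decoder and reconstruction loss would complete an unsupervised
# autoencoder; k-means on z then clusters streamlines without labels.
```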

Seminar · Neuroscience · Recording

Cross Domain Generalisation in Humans and Machines

Leonidas Alex Doumas
The University of Edinburgh
Feb 3, 2021

Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.

Seminar · Neuroscience

The Spatial Memory Pipeline: a deep learning model of egocentric to allocentric understanding in mammalian brains

Benigno Uria
DeepMind
Jan 12, 2021

Seminar · Neuroscience

Generalization guided exploration

Charley Wu
Max Planck
Dec 15, 2020

How do people learn in real-world environments where the space of possible actions can be vast or even infinite? The study of human learning has made rapid progress over the past decades, from discovering the neural substrate of reward prediction errors to building AI capable of mastering the game of Go. Yet this line of research has primarily focused on learning through repeated interactions with the same stimuli. How are humans able to rapidly adapt to novel situations and learn from such sparse examples? I propose a theory of how generalization guides human learning, by making predictions about which unobserved options are most promising to explore. Inspired by Roger Shepard’s law of generalization, I show how a Bayesian function learning model provides a mechanism for generalizing limited experiences to a wide set of novel possibilities, based on the simple principle that similar actions produce similar outcomes. This model of generalization generates predictions about the expected reward and underlying uncertainty of unexplored options, where both are vital components in how people actively explore the world. This model allows us to explain developmental differences in the explorative behavior of children, and suggests a general principle of learning across spatial, conceptual, and structured domains.
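
The proposed mechanism maps naturally onto Gaussian process regression with an upper-confidence-bound choice rule: the kernel embodies "similar actions produce similar outcomes", and exploration weighs predicted reward against uncertainty. The sketch below uses scikit-learn with an RBF kernel; the data, kernel, and bonus parameter are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

options = np.linspace(0, 10, 100).reshape(-1, 1)    # 1D space of actions
X_seen = np.array([[2.0], [7.5]])                    # a few observed actions
y_seen = np.array([0.3, 0.9])                        # ...and their rewards

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_seen, y_seen)
mu, sigma = gp.predict(options, return_std=True)     # expected reward, uncertainty

beta = 0.5                                           # uncertainty bonus
next_option = options[np.argmax(mu + beta * sigma)]  # explore promising + uncertain
```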

Seminar · Neuroscience · Recording

Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Stanford University
Oct 15, 2020

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.

Seminar · Neuroscience · Recording

Synthesizing Machine Intelligence in Neuromorphic Computers with Differentiable Programming

Emre Neftci
University of California Irvine
Aug 30, 2020

The potential of machine learning and deep learning to advance artificial intelligence is driving a quest to build dedicated computers, such as neuromorphic hardware that emulate the biological processes of the brain. While the hardware technologies already exist, their application to real-world tasks is hindered by the lack of suitable programming methods. Advances at the interface of neural computation and machine learning showed that key aspects of deep learning models and tools can be transferred to biologically plausible neural circuits. Building on these advances, I will show that differentiable programming can address many challenges of programming spiking neural networks for solving real-world tasks, and help devise novel continual and local learning algorithms. In turn, these new algorithms pave the road towards systematically synthesizing machine intelligence in neuromorphic hardware without detailed knowledge of the hardware circuits.
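
One core ingredient of differentiable programming for spiking networks is the surrogate gradient: the spike threshold is non-differentiable, so the backward pass substitutes a smooth surrogate, letting standard autodiff train the network end to end. A minimal PyTorch sketch follows; the surrogate shape is one common choice, not necessarily the speaker's.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                        # hard threshold spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2  # smooth pseudo-derivative
        return grad_out * surrogate

spike = SurrogateSpike.apply
v = torch.randn(5, requires_grad=True)                # membrane potentials
spike(v).sum().backward()                             # gradients flow through spikes
```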

Seminar · Neuroscience

Reward foraging task and model-based analysis reveal how fruit flies learn the value of available options

Duda Kvitsiani
Aarhus University
Jul 28, 2020

Understanding what drives foraging decisions in animals requires careful manipulation of the value of available options while monitoring animal choices. Value-based decision-making tasks, in combination with formal learning models, have provided both an experimental and theoretical framework to study foraging decisions in lab settings. While these approaches have been successfully used to understand what drives choices in mammals, very little work has been done in fruit flies, even though fruit flies have served as a model organism for many complex behavioural paradigms. To fill this gap we developed a single-animal, trial-based decision-making task in which freely walking flies experienced optogenetic sugar-receptor neuron stimulation. We controlled the value of available options by manipulating the probabilities of optogenetic stimulation. We show that flies integrate the reward history of chosen options and forget the value of unchosen options. We further discover that flies assign higher values to rewards experienced early in the behavioural session, consistent with formal reinforcement learning models. Finally, we show that probabilistic rewards affect the walking trajectories of flies, suggesting that accumulated value controls the navigation vector of flies in a graded fashion. These findings establish the fruit fly as a model organism to explore the genetic and circuit basis of value-based decisions.
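
The reported learning rule resembles a Q-learning update with forgetting of unchosen options; a toy simulation of that idea, with illustrative parameters rather than fitted values:

```python
import numpy as np

rng = np.random.default_rng(4)
q = np.zeros(2)                    # values of the two options
alpha, phi, beta = 0.3, 0.05, 3.0  # learning rate, forgetting rate, inverse temp
p_reward = np.array([0.8, 0.2])    # optogenetic stimulation probabilities

for _ in range(300):
    p_choice = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax choice
    c = rng.choice(2, p=p_choice)
    r = float(rng.random() < p_reward[c])
    q[c] += alpha * (r - q[c])     # integrate reward history of chosen option
    q[1 - c] *= (1 - phi)          # forget value of the unchosen option
```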

Seminar · Neuroscience

Delineating Reward/Avoidance Decision Process in the Impulsive-compulsive Spectrum Disorders through a Probabilistic Reversal Learning Task

Xiaoliu Zhang
Monash University
Jul 18, 2020

Impulsivity and compulsivity are behavioural traits that underlie many aspects of decision-making and form the characteristic symptoms of Obsessive Compulsive Disorder (OCD) and Gambling Disorder (GD). The neural underpinnings of reward and avoidance learning under the expression of these traits and symptoms are only partially understood.

The present study combined behavioural modelling and neuroimaging techniques to examine brain activity associated with critical phases of reward and loss processing in OCD and GD.

Forty-two healthy controls (HC), forty OCD and twenty-three GD participants were recruited to complete a two-session reinforcement learning (RL) task featuring a "probability switch (PS)" during imaging. Ultimately, 39 HC (20F/19M, 34 ± 9.47 yrs), 28 OCD (14F/14M, 32.11 ± 9.53 yrs) and 16 GD (4F/12M, 35.53 ± 12.20 yrs) were included with both behavioural and imaging data available. Functional imaging was conducted using a 3.0 T Siemens MAGNETOM Skyra (syngo MR D13C) at Monash Biomedical Imaging. Each volume comprised 34 coronal slices of 3 mm thickness, with a TR of 2000 ms and a TE of 30 ms. A total of 479 volumes were acquired for each participant in each session in an interleaved-ascending manner.

The standard Q-learning model was fitted to the observed behavioural data, with Bayesian methods used for parameter estimation. Imaging analysis was conducted using SPM12 (Wellcome Department of Imaging Neuroscience, London, United Kingdom) in the Matlab (R2015b) environment. Pre-processing comprised slice timing, realignment, normalisation to MNI space based on the T1-weighted image, and smoothing with an 8 mm Gaussian kernel.

The frontostriatal circuit, including the putamen and medial orbitofrontal cortex (mOFC), was significantly more active in response to receiving reward and avoiding punishment than to receiving an aversive outcome and missing reward (p < 0.001, FWE cluster-corrected), while the right insula showed greater activation in response to missing rewards and receiving punishment. Compared to healthy participants, GD patients showed significantly lower activation in the left superior frontal gyrus and posterior cingulum for gain omission (p < 0.001).

The reward prediction error (PE) signal was positively correlated with activation in several clusters spanning cortical and subcortical regions, including the striatum, cingulate, bilateral insula, thalamus and superior frontal gyrus (p < 0.001, FWE cluster-corrected). GD patients showed a trend towards a decreased reward PE response in the right precentral gyrus extending to the left posterior cingulate compared to controls (p < 0.05, FWE-corrected).

The aversive PE signal was negatively correlated with brain activity in regions including the bilateral thalamus, hippocampus, insula and striatum (p < 0.001, FWE-corrected). Compared with controls, the GD group showed increased aversive PE activation in a cluster encompassing the right thalamus and right hippocampus, and in the right middle frontal gyrus extending to the right anterior cingulum (p < 0.005, FWE-corrected).

Through the reversal learning task, the study provides further support for dissociable brain circuits underlying distinct phases of reward and avoidance learning, and shows that OCD and GD are characterised by aberrant patterns of reward and avoidance processing.
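
A minimal sketch of the standard Q-learning model used in such probabilistic reversal tasks, shown here as a forward simulation with illustrative parameters (the study estimated parameters from behaviour with Bayesian methods rather than simulating):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 0.2, 4.0             # learning rate, softmax inverse temperature
q = np.zeros(2)
p_reward = np.array([0.8, 0.2])

for t in range(200):
    if t == 100:                   # probability switch (PS): contingencies reverse
        p_reward = p_reward[::-1]
    p_choice = np.exp(beta * q) / np.exp(beta * q).sum()
    c = rng.choice(2, p=p_choice)
    r = float(rng.random() < p_reward[c])
    q[c] += alpha * (r - q[c])     # reward prediction error update
```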

ePoster

How Do Bees See the World? A (Normative) Deep Reinforcement Learning Model for Insect Navigation

Stephan Lochner, Andrew Straw

Bernstein Conference 2024

ePoster

Linking tonic dopamine and biased value predictions in a biologically inspired reinforcement learning model

COSYNE 2022

ePoster

A predictive learning model for cognitive maps that generate replay

Daniel Levenstein, Adrien Peyrache, Blake Richards

COSYNE 2023

ePoster

Violations of transitivity disrupt relational inference in humans and reinforcement learning models

Thomas Graham & Bernhard Spitzer

COSYNE 2023

ePoster

Temporal difference learning models explain behavior and dopamine during contingency degradation

Mark Burrell, Venkatesh Murthy, Naoshige Uchida, Lechen Qian, Jay Hennig, Sara Matias, Samuel Gershman

COSYNE 2025

ePoster

Cognitive and intelligence measures for ADHD identification by machine learning models

Adelia-Solás Martínez-Évora, Paula Díaz Marquiegui, Gianluca Susi, Fernando Maestú

FENS Forum 2024

ePoster

Describing neural encoding from large-scale brain recordings: A deep learning model of the central auditory system

Fotios Drakopoulos, Yiqing Xia, Andreas Fragner, Nicholas A Lesica

FENS Forum 2024

ePoster

An explainable deep learning model for the identification of layers and areas in the primate cerebral cortex

Piotr Majka, Adam Datta, Agata Kulesza, Sylwia Bednarek, Marcello Rosa

FENS Forum 2024

ePoster

Predicting Math and Story-Related Auditory Tasks Completed in fMRI using a Logistic Regression Machine Learning Model

Mary Bassey

Neuromatch 5