Topic: World Wide

machine learning

60 Seminars · 25 Positions · 23 ePosters · 1 Conference


Position

Prof Iain Couzin

University of Konstanz
Konstanz, Germany
Jan 14, 2026

Despite the fact that social transmission of information is vital to many group-living animals, the organizing principles governing the networks of interaction that give rise to collective properties of animal groups remain poorly understood. The student will employ an integrated empirical and theoretical approach to investigate the relationship between individual computation (cognition at the level of the ‘nodes’ within the social network) and collective computation (computation arising from the structure of the social network). The challenge for individuals in groups is to be both robust to noise and yet sensitive to meaningful (often small) changes in the physical or social environment, such as when a predator is present. There exist two non-mutually-exclusive hypotheses for how individuals in groups could modulate the degree to which sensory input to the network is amplified: 1) individuals may adjust internal state variable(s) (e.g. response threshold(s)), effectively adjusting the sensitivity of the “nodes” within the network to sensory input, and/or 2) individuals may change their spatial relationships with neighbors (such as by modulating density), such that changes in the structure and strength of connections in the network modulate the information transfer capabilities, and thus collective responsiveness, of groups. Using schooling fish as a model system, we will investigate these hypotheses under a range of highly controlled, ecologically relevant scenarios that vary in timescale and type of response, including predator avoidance as well as the search for, and exploitation of, resources.
We will employ technologies such as Bayesian inference and unsupervised learning techniques developed in computational neuroscience and machine learning to identify, reconstruct, and analyze the directed and time-varying sensory networks within groups, and to relate these to the functional networks of social influence. As in neuroscience, we care about stimulus-dependent, history-dependent discrete stochastic events, including burstiness, refractoriness, and habituation; throughout, we will seek to isolate principles that extend beyond the specificities of our system. For more information see: https://www.smartnets-etn.eu/collective-computation-in-large-animal-groups/

Position · Neuroscience

Prof. Ross Williamson

University of Pittsburgh
Pittsburgh, PA, USA
Jan 14, 2026

The Williamson Laboratory investigates the organization and function of auditory cortical projection systems in behaving mice. We use a variety of state-of-the-art tools to probe the neural circuits of awake mice – these include two-photon calcium imaging and high-channel count electrophysiology (both with single-cell optogenetic perturbations), head-fixed behaviors (including virtual reality), and statistical approaches for neural characterization. Details on the research focus and approaches of the laboratory can be found here: https://www.williamsonlaboratory.com/research/

Position

Prof Richard Smith

Northwestern Medical School
Chicago, USA
Jan 14, 2026

The Smith lab is seeking team members to conduct exciting research in human neurodevelopment and models of neuronal activity in the prenatal brain. Interested applicants can expect to work in an environment that promotes autonomy and provides all the resources needed to develop and expand the lab's several ongoing research projects. These include, but are not limited to, questions relating to human brain development, human disease modeling (using high-throughput approaches), and therapeutics. Current NIH-funded projects are examining ion flux and biophysical properties of developing cell types in the prenatal brain, specifically as it relates to childhood diseases. As a trainee you will have the opportunity to gain expertise in several state-of-the-art approaches widely used to interrogate important aspects of neurodevelopment, including human stem cell cerebral organoid models, single-cell sequencing (RNA/ATAC), high-content confocal microscopy/screening, a ferret model of cortex development, and hiPSC-derived neuronal models (excitatory, dopaminergic, inhibitory). Additional physiology approaches include 2-photon imaging, high-throughput electrophysiology, patch-clamp, and calcium/voltage imaging. Please visit our website for details about our research: www.rsmithlab.com

Position

Hayder Amin

German Center for Neurodegenerative Diseases (DZNE)
Dresden, Germany
Jan 14, 2026

The position is focused on developing a brain-inspired computational model using parallel, non-linear algorithms to investigate neurogenesis complexity in large-scale systems. The successful applicant will specifically develop a neurogenic-plasticity-inspired bottom-up computational metamodel using our unique experimentally derived multidimensional parameters for a cortico-hippocampal circuit. The project aims to link computational modeling to experimental neuroscience to provide explicit bidirectional predictions of complex performance and of the neurogenic network reserve available for functional compensation for brain demands in health and disease.

Position · Computational Neuroscience

Dr Richard Rosch

University Children's Hospital Zürich
Zürich, Switzerland
Jan 14, 2026

In this project, we will use computational modelling on real-world neurophysiological recordings in paediatric patients with status epilepticus. We will use quantitative EEG and model-based analysis to infer changes in synaptic pathophysiology during episodes of status epilepticus in order to identify ways in which to modify the current treatment protocols.

Position

Professor Jesse Meyer

Medical College of Wisconsin
Milwaukee, United States
Jan 14, 2026

The Omics Data Science Lab led by Dr. Jesse Meyer at the Medical College of Wisconsin in Milwaukee seeks postdoctoral fellows or research scientists to spearhead studies in three areas of research focus: 1) Neurodegeneration. We develop iPSC-derived models of neurodegeneration for high-throughput multi-omic analysis to discover drugs and enable understanding of neuroprotective pathways. The applicant will have a PhD (or MD with substantial laboratory experience) related to neuroscience or neurobiology. Expertise in Alzheimer’s disease or amyotrophic lateral sclerosis, iPSC-derived neurons, cellular assays, and imaging is desired. 2) Multi-Omics. We develop and apply new mass spectrometry methods to collect quantitative molecular data from biological systems more quickly (Meyer et al., Nature Methods, 2020). The applicant will have a PhD (or MD with substantial laboratory experience) related to analytical chemistry, especially mass spectrometry-based proteomics and/or metabolomics, and/or associated bioinformatics skills, especially machine learning. The multi-omic analysis methods we develop will be paired with machine learning to understand changes in metabolism associated with disease. 3) Data Science. We develop and apply machine learning methods to biological problems (Meyer et al. JCIM 2019, Overmyer et al. Cell Systems 2021, Dickinson and Meyer bioRxiv 2021). The applicant will have a PhD (or MD with substantial laboratory experience) related to computational biology, especially machine learning and deep learning. Expertise in cheminformatics is preferred. Projects relate to chemical effect prediction and automated interpretation of omic data. Applicants must have experience in one of the above focus areas; interest in learning the other focus areas is desired. The Omics Data Science Lab is a basic and translational research group in the Department of Biochemistry at the Medical College of Wisconsin.
We have our own mass spectrometer (Orbitrap Exploris 240 with FAIMS) and related support equipment, and access to abundant human samples paired with EHR data through the MCW tissue bank and clinical data warehouse. The Medical College of Wisconsin is the 3rd largest private medical school in the United States and ranks in the top 1/3 of medical schools for NIH funding received. Successful applicants are expected to work independently in a collegial and supportive yet demanding environment. Potential for self-funding is welcome but not essential. Inquiries and applications (including CV, contact info for 2-3 references, and reprints of 2 most significant publications) should be directed to: Jesse G. Meyer, Ph.D. Assistant Professor, Department of Biochemistry, Medical College of Wisconsin jesmeyer@mcw.edu www.jessemeyerlab.com

Position

Ann Kennedy

Northwestern University
Chicago, United States
Jan 14, 2026

We investigate principles of computation in meso-scale biological neural networks, and the role of these networks in shaping animal behavior. We work in collaboration with experimental neuroscientists recording neural activity in freely moving animals engaged in complex behaviors, to investigate how animals' environments, actions, and internal states are represented across multiple brain areas. Our work is especially inspired by the interaction between subcortical neural populations organized into heavily recurrent neural circuits, including the basal ganglia and nuclei of the hypothalamus. Projects in the lab include 1) developing novel supervised, semi-supervised, and unsupervised approaches to studying the structure of animal behavior, 2) using behavior as a common basis with which to model the interactions between multiple brain areas, and 3) studying computation and dynamics in networks of heterogeneous neurons communicating with multiple neuromodulators and neuropeptides. The lab will also soon begin collecting behavioral data from freely interacting mice in a variety of model lines and animal conditions, to better chart the space of interactions between animal state and behavior expression. Come join us!

Position

N/A

Open University of Cyprus, University of Cyprus
Cyprus
Jan 14, 2026

The interdisciplinary M.Sc. Program in Cognitive Systems combines courses from neural/connectionist and symbolic Artificial Intelligence, Machine Learning, and Cognitive Psychology, to explore the fundamentals of perception, attention, learning, mental representation, and reasoning, in humans and machines. The M.Sc. Program is offered jointly by two public universities in Cyprus (the Open University of Cyprus and the University of Cyprus) and has been accredited by the national Quality Assurance Agency. The program is directed by academics from the participating universities, and courses are offered in English via distance learning by an international team of instructors.

Position

Erik C. Johnson

Johns Hopkins University Applied Physics Laboratory
Laurel, MD, USA
Jan 14, 2026

The Intelligent Systems Center at JHU/APL is an interdisciplinary research center for neuroscientists, AI researchers, and roboticists. Please see the individual listings for specific postings and application instructions. Postings for Neuroscience-Inspired AI researchers and Computational Neuroscience researchers may also be posted soon. https://prdtss.jhuapl.edu/jobs/senior-neural-decoding-researcher-2219 https://prdtss.jhuapl.edu/jobs/senior-reinforcement-learning-researcher-615 https://prdtss.jhuapl.edu/jobs/senior-computer-vision-researcher-2242 https://prdtss.jhuapl.edu/jobs/artificial-intelligence-software-developer-2255

Position

Dan Goodman

Imperial College London
London, UK
Jan 14, 2026

We have a research associate (postdoc) position to work on spatial audio processing and spatial hearing using methods from machine learning. The aim of the project is to design a method for interactively fitting individualised filters for spatial audio (HRTFs) to users in real-time based on their interactions with a VR/AR environment. We will use meta-learning algorithms to minimise the time required to individualise the filters, using simulated and real interactions with large databases of synthetic and measured filters. The project has potential to become a very widely used tool in academia and industry, as existing methods for recording individualised filters are often expensive, slow, and not widely available for consumers. The role is initially available for up to 18 months, ideally starting on or soon after 1st January 2022 (although there is flexibility). The role is based in the Neural Reckoning group led by Dan Goodman in the Electrical and Electronic Engineering Department of Imperial College. You will work with other groups at Imperial, as well as with a wider consortium of universities and companies in the SONICOM project (€5.7m EU grant), led by Lorenzo Picinali at Imperial.

Position · Computational Neuroscience

Christopher Rozell

Georgia Institute of Technology
Atlanta, GA, USA
Jan 14, 2026

A postdoctoral position in computational neuroscience is available in the lab of Christopher Rozell at the Georgia Institute of Technology (Atlanta, GA). This BRAIN Initiative research project seeks to advance the field of closed-loop computational neuroscience by pioneering the use of real-time feedback stimulation during experiments to decouple recurrently connected circuit elements and make stronger causal inferences about circuit interactions. This position will have broad opportunity to develop models and algorithms that are implemented in novel experiments using closed-loop optogenetic stimulation. We aim to provide both new scientific insight about computation in neural circuits (especially sensory coding in the thalamo-cortical circuit) as well as new approaches and algorithmic tools for the community to use in novel electrophysiology experiments. This position will work as part of a team and in close collaboration with the experimental lab of Garrett Stanley (also at Georgia Tech), and it is expected that the computational and algorithmic approaches will be implemented experimentally through close partnership with experimentalists in the Stanley Laboratory. Applicants should hold a PhD in a related discipline with a strong record of research impact, quantitative thinking, and collaborative work. Experience in computational neuroscience, machine learning, feedback control, and causal inference is advantageous. The lab is committed to providing a diverse and inclusive environment for all scholars, and applications are especially encouraged from all underrepresented groups. Additionally, the lab is committed to the professional development of its members, making it valuable preparation for people who are interested in academic, industrial, or entrepreneurial careers. The position has no mandatory teaching or administrative duties. Excellent (written and oral) communication skills in English are required.
This particular project is part of the Collaborative Research in Computational Neuroscience (CRCNS) program, providing access to a community of researchers across the country who are focused on similar types of collaborations between computational and experimental labs. Georgia Tech's campus is in the heart of midtown Atlanta, which has a thriving and collaborative neuroscience community with a particular emphasis on computational and systems neuroscience. Atlanta is also one of the fastest-growing metropolitan areas in the United States, boasting a wide range of opportunities for recreation and culture. Georgia Tech has competitive benefits (including comprehensive medical insurance) and is an equal opportunity employer. The position would ideally start as soon as possible (spring 2021). The appointment is initially for 12 months with the expectation of renewal. Compensation will be commensurate with relevant experience. Candidates should send a CV, a statement of research experience and interests, expected date of availability, and the contact information for three references to crozell@gatech.edu with the subject line "CRCNS postdoc". Application review will continue until the position is filled; applications should be received by December 1 for full consideration.

Position

Prof Virginie van Wassenhove

CEA, INSERM
Gif sur Yvette (near Paris), France
Jan 14, 2026

** Job application open until filled, ideally by the end of Feb. 2021 ** Applications are invited for two full-time post-doctoral cognitive neuroscientists in the European consortium “Extended-personal reality: augmented recording and transmission of virtual senses through artificial-intelligence” (see abstract p.2). EXPERIENCE involves eight academic and industrial partners with complementary expertise in artificial intelligence, neuroscience, psychiatry, neuroimaging, MEG/EEG/physiological recording techniques, and virtual reality. The postdoctoral positions will be fully dedicated to the scientific foundation for the Extended-Personal Reality, a work package led by the CEA (Virginie van Wassenhove) in collaboration with the Universities of Pisa (Gaetano Valenza, Mateo Bianchi), Padova (Claudio Gentilli), and Roma Tor Vergata (Nicola Toschi), among others. Full information here: https://brainthemind.files.wordpress.com/2021/01/experience_postdoctoral_adds.pdf

Position · Computational Neuroscience

Dr. Jessica Ausborn

Drexel University College of Medicine
Philadelphia, PA
Jan 14, 2026

Dr. Jessica Ausborn’s group at Drexel University College of Medicine, in the Department of Neurobiology & Anatomy has a postdoctoral position available for an exciting new research project involving computational models of sensorimotor integration based on neural and behavior data in Drosophila. The interdisciplinary collaboration with the experimental group of Dr. Katie von Reyn (School of Biomedical Engineering) will involve a variety of computational techniques including the development of biophysically detailed and more abstract mathematical models together with machine learning and data science techniques to identify and describe the algorithms computed in neuronal pathways that perform sensorimotor transformations. The Ausborn laboratory is part of an interdisciplinary group of Drexel’s Neuroengineering program that includes computational and experimental investigators. This collaborative, interdisciplinary environment enables us to probe biological systems in a way that would not be possible with either an exclusively experimental or computational approach. Applicants should forward a cover letter, curriculum vitae, statement of research interests, and contact information of three references to Jessica Ausborn (ja696@drexel.edu). Salary will be commensurate with experience based on NIH guidelines.

Position

Francisco Pereira

Machine Learning Team, National Institute of Mental Health
Bethesda, Maryland, United States of America
Jan 14, 2026

The Machine Learning Team at the National Institute of Mental Health (NIMH) in Bethesda, MD, has an open position for a machine learning research scientist. The NIMH is the leading federal agency for research on mental disorders and neuroscience, and part of the National Institutes of Health (NIH). Our mission is to help NIMH scientists use machine learning methods to address a diverse set of research problems in clinical and cognitive psychology and neuroscience. These range from identifying biomarkers for aiding diagnoses to creating and testing models of mental processes in healthy subjects. Our overarching goal is to use machine learning to improve every aspect of the scientific effort, from helping discover or develop theories to generating actionable results. For more information, please refer to the full ad https://nih-fmrif.github.io/ml/index.html

Position

Mai-Phuong Bo

Stanford University
Palo Alto, USA
Jan 14, 2026

The Stanford Cognitive and Systems Neuroscience Laboratory (scsnl.stanford.edu) invites applications for a postdoctoral fellowship in computational modeling of human cognitive, behavioral, and brain imaging data. The candidate will be involved in multidisciplinary projects to develop and implement novel neuro-cognitive computational frameworks, using multiple cutting-edge methods that may include computational cognitive modeling, Bayesian inference, dynamic brain circuit analysis, and deep neural networks. These projects will span areas including robust identification of cognitive and neurobiological signatures of psychiatric and neurological disorders, and neurodevelopmental trajectories. Clinical disorders under investigation include autism, ADHD, anxiety and mood disorders, learning disabilities, and schizophrenia. The candidate will have access to multiple large datasets and state-of-the-art computational resources, including HPCs and GPUs. Please include a CV and a statement of research interests and have three letters of reference emailed to Prof. Vinod Menon at scsnl.stanford+postdoc@gmail.com.

Position

Prof. Alessandro Crimi

Sano Center for Computational Medicine
Krakow, Poland
Jan 14, 2026

The BrainAndMore (BAM) lab focuses on cutting-edge genetic, molecular, and mainly magnetic resonance imaging technology to understand diseases and support drug development efforts in the neuroscience field. In this project we want to develop novel models to analyze time-series brain signals such as fMRI and EEG. The efforts will mostly focus on LSTMs, reservoir computing, and similar models.
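As a rough illustration of the reservoir-model family mentioned above, here is a minimal echo state network sketch in NumPy, trained to predict the next sample of a toy one-channel signal. The signal, network size, and hyperparameters are all illustrative assumptions, not the lab's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a single-channel brain signal (e.g. one EEG electrode).
T = 500
signal = np.sin(0.1 * np.arange(T)) + 0.1 * rng.standard_normal(T)

# Reservoir: fixed random recurrent weights, rescaled to spectral radius ~0.9
# so that the driven dynamics fade rather than explode.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the signal and collect its states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in * signal[t] + W @ x)
    states[t] = x

# Train only the linear readout (ridge regression) for one-step prediction.
X, y = states[:-1], signal[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
mse = np.mean((pred - y) ** 2)
print("one-step prediction MSE:", mse)
```

The key design choice of reservoir computing, in contrast to an LSTM, is that the recurrent weights stay fixed and only the readout is fitted, which makes training a single linear solve.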

Position · Genomics

Dr Nathan Skene

UK Dementia Research Institute @ Imperial College London
London, UK
Jan 14, 2026

Using machine learning to predict cell-type-specific effects of genetic variants which influence genome regulation. This PhD project is focused on using machine learning techniques to develop novel classifiers for predicting how changes in DNA sequences alter genomic regulatory features. Many regulatory proteins recognise particular DNA sequences known as motifs; for instance, EcoRI only binds to GAATTC. DNA sequences can be converted into a machine-interpretable format using one-hot encoding. The candidate will use publicly available and in-house datasets of genomic regulatory features to train models. Machine learning techniques will be used to predict the cell-type-specific regulatory effects of genetic variants. We will provide several true-positive datasets, wherein the effect of genetic mutations on particular regulatory features has been measured. These will form validation datasets to evaluate how well the trained classifier works. We are interested in how improvements in the machine learning approach (e.g. use of transfer learning, recurrent attentional networks, or graph convolutional networks) can improve upon existing methods. The candidate will use these techniques to identify causal pathways and candidate drug targets for neurodegenerative diseases.
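As a concrete illustration of the encoding step described above, here is a minimal sketch (not the project's actual code) of one-hot encoding a DNA sequence such as the EcoRI motif GAATTC; the A/C/G/T column order is an arbitrary convention assumed here:

```python
import numpy as np

BASES = "ACGT"  # assumed column order: A, C, G, T

def one_hot_encode(seq: str) -> np.ndarray:
    """One-hot encode a DNA sequence: one row per base, one column per letter."""
    idx = [BASES.index(b) for b in seq.upper()]
    onehot = np.zeros((len(seq), 4), dtype=np.float32)
    onehot[np.arange(len(seq)), idx] = 1.0
    return onehot

# The EcoRI motif mentioned above:
motif = one_hot_encode("GAATTC")
print(motif.shape)  # (6, 4)
```

Matrices of this shape are what sequence-based classifiers (convolutional or otherwise) typically consume as input.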

Position

Dr.-Ing. Alexander von Lühmann

Technische Universität Berlin, BIFOLD-ML, Machine Learning Department
Technische Universität Berlin, Fakultät IV – EECS, MAR 4-1 / Raum 4.045, Marchstr. 23, 10587 Berlin
Jan 14, 2026

The IBS Lab develops miniaturized wearable neurotechnology and body-worn sensors, as well as machine learning methods for sensing signals from the brain and body under natural conditions of the everyday world. The group focuses on multimodal analysis of physiological signals in diffuse optics (e.g. fNIRS) and biopotentials (e.g. EEG). Working field: Independent and responsible research on wearable instruments and methods for robust neurotechnology in mobile applications. Design and implementation of innovative wearable and miniaturized opto-electronic hardware for multimodal brain-body imaging using diffuse optics and biopotentials. Development of multimodal machine-learning-based sensor fusion methods for signal analysis, signal decomposition and inference from wearable physiological sensor data.

Position

Albert Cardona

MRC LMB
United Kingdom, Cambridge
Jan 14, 2026

To work within the group of Dr Albert Cardona at the MRC Laboratory of Molecular Biology (LMB), within a programme aimed at whole-brain connectomics from volume electron microscopy. Specifically, we are seeking to recruit a data scientist with at least a year of experience with densely labelled volume electron microscopy data of nervous tissue. In particular, the candidate will be experienced in developing and applying machine learning frameworks for synapse detection and segmentation, neuron segmentation and proofreading, and quantification of neuronal structures in nanometre-resolution data sets imaged with volume electron microscopy, for the purpose of mapping neuronal wiring diagrams. The ideal candidate will have an academic track record in the form of authored publications on the arXiv, at computer vision conferences, and in scientific journals, as well as accessible source code repositories demonstrating past work. They will have experience with the Python programming language (version 3+) and with machine learning libraries with Python bindings such as Keras or PyTorch; will have written code available in accessible repositories where it can be evaluated by third parties; and will have deployed their code to CPU and GPU clusters as well as single servers with multiple GPUs. The ideal candidate has applied all of the above towards the generation of over-segmentations of neuronal structures, and is familiar with approaches for post-processing (proofreading) to automatically agglomerate over-segmented neuron fragments into full arbors, using biologically grounded approaches such as microtubule or endoplasmic reticulum segmentation for validation.

Position

Tim Vogels

IST Austria
Vienna, Austria
Jan 14, 2026

TL;DR: If you liked our last NeurIPS paper https://www.biorxiv.org/content/10.1101/2020.10.24.353409v1 and you think you can contribute and imagine, oh, the places we’ll go from here, ... from one of the most liveable cities in the world, ... DO apply. --- We are looking for at least two scientists to join the vogelslab.org as postdocs at the @ISTAustria near Vienna. The successful candidates will join an ongoing ERC-funded project to discover families of spike-induced synaptic plasticity rules by way of numerical derivation. Together, we will define and explore search spaces of biologically plausible plasticity rules expressed, e.g., as polynomial expansions or multi-layer perceptrons. We aim to compare the results to various experimental and theoretical data sets, including functional spiking network models, human stem-cell-derived neural network cultures, and in vitro and in vivo experimental data. We are looking for someone who can expand and develop our multipurpose modular library dedicated to optimization of non-differentiable systems. Due to the modularity of the library, the candidate will have extensive freedom regarding which optimization techniques to use and what to learn in which systems, but being a team player will be a crucial skill. Depending on your (flexible, possibly immediate) starting date, we can offer up to 4-year contracts with competitive salaries, benefits, vacation time, and an ample budget for materials and travel in a tranquil and inspiring environment. The Vogels lab and IST Austria are located in Klosterneuburg, a historic city northwest of Vienna. The campus sits in the middle of the beautiful landscape of the Vienna Woods, 30 minutes from downtown Vienna, the capital of Austria, which consistently scores among the top cities of the world for its high standard of living. If you are interested, send an email with your application to Jessica . deAntonio [at] ist.ac.at.
Your application should include your CV, your most relevant publication, contact information for 2 or more references, and a cover letter with: a short description of you and your career, and a brief discussion of what you think is the greatest weakness of the above-mentioned NeurIPS paper (and perhaps how you would go about fixing it). We are looking to build a diverse and interesting environment, so if you bring any qualities that make our lab (and computational neuroscience at large) more diverse than it is right now, please consider applying. We will begin to evaluate applications on the 31st of April and aim to get back to you with a decision within 5 weeks of your application.

Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 10, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
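The prediction-error update at the heart of predictive processing can be sketched in a few lines. This toy example (an illustration only, not the speaker's model) infers a scalar latent cause by repeatedly shrinking the prediction error through gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hidden cause generates a noisy observation.
true_cause = 3.0
observation = true_cause + 0.1 * rng.standard_normal()

mu = 0.0   # initial belief about the latent cause
lr = 0.1   # update rate
for _ in range(100):
    prediction = mu                    # identity generative model, for simplicity
    error = observation - prediction   # prediction error
    mu += lr * error                   # error-driven belief update

print(round(mu, 2))  # mu converges toward the observed value
```

With a fixed update rate the residual error shrinks geometrically, by a factor (1 - lr) per iteration; richer predictive-coding models replace the identity mapping with a learned generative model and weight errors by their precision.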

Seminar · Neuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 2, 2025

Seminar · Psychology

Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives

Lukasz Piwek
University of Bath & Cumulus Neuroscience Ltd
Jul 7, 2025

Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilisation. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.

SeminarOpen Source

Open SPM: A Modular Framework for Scanning Probe Microscopy

Marcos Penedo Garcia
Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland
Jun 24, 2025

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.

SeminarNeuroscience

Learning Representations of Complex Meaning in the Human Brain

Leila Wehbe
Associate Professor, Machine Learning Department, Carnegie Mellon University
Feb 24, 2025
SeminarNeuroscienceRecording

On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery

André Curtis Trudel
University of Cincinnati
Oct 10, 2024

Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.

SeminarNeuroscience

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Nicholas Blauch
Jun 7, 2024

Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

SeminarNeuroscience

Generative models for video games (rescheduled)

Katja Hofmann
Microsoft Research
May 22, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

SeminarNeuroscience

Generative models for video games

Katja Hofmann
Microsoft Research
May 1, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

SeminarNeuroscience

Maintaining Plasticity in Neural Networks

Clare Lyle
DeepMind
Mar 13, 2024

Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology which can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks which are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.
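One empirical signature listed above, rank collapse (or implicit under-parameterization), is commonly quantified by the effective rank of a network's feature matrix. Below is a minimal diagnostic sketch of that measurement (an illustration of the general idea, not the speaker's method or code):

```python
import numpy as np

def effective_rank(features, eps=1e-12):
    """Effective rank (exponential of the spectral entropy of the
    singular values) of an (n_samples, n_units) feature matrix --
    a common diagnostic for rank collapse under nonstationarity."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / (s.sum() + eps)                 # normalized singular values
    entropy = -(p * np.log(p + eps)).sum()  # spectral entropy
    return float(np.exp(entropy))

rng = np.random.default_rng(0)
healthy = rng.normal(size=(256, 64))                           # full-rank features
collapsed = rng.normal(size=(256, 2)) @ rng.normal(size=(2, 64))  # rank-2 features
print(effective_rank(healthy) > effective_rank(collapsed))     # True
```

Tracking this quantity over the course of training is one way the plateau phenomena described in the talk are made visible.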

SeminarNeuroscience

Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)

Mehdi Azabou
Feb 22, 2024

Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

SeminarNeuroscience

Machine learning for reconstructing, understanding and intervening on neural interactions

Stefano Panzeri
University Medical Center Hamburg-Eppendorf (UKE)
Jan 11, 2024
SeminarNeuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Reese Kneeland
Jan 5, 2024

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
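The iterative refinement loop described in the abstract can be sketched in a few lines. Everything below is a stand-in: a random linear map plays the encoding model and Gaussian perturbations play the conditioned diffusion sampler, so this illustrates only the control flow, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins (assumptions, not the paper's models): a linear "encoding
# model" mapping image features to voxel activity, and noisy copies of
# the seed in place of the conditioned diffusion sampler.
W = rng.normal(size=(16, 32))              # image features -> voxels
encode = lambda img: img @ W               # encoding model
measured = encode(rng.normal(size=16))     # measured brain activity

def refine(seed, iters=20, library=32, noise=1.0, decay=0.8):
    """Iteratively keep the candidate whose predicted activity best
    matches the measurement, shrinking the sampling noise each round
    (the 'width' of the image distribution)."""
    for _ in range(iters):
        candidates = np.vstack(
            [seed, seed + noise * rng.normal(size=(library, seed.size))]
        )
        errors = np.linalg.norm(encode(candidates) - measured, axis=1)
        seed = candidates[np.argmin(errors)]  # best-matching candidate
        noise *= decay                        # narrow the distribution
    return seed

seed0 = rng.normal(size=16)
out = refine(seed0)
print(np.linalg.norm(encode(out) - measured)
      <= np.linalg.norm(encode(seed0) - measured))  # True by construction
```

Because the current seed is always kept in the candidate set, the encoding-model error is non-increasing across iterations, mirroring the paper's stopping logic on distribution width.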

SeminarNeuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 7, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
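The "contrastive and regression objectives" mentioned in the abstract can be illustrated with a symmetric InfoNCE (CLIP-style) loss between MEG and image embeddings plus a plain MSE term. This is a toy NumPy sketch with random embeddings standing in for the paper's three modules, not the actual training code:

```python
import numpy as np

def clip_contrastive_loss(meg_emb, img_emb, temperature=0.1):
    """Symmetric InfoNCE loss: MEG and image embeddings from the same
    trial (the diagonal) are positives; all other pairings are negatives."""
    m = meg_emb / np.linalg.norm(meg_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = m @ v.T / temperature
    def nll_diag(l):  # mean negative log softmax probability of the diagonal
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))
    return 0.5 * (nll_diag(logits) + nll_diag(logits.T))

def regression_loss(pred, target):
    """The regression half of the objective: mean squared error."""
    return float(((pred - target) ** 2).mean())

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 32))                     # pretrained image embeddings
aligned = img + 0.01 * rng.normal(size=img.shape)  # a well-trained MEG module
misaligned = np.roll(aligned, 1, axis=0)           # trial pairings broken
print(clip_contrastive_loss(aligned, img) < clip_contrastive_loss(misaligned, img))  # True
```

Correctly paired embeddings incur a much smaller contrastive loss than mispaired ones, which is the signal the retrieval results in the abstract rest on.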

SeminarNeuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 21, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
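The architectural core, 4D window multi-head self-attention, begins by partitioning the fMRI volume into non-overlapping 4D windows within which attention is computed. A toy sketch of that partitioning step follows; array shapes and window sizes here are illustrative assumptions, not SwiFT's actual configuration:

```python
import numpy as np

def window_partition_4d(x, win):
    """Split a batch of 4D fMRI feature volumes (B, H, W, D, T, C) into
    non-overlapping 4D windows of shape win = (wh, ww, wd, wt); attention
    is then computed among the tokens inside each window."""
    B, H, W, D, T, C = x.shape
    wh, ww, wd, wt = win
    x = x.reshape(B, H // wh, wh, W // ww, ww, D // wd, wd, T // wt, wt, C)
    # group the window-index axes together, then the within-window axes
    x = x.transpose(0, 1, 3, 5, 7, 2, 4, 6, 8, 9)
    return x.reshape(-1, wh * ww * wd * wt, C)  # (B*num_windows, tokens, C)

vol = np.zeros((1, 8, 8, 8, 4, 16))          # toy volume: 8^3 voxels, 4 frames
tokens = window_partition_4d(vol, (4, 4, 4, 2))
print(tokens.shape)                          # (16, 128, 16): 16 windows of 128 tokens
```

Restricting attention to these windows is what keeps the cost linear in volume size rather than quadratic in the total number of voxel-time tokens.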

SeminarArtificial IntelligenceRecording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 14, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but lack scalability, while data-driven models excel at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.

SeminarNeuroscience

BrainLM Journal Club

Connor Lane
Sep 29, 2023

Connor Lane will lead a journal club on the recent BrainLM preprint, a foundation model for fMRI trained using self-supervised masked autoencoder training. Preprint: https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1 Tweeprint: https://twitter.com/david_van_dijk/status/1702336882301112631?t=Q2-U92-BpJUBh9C35iUbUA&s=19

SeminarArtificial IntelligenceRecording

Foundation models in ophthalmology

Pearse Keane
University College London and Moorfields Eye Hospital NHS Foundation Trust
Sep 6, 2023

Abstract to follow.

SeminarNeuroscience

Algonauts 2023 winning paper journal club (fMRI encoding models)

Huzheng Yang, Paul Scotti
Aug 18, 2023

Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve open discussion of the paper with Q/A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper also from Huze that we can discuss: https://arxiv.org/pdf/2307.14021.pdf

SeminarNeuroscience

1.8 billion regressions to predict fMRI (journal club)

Mihir Tripathy
Jul 28, 2023

Public journal club where this week Mihir will present on the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to best predict fMRI activity.

SeminarNeuroscienceRecording

In search of the unknown: Artificial intelligence and foraging

Nathan Wispinski & Paulo Bruno Serafim
University of Alberta & Gran Sasso Science Institute
Jul 11, 2023
SeminarNeuroscience

Decoding mental conflict between reward and curiosity in decision-making

Naoki Honda
Hiroshima University
Jul 11, 2023

Humans and animals are not always rational. They not only rationally exploit rewards but also explore an environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting a conservative preference for more certain options, and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis for reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.

SeminarArtificial IntelligenceRecording

Diverse applications of artificial intelligence and mathematical approaches in ophthalmology

Tiarnán Keenan
National Eye Institute (NEI)
Jun 6, 2023

Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.

SeminarPsychology

How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience

Nicolas Langer
University of Zurich
May 17, 2023

This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the field of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications to diagnose and treat psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.

SeminarNeuroscience

Relations and Predictions in Brains and Machines

Kim Stachenfeld
DeepMind
Apr 7, 2023

Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to be due in part to the ability of animals to build structured representations of their environments, and to modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
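The predictive representations mentioned in the first part of the talk are often formalized as the successor representation (SR), and the entorhinal "spectral compression" as its eigendecomposition. A minimal sketch of both on a ring of states (my construction for illustration, not the speaker's code):

```python
import numpy as np

n = 5                                      # states arranged on a ring
T = np.zeros((n, n))
for i in range(n):                         # symmetric random walk policy
    T[i, (i - 1) % n] = T[i, (i + 1) % n] = 0.5

gamma = 0.9
# Successor representation: expected discounted future state occupancies,
# M = (I - gamma*T)^(-1); rows are predictive state representations.
M = np.linalg.inv(np.eye(n) - gamma * T)

# Spectral compression: eigenvectors of M (symmetric here, so eigh applies)
# give smooth, low-dimensional bases over the state space.
evals, evecs = np.linalg.eigh(M)
low_dim = evecs[:, -2:]                    # top eigenvectors as a 2D embedding

print(M[0, 0] > M[0, 2])  # a state predicts itself more than distant states: True
```

Because the SR depends on the transition structure but not on any particular reward, re-weighting its rows is enough to adapt to a new goal, which is the rapid-adaptation property attributed to hippocampus in the abstract.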

SeminarNeuroscience

Bridging machine learning and mechanistic modelling

Jakob Macke
University of Tübingen
Mar 15, 2023
SeminarArtificial IntelligenceRecording

Deep learning applications in ophthalmology

Aaron Lee
University of Washington
Mar 10, 2023

Deep learning techniques have revolutionized the field of image analysis and played a disruptive role in the ability to quickly and efficiently train image analysis models that perform as well as human beings. This talk will cover the beginnings of the application of deep learning in the field of ophthalmology and vision science, and a variety of applications of deep learning as a method for scientific discovery and for uncovering latent associations.

SeminarNeuroscienceRecording

Understanding Machine Learning via Exactly Solvable Statistical Physics Models

Lenka Zdeborová
EPFL
Feb 8, 2023

The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.

SeminarNeuroscience

Maths, AI and Neuroscience Meeting Stockholm

Roshan Cools, Alain Destexhe, Upi Bhalla, Vijay Balasubramnian, Dinos Meletis, Richard Naud
Dec 15, 2022

To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.

SeminarNeuroscience

Experimental Neuroscience Bootcamp

Adam Kampff
Voight Kampff, London, UK
Dec 5, 2022

This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.

SeminarNeuroscienceRecording

On the link between conscious function and general intelligence in humans and machines

Arthur Juliani
Microsoft Research
Nov 18, 2022

In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.

SeminarNeuroscienceRecording

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta-FAIR & Meta AI
Oct 19, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.

SeminarNeuroscienceRecording

Learning Relational Rules from Rewards

Guillermo Puebla
University of Bristol
Oct 13, 2022

Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that required considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions of research.
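As a toy illustration of what selecting useful relations and building a relational policy might look like, here is a hypothetical sketch in the spirit of RRL. The relation names, objects, and weights below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Candidate relations between two objects given as (x, y) positions.
RELATIONS = {
    "left_of":  lambda a, b: float(a[0] < b[0]),
    "right_of": lambda a, b: float(a[0] > b[0]),
    "above":    lambda a, b: float(a[1] > b[1]),
}

def relational_features(paddle, ball):
    """Evaluate every candidate relation on the object pair."""
    return np.array([rel(paddle, ball) for rel in RELATIONS.values()])

def policy(weights, paddle, ball):
    """Relational policy: a weighted vote over relation features.
    Learning which relations matter amounts to learning these weights."""
    return "right" if weights @ relational_features(paddle, ball) > 0 else "left"

# Weights a learner might converge to in a Breakout/Pong-like game:
# if the paddle is left of the ball, move right, and vice versa.
w = np.array([+1.0, -1.0, 0.0])   # "above" ends up irrelevant for this task
print(policy(w, paddle=(2, 0), ball=(5, 3)))  # right
```

The zero weight on "above" mirrors the selection problem in the abstract: only some of the myriad applicable relations end up characterizing the task.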

SeminarNeuroscience

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta Fair
Oct 10, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf

Conference

Neuromatch 5

Virtual (online)
Sep 27, 2022

Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.

SeminarNeuroscienceRecording

Spontaneous Emergence of Computation in Network Cascades

Galen Wilkerson
Imperial College London
Aug 6, 2022

Neuronal network computation and computation by avalanche-supporting networks are of interest to the fields of physics, computer science (computation theory as well as statistical or machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), computed by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank-ordering by function probability, and relate this to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience, relating to optimal information processing.
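
As a minimal, hedged illustration of the kind of cascade the talk describes (our own toy, not the paper's networks): three threshold units, one receiving an inhibitory connection, jointly compute XOR, a Boolean function no single threshold unit can compute.

```python
import itertools

def threshold_unit(inputs, weights, theta):
    """Binary threshold neuron: fires iff weighted input reaches theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def xor_cascade(x1, x2):
    """Two-layer cascade of logic motifs computing XOR."""
    h_or  = threshold_unit((x1, x2), (1, 1), 1)   # OR motif
    h_and = threshold_unit((x1, x2), (1, 1), 2)   # AND motif
    # the inhibitory (negative) weight implements OR-and-not-AND
    return threshold_unit((h_or, h_and), (1, -1), 1)

for x1, x2 in itertools.product((0, 1), repeat=2):
    print(x1, x2, xor_cascade(x1, x2))
```

The negative weight is what lifts the cascade above linearly separable functions, mirroring the role antagonism (inhibition) plays in the talk.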

SeminarNeuroscience

Attention in Psychology, Neuroscience, and Machine Learning

Grace Lindsay
NYU
Jun 15, 2022
SeminarNeuroscienceRecording

Canonical neural networks perform active inference

Takuya Isomura
RIKEN CBS
Jun 10, 2022

The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively exchanges with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks—featuring delayed modulation of Hebbian plasticity—can perform planning and adaptive behavioural control in the Bayes optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate implicit priors under which the agent’s neural network operates and identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.

SeminarNeuroscienceRecording

Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations

Manu Halvagal
Friedrich Miescher Institute for Biomedical Research
May 4, 2022

Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without the need for supervision and implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in the monkey inferotemporal cortex in-vivo. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete testable predictions.
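
For orientation, the classic BCM rule to which the talk's model is conceptually connected can be sketched as follows (a textbook toy with made-up constants, not the speaker's derived plasticity rule):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in = 10
w = rng.normal(scale=0.1, size=n_in)  # synaptic weights
theta = 1.0                           # sliding modification threshold
eta, tau = 0.01, 0.99                 # learning rate, threshold decay

def bcm_step(x):
    """One BCM update: potentiation when postsynaptic activity exceeds
    the sliding threshold, depression below it; theta tracks <y^2>."""
    global w, theta
    y = max(0.0, w @ x)                      # rectified postsynaptic rate
    w += eta * x * y * (y - theta)           # BCM plasticity
    theta = tau * theta + (1 - tau) * y**2   # sliding threshold update
    return y

for _ in range(200):
    bcm_step(rng.normal(size=n_in) + 1.0)    # drive with noisy inputs
```

Because theta tracks recent activity, the same input can produce potentiation or depression depending on stimulation history — the property the proposed self-supervised rule shares with BCM theory.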

SeminarNeuroscienceRecording

Population coding in the cerebellum: a machine learning perspective

Reza Shadmehr
Johns Hopkins School of Medicine
Apr 6, 2022

The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus complex spikes not only act as a teaching signal for a P-cell; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output layer neuron (DCN) that was responsible for the error, as well as the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.

SeminarNeuroscienceRecording

CNStalk: Using machine learning to predict mental health on the basis of brain, behaviour and environment

Andre Marquand
Donders Institute
Mar 31, 2022
SeminarOpen SourceRecording

GeNN

James Knight
University of Sussex
Mar 23, 2022

Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
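
To make concrete what GeNN accelerates, here is a deliberately naive NumPy version of a small spiking (leaky integrate-and-fire) network — illustrative only, and not the GeNN API; GeNN generates GPU kernels for models like this instead of looping in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 100, 0.1, 1000          # neurons, timestep (ms), steps
tau, v_th, v_reset = 20.0, 1.0, 0.0    # membrane time constant, threshold, reset
W = rng.normal(scale=0.05, size=(n, n))  # recurrent weights (arbitrary)
v = np.zeros(n)                          # membrane potentials
spike_counts = np.zeros(n, dtype=int)

for _ in range(steps):
    i_ext = rng.normal(loc=1.5, scale=0.5, size=n)  # noisy external drive
    spikes = v >= v_th                              # threshold crossing
    spike_counts += spikes
    v[spikes] = v_reset                             # reset after spiking
    # Euler step of dv/dt = (-v + I_ext + recurrent input) / tau
    v += dt / tau * (-v + i_ext + W @ spikes)
```

Run through GeNN, the per-timestep neuron and synapse updates execute as generated CUDA or OpenCL kernels, which is where the speedups for large networks come from.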

SeminarNeuroscience

Interdisciplinary College

Tarek Besold, Suzanne Dikker, Astrid Prinz, Fynn-Mathis Trautwein, Niklas Keller, Ida Momennejad, Georg von Wichert
Mar 7, 2022

The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme "Flexibility" covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.

SeminarNeuroscienceRecording

Implementing structure mapping as a prior in deep learning models for abstract reasoning

Shashank Shekhar
University of Guelph
Mar 3, 2022

Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.

SeminarNeuroscienceRecording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 23, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
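
The largest Lyapunov exponent mentioned here can be estimated for a standard chaotic rate network (tanh units, random coupling with gain g > 1) by integrating the tangent dynamics and tracking the growth of a perturbation — a minimal sketch with arbitrary parameters, not the speaker's full-spectrum method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, g, dt = 200, 2.0, 0.1
J = rng.normal(scale=g / np.sqrt(n), size=(n, n))  # random coupling, gain g
x = rng.normal(size=n)                             # network state
u = rng.normal(size=n)
u /= np.linalg.norm(u)                             # unit tangent vector

def step(x, u):
    """Euler step of dx/dt = -x + J tanh(x) and its tangent dynamics."""
    r = np.tanh(x)
    x_new = x + dt * (-x + J @ r)
    u_new = u + dt * (-u + J @ ((1 - r**2) * u))   # Jacobian acting on u
    return x_new, u_new

for _ in range(500):                               # discard transient
    x, u = step(x, u)
    u /= np.linalg.norm(u)

log_growth, steps = 0.0, 2000
for _ in range(steps):
    x, u = step(x, u)
    nrm = np.linalg.norm(u)
    log_growth += np.log(nrm)                      # accumulate growth rate
    u /= nrm

lyap = log_growth / (steps * dt)  # largest Lyapunov exponent (> 0: chaos)
```

Repeating this with a set of tangent vectors re-orthonormalized by QR at each step yields the full Lyapunov spectrum, from which the dynamic entropy rate and attractor dimensionality quoted in the talk are computed.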

SeminarNeuroscience

Machine learning for measuring and modeling the motor system

Mackenzie Mathis
EPFL
Feb 16, 2022
SeminarCognitionRecording

Modeling Visual Attention in Neuroscience, Psychology, and Machine Learning

Grace Lindsay
University College London
Feb 15, 2022
ePoster

Machine Learning Approaches Reveal Prominent Behavioral Alterations and Cognitive Dysfunction in a Humanized Alzheimer Model

Stephanie Miller, Nick Kaliss, Pranav Nambiar, Jorge Palop, Kevin Luxem, Yuechen Qiu, Catherine Cai, Kevin Shen, Takashi Saito, Takaomi Saido, Alexander Pico, Reuben Thomas, Stefan Remy

COSYNE 2023

ePoster

Machine learning of functional network and molecular mechanisms in autism spectrum disorder subtypes

Amanda Buch, Petra Vertes, Jakob Seidlitz, So Hyun Kim, Logan Grosenick, Conor Liston

COSYNE 2023

ePoster

Exploring neurophysiological and psychological pain biomarkers with machine learning

Greta Preatoni, Noemi Gozzi, Federico Ciotti, Natalija Katic, Michele Hubli, Petra Schweinhardt, Stanisa Raspopovic
ePoster

Machine learning to personalize cognitive training

Melina Vladisauskas, Laouen M. Belloli, Diego Fernández Slezak, Andrea P. Goldin
ePoster

Machine learning-based dendritic spine segmentation and quantification​

Lina Saveikytė, Kristina Jevdokimenko, Simona Skiotytė, Ilja Karanin, Urte Neniskyte
ePoster

Machine Learning-based Support Network Extraction for Neural State Characterization

Michael Depass, Ignasi Cos, Montse Comas, Oriol Pujol Vila
ePoster

Optogenetic and machine learning strategies for an auditory cortical implant

Antonin Verdier
ePoster

Machine learning-based exploration of long noncoding RNAs linked to perivascular lesions in the brain

Hiyori Edo, Ryodai Itano, Masakazu Umezawa

FENS Forum 2024

ePoster

Enhancing hypothesis testing via interpretable machine learning frameworks

David Steyrl, Alexander Karner, Blanca Thea Maria Spee, Frank Scharnowski

FENS Forum 2024

ePoster

Cognitive and intelligence measures for ADHD identification by machine learning models

Adelia-Solás Martínez-Évora, Paula Díaz Marquiegui, Gianluca Susi, Fernando Maestú

FENS Forum 2024

ePoster

Machine learning identifies potential anti-glioblastoma small molecules

Amirhomayoun Atefi, Ehsan Aboutaleb

FENS Forum 2024

ePoster

A machine learning toolbox to detect and compare sharp-wave ripples across species

Andrea Navas-Olive, Adrian Rubio, Saman Abbaspoor, Kari L Hoffman, Liset M de la Prida

FENS Forum 2024

ePoster

Machine learning-based identification of ultrasonic vocalization subtypes during rat chocolate consumption

Koshi Murata, Yuki Ikedo, Takashi Ryoke, Kazuki Shiotani, Hiroyuki Manabe, Kazuki Kuroda, Hitoshi Yoshimura, Yugo Fukazawa

FENS Forum 2024

ePoster

Prediction of antipsychotic-induced extrapyramidal symptoms in schizophrenia using machine learning

Monika Sharma, Pankaj Yadav, Navratan Suthar

FENS Forum 2024

ePoster

Semi-blind machine learning for fMRI-based predictions of intelligence

Gabriele Lohmann, Samuel Heczko, Lucas Mahler, Qi Wang, Vinod J. Kumar, Michelle Roost, Juergen Jost, Klaus Scheffler

FENS Forum 2024

ePoster

A robust machine learning pipeline for the analysis of complex nightingale songs

Mahalakshmi Ramadas, Jan Clemens, Daniela Vallentin

Bernstein Conference 2024

ePoster

Machine learning approach applied to exploration of neuronal sensorimotor processing during a visuomotor rule-based task performed by a monkey

Laurie Mifsud, Simon Nougaret, Bjorg Kilavik, Matthieu Gilson

FENS Forum 2024

ePoster

Machine learning for individual multivariate fingerprints as predictors of mental well-being in young adults

Andrea Caporali, Alberto Di Domenico, Claudio D'Addario, Francesco de Pasquale

FENS Forum 2024

ePoster

Predicting Math and Story-Related Auditory Tasks Completed in fMRI using a Logistic Regression Machine Learning Model

Mary Bassey

Neuromatch 5

ePoster

Optimization techniques for machine learning based classification involving large-scale neuroscience datasets

Kaustav Mehta

Neuromatch 5

ePoster

Building mechanistic models of neural computations with simulation-based machine learning

Jakob Macke

Bernstein Conference 2024

ePoster

Machine learning and topology classify neuronal morphologies

Lida Kanari

Neuromatch 5

ePoster

Explainable Machine Learning Approach to Investigating Neural Bases of Brain State Classification

Evie Malaia, Sean Borneman, Katie Ford, Brendan Ames

COSYNE 2022