Deep Learning

Discover seminars, jobs, and research tagged with deep learning across World Wide.
105 curated items · 60 Seminars · 25 Positions · 20 ePosters
Updated in 4 days
Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
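
The core prediction-error update that predictive-processing accounts describe can be illustrated with a deliberately minimal, hypothetical sketch (unrelated to the actual models presented in the talk; all names here are illustrative):

```python
def predictive_coding_estimate(stimulus, lr=0.2, n_steps=50):
    """Track a fixed stimulus with a prediction-error update.

    The internal estimate mu is corrected each step by a fraction of
    the prediction error (stimulus - mu) -- the basic operation that
    predictive-processing accounts ascribe to cortical circuits.
    """
    mu = 0.0
    errors = []
    for _ in range(n_steps):
        err = stimulus - mu   # prediction error
        mu += lr * err        # update the internal model
        errors.append(abs(err))
    return mu, errors

mu, errors = predictive_coding_estimate(1.0)
# errors shrink as the internal model comes to predict the input
```

The abstract's richer mechanisms (feedback, variability) have no analogue in this toy loop; it only shows the error-driven update itself.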

Position

Prof Yashar Ahmadian

University of Cambridge
Cambridge, UK
Dec 5, 2025

We are seeking a highly motivated and creative postdoctoral researcher to work on a collaborative project between the labs of Yashar Ahmadian at the Computational and Biological Learning Lab (CBL), Department of Engineering (cbl-cambridge.org), and Zoe Kourtzi (www.abg.psychol.cam.ac.uk) at the Psychology Department, both at the University of Cambridge. The project is fully funded by the UKRI BBSRC and investigates the computational principles and circuit mechanisms underlying human visual perceptual learning, particularly the role of adaptive changes in the balance of cortical excitation and inhibition in this kind of learning. We aim to integrate a few lines of research in our labs, exemplified by the following key publications: Y Ahmadian and KD Miller (2021). What is the dynamical regime of cerebral cortex? Neuron 109 (21), 3373-3391. K Jia, ..., Z Kourtzi (2020). Recurrent Processing Drives Perceptual Plasticity. Current Biology 30 (21), 4177-4187. P Frangou, ..., Z Kourtzi (2019). Learning to optimize perceptual decisions through suppressive interactions in the human brain. Nature Communications 10, 474. Y Ahmadian, DB Rubin, KD Miller (2013). Analysis of the stabilized supralinear network. Neural Computation 25, 1994-2037. T Arakaki, G Barello, Y Ahmadian (2019). Inferring neural circuit structure from datasets of heterogeneous tuning curves. PLOS Comp Bio, 15(4): e1006816. The postdoc will be based in CBL, with free access to the Kourtzi lab in the Psychology department. Apply at: https://www.jobs.ac.uk/job/DBD626/research-assistant-associate-in-computational-neuroscience-fixed-term

Position

Dr. Tatsuo Okubo

Chinese Institute for Brain Research, Beijing
Beijing, China
Dec 5, 2025

We are a new group at the Chinese Institute for Brain Research (CIBR), Beijing, which focuses on using modern data science and machine learning tools on neuroscience data. We collaborate with various labs within CIBR to develop models and analysis pipelines to accelerate neuroscience research. We are looking for enthusiastic and talented machine learning engineers and data scientists to join this effort.

Position

Prof. Dr. rer. nat. Kerstin Ritter

Charité - Universitätsmedizin Berlin
Berlin, Germany
Dec 5, 2025

At Charité - Universitätsmedizin Berlin and the Bernstein Center for Computational Neuroscience, we are looking for a motivated and highly qualified postdoc for methods development at the intersection of explainable machine learning / deep learning and clinical neuroimaging / translational psychiatry. The position will be located in the research groups of Asst. Prof. Kerstin Ritter and Prof. John-Dylan Haynes at Charité Berlin. The main task will be to predict response to cognitive-behavioral psychotherapy in retrospective data and a prospective cohort of patients with internalizing disorders, including depression and anxiety, from a complex, multimodal data set comprising tabular data as well as imaging data (e.g., clinical data, smartphone data, EEG, structural and functional MRI data). An additional task will be to contribute to the organization and maintenance of the prospective cohort. This study will be one of several projects in the newly established Research Unit 5187 "Precision Psychotherapy" (headed by Prof. Ulrike Lüken).

Position

Dr. Anand Subramoney

Ruhr University Bochum
Bochum, Germany
Dec 5, 2025

The "Theory of Neural Systems" group led by Prof. Dr. Laurenz Wiskott at the Ruhr University Bochum, Germany is looking for an excellent and highly motivated PhD student to work on the topic of scalable machine learning. The student will be co-supervised by Dr. Anand Subramoney. The appointment will be for three years, starting as soon as possible. Salary is 75% of salary scale TV-L E13. The PhD student will work on developing state-of-the-art machine learning models that can scale to billions of parameters with a focus on energy efficiency. Using sparsity and asynchrony as core design principles, the models will also use biological inspiration to achieve these goals. Collaborations with academic and industry groups to use bio-inspired low-energy neuromorphic hardware are encouraged.

Position

Prof Tim C Kietzmann

Institute of Cognitive Science, University of Osnabrück
Osnabrück, Germany
Dec 5, 2025

I am looking to hire multiple postdocs in the space of deep learning and visual computational neuroscience to join us at the Institute of Cognitive Science (University of Osnabrück, Germany). The full-time position is initially for 3 years, but can be extended. You can find out more about our work here: https://www.kietzmannlab.org/ More information about these positions and research in Germany more generally: https://twitter.com/TimKietzmann/status/1482027695856828417 These jobs are not officially advertised yet, so please get in touch with me to start a discussion.

Position · Computational Neuroscience

Prof Wenhao Zhang

UT Southwestern Medical Center
Dallas, Texas, USA
Dec 5, 2025

The Computational Neuroscience lab directed by Dr. Wenhao Zhang at the University of Texas Southwestern Medical Center (www.zhang-cnl.org) is currently seeking up to two postdoctoral fellows to study cutting-edge problems in computational neuroscience. Research topics include: (1) the neural circuit implementation of normative computation, e.g., Bayesian (causal) inference; (2) dynamical analysis of recurrent neural circuit models; (3) modern deep learning methods to solve neuroscience problems. Successful candidates are expected to play an active and independent role in one of our research topics. All projects are strongly encouraged to involve collaboration with experimental neuroscientists both at UT Southwestern and abroad. The initial appointment is for one year with the expectation of extension given satisfactory performance. UT Southwestern provides competitive salary and benefits packages.

Position

Francisco Pereira

Machine Learning Team, National Institute of Mental Health
Bethesda, Maryland, United States of America
Dec 5, 2025

The Machine Learning Team at the National Institute of Mental Health (NIMH) in Bethesda, MD, has an open position for a machine learning research scientist. The NIMH is the leading federal agency for research on mental disorders and neuroscience, and part of the National Institutes of Health (NIH). Our mission is to help NIMH scientists use machine learning methods to address a diverse set of research problems in clinical and cognitive psychology and neuroscience. These range from identifying biomarkers for aiding diagnoses to creating and testing models of mental processes in healthy subjects. Our overarching goal is to use machine learning to improve every aspect of the scientific effort, from helping discover or develop theories to generating actionable results. For more information, please refer to the full ad https://nih-fmrif.github.io/ml/index.html

Position

Mai-Phuong Bo

Stanford University
Palo Alto, USA
Dec 5, 2025

The Stanford Cognitive and Systems Neuroscience Laboratory (scsnl.stanford.edu) invites applications for a postdoctoral fellowship in computational modeling of human cognitive, behavioral, and brain imaging data. The candidate will be involved in multidisciplinary projects to develop and implement novel neuro-cognitive computational frameworks, using multiple cutting-edge methods that may include computational cognitive modeling, Bayesian inference, dynamic brain circuit analysis, and deep neural networks. These projects will span areas including robust identification of cognitive and neurobiological signatures of psychiatric and neurological disorders, and neurodevelopmental trajectories. Clinical disorders under investigation include autism, ADHD, anxiety and mood disorders, learning disabilities, and schizophrenia. The candidate will have access to multiple large datasets and state-of-the-art computational resources, including HPCs and GPUs. Please include a CV and a statement of research interests and have three letters of reference emailed to Prof. Vinod Menon at scsnl.stanford+postdoc@gmail.com.

Position

Prof Jakob Macke

Chair of Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen
Germany
Dec 5, 2025

How do neural circuits in the human brain recognize objects, persons and actions from complex visual stimuli? To address these questions, we will develop deep convolutional neural networks for modelling how neurons in high-level human brain areas respond to complex visual information. We will make use of a unique dataset of neurophysiological recordings of single-unit activity and field potentials recorded from the medial temporal lobe of epilepsy patients. Our tools will open up avenues for a range of new investigations in cognitive and clinical neuroscience, and may inspire new artificial vision systems. The position is part of a collaboration with the ‘Dynamic Vision and Learning’ Group at TU Munich (Prof. Dr. Laura Leal-Taixé) and the Cognitive and Clinical Neurophysiology Group at University Hospital Bonn (Prof. Dr. Dr. Mormann). Our group develops computational methods that help scientists interpret empirical data, with a focus on basic and clinical neuroscience research. We want to understand how neuronal networks in the brain process sensory information and control intelligent behaviour, and use this knowledge to develop methods for the diagnosis and therapy of neuronal dysfunction. More details at https://uni-tuebingen.de/en/196976

Position

Dr Guang Yang

Imperial College London
London, UK
Dec 5, 2025

Job summary: We are seeking either a Research Assistant or Research Associate to work on this project. The post is funded by the EU H2020 CHAIMELEON project and aims to demonstrate for the first time that AI can be used to enhance the reproducibility of radiomics features and parameters extracted from cross-vendor and cross-institution CT-MR-PET/MR imaging data. Increasingly favourable outcomes suggest that health imaging-based AI approaches can become useful clinical tools in areas such as non-invasive tumour characterisation, prediction of certain tumour features, staging of tumour spread, stratification of patients, selection of the most appropriate therapies and clinical prognosis.
Duties and responsibilities: The main contribution of the group led by Dr Guang Yang to the CHAIMELEON project focuses on the investigation and development of novel data quality enhancement and harmonisation, and federated machine learning algorithms for cross-vendor and cross-institution AI-powered data repository construction, including the investigation of new strategies in medical imaging acquisition and reconstruction, as well as novel mechanisms to generate adversarial examples and mitigate their effects. The work also includes the analysis of scenarios where data privacy can be enhanced for a large multimodal clinical data repository. There will be opportunities to collaborate with other researchers and PhD students in the CHAIMELEON consortium, which includes 18 top-tier UK/EU research institutes and high-tech companies.
Essential requirements: Applicants must demonstrate as part of their application how they meet the essential criteria for the post. You should have, or be close to completing, a PhD degree (or equivalent) in an area pertinent to the subject, i.e., Computing or Engineering, for the Research Associate position, or a good first degree in a related area for the Research Assistant position. You must have excellent verbal and written communication skills, enjoy working collaboratively, and be able to organise your own work with minimal supervision and prioritise work to meet deadlines. Preference will be given to applicants with a proven research record and publications in the relevant areas, including in prestigious machine learning, computer vision and medical image analysis journals and conferences. In particular, Research Associate applicants must hold a PhD in a relevant discipline, and all applicants should have equivalent laboratory experience. In addition, you will need a strong machine learning background with proven knowledge and a track record in one or more of the following research areas and techniques: generative adversarial models, federated or distributed machine learning, and deep learning and its applications to medical image reconstruction, denoising and data harmonisation.
Further information: The post is full time and fixed term for up to 36 months. Candidates who have not yet been officially awarded their PhD will be appointed as a Research Assistant within the salary range £35,477 - £38,566 per annum. Should you require any further details on the role, please contact Dr Guang Yang at g.yang@imperial.ac.uk.

Position

Irina Illina

Lorraine University, LORIA-INRIA, Multispeech Team
Villers-lès-Nancy, France
Dec 5, 2025

The recruited person will develop methodologies and tools for high-performance non-native automatic speech recognition in the aeronautical context, more specifically in a (noisy) aircraft cockpit. This project will be based on an end-to-end automatic speech recognition system using wav2vec 2.0, one of the most effective models in the current state of the art. wav2vec 2.0 enables self-supervised learning of representations from raw audio data (without transcription).

Position

Prof. (Dr.) Swagatam Das

Institute for Advancing Intelligence (IAI), TCG Centre for Research and Education in Science and Technology (CREST)
Kolkata, India
Dec 5, 2025

We are seeking highly qualified and motivated individuals for the positions of Assistant and Associate Professors in Artificial Intelligence (AI) and Machine Learning (ML). The successful candidate will join our esteemed faculty in the Institute for Advancing Intelligence (IAI), TCG Centre for Research and Education in Science and Technology (CREST), Kolkata, India, and contribute to our commitment to excellence in research, teaching, and academic services.

Position

Benoît Frénay/Jérémy Dodeigne

Human-Centered Machine Learning (HuMaLearn)
Belgium
Dec 5, 2025

We are seeking a motivated postdoctoral researcher to work on an interdisciplinary project at the intersection of deep learning and comparative politics. The candidate will work in the Human-Centered Machine Learning (HuMaLearn) team of Prof. Benoît Frénay and the Belgian and Comparative Politics team of Prof. Jérémy Dodeigne. The goal will be to develop new deep learning methodologies to analyse large corpora of archive videos depicting political debates. We specifically aim to detect emotions, body language, movements, attitudes, etc. This project is linked to the ERC POLSTYLE project that Jérémy Dodeigne recently obtained, guaranteeing a stimulating research environment. The HuMaLearn team gathers about ten researchers, many of whom work actively in deep learning, with a keen openness to interdisciplinarity.

Position

N/A

University of Manchester
University of Manchester
Dec 5, 2025

1) Lecturer/Senior Lecturer (Assoc/Asst Prof) in Machine Learning: The University of Manchester is making a strategic investment in the fundamentals of AI, to complement its existing strengths in AI applications across several prominent research fields in the University. Applications are welcome in any area of the fundamentals of machine learning, in particular probabilistic modelling, deep learning, reinforcement learning, causal modelling, human-in-the-loop ML, explainable AI, ethics, privacy and security. This position is meant to contribute to machine learning methodologies and not purely to their applications. You will be located in the Department of Computer Science and, in addition to the new Centre for Fundamental AI research, you will belong to a large community of machine learning, data science and AI researchers.

2) Programme Manager – Centre for AI Fundamentals: The University of Manchester is seeking to appoint an individual with a strategic mindset and a track record of building and leading collaborative relationships and professional networks, expertise in a domain ideally related to artificial intelligence, excellent communication and interpersonal skills, experience in managing high-performing teams, and a demonstrable ability to support the preparation of large, complex grant proposals to take up the role of Programme Manager for the Centre for AI Fundamentals. The successful candidate will play a major role in developing and shaping the Centre, working closely with its Director to grow the Centre and plan and deliver an exciting programme of activities, including leading key science translational activity and development of use cases in the Centre’s key domains, partnership development, bid writing, resource management, and impact and public engagement strategies.

Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 15, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
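
As one concrete example of such a comparison measure, linear centered kernel alignment (CKA) is a common choice for relating two systems' responses, though not necessarily the methodology studied here; a minimal sketch:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two response matrices.

    X and Y hold activations of shape (n_stimuli, n_units) from two
    systems; the score is 1 for representations that match up to a
    rotation, and much lower for unrelated ones.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
R, _ = np.linalg.qr(rng.normal(size=(20, 20)))   # random rotation
score_same = linear_cka(X, X @ R)                # invariant to rotation, ~1.0
score_diff = linear_cka(X, rng.normal(size=(100, 20)))  # unrelated, far below 1
```

The abstract's point applies directly: a different functional form (say, a regression-based fit) can rank the same pairs of systems differently, so the measure must be matched to the use case.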

Seminar · Neuroscience · Recording

On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery

André Curtis Trudel
University of Cincinnati
Oct 9, 2024

Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.

Seminar · Neuroscience

Probing neural population dynamics with recurrent neural networks

Chethan Pandarinath
Emory University and Georgia Tech
Jun 11, 2024

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep learning-based dynamical systems models.
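
As a loose intuition for what inferring latent dynamics from population spiking means, here is a drastically simplified stand-in on synthetic data: smoothing plus PCA rather than the sequential autoencoder the talk describes, so it captures only the linear core of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_neurons = 500, 30

# a 1-D ground-truth latent signal drives the whole population
latent = np.sin(np.linspace(0, 6 * np.pi, T))
weights = rng.normal(size=n_neurons)
rates = np.exp(0.5 + 0.5 * np.outer(latent, weights))
spikes = rng.poisson(rates)  # noisy single-trial spike counts

# smooth the counts, then recover the dominant shared factor with PCA
kernel = np.ones(10) / 10
smoothed = np.apply_along_axis(
    lambda s: np.convolve(s, kernel, mode="same"), 0, spikes)
centered = smoothed - smoothed.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
factor = centered @ vt[0]  # estimated latent trajectory

# the estimate should correlate strongly with the true latent
corr = abs(np.corrcoef(factor, latent)[0, 1])
```

Unlike this linear sketch, LFADS models the latent trajectory with a learned recurrent dynamical system, which is what enables single-trial, millisecond-scale inference.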

Seminar · Neuroscience

Mapping the Brain‘s Visual Representations Using Deep Learning

Katrin Franke
Byers Eye Institute, Department of Ophthalmology, Stanford Medicine
Jun 5, 2024

Seminar · Artificial Intelligence · Recording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 13, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.

SeminarArtificial IntelligenceRecording

Foundation models in ophthalmology

Pearse Keane
University College London and Moorfields Eye Hospital NHS Foundation Trust
Sep 5, 2023

Abstract to follow.

SeminarArtificial IntelligenceRecording

Diverse applications of artificial intelligence and mathematical approaches in ophthalmology

Tiarnán Keenan
National Eye Institute (NEI)
Jun 5, 2023

Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.

SeminarNeuroscience

Learning to see stuff

Roland W. Fleming
Giessen University
Mar 12, 2023

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.

SeminarArtificial IntelligenceRecording

Deep learning applications in ophthalmology

Aaron Lee
University of Washington
Mar 9, 2023

Deep learning techniques have revolutionized the field of image analysis and played a disruptive role in the ability to quickly and efficiently train image analysis models that perform as well as human beings. This talk will cover the beginnings of the application of deep learning in the field of ophthalmology and vision science, and cover a variety of applications of using deep learning as a method for scientific discovery and latent associations.

SeminarNeuroscienceRecording

Understanding Machine Learning via Exactly Solvable Statistical Physics Models

Lenka Zdeborová
EPFL
Feb 7, 2023

The affinity between statistical physics and machine learning has a long history. I will describe the main lines of this long-lasting friendship in the context of current theoretical challenges and open questions about deep learning. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will highlight a path forward to capture the subtle interplay between the structure of the data, the architecture of the network, and the optimization algorithms commonly used for learning.

SeminarNeuroscienceRecording

Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties

Ilenna Jones
University of Pennsylvania
Dec 6, 2022

Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.

SeminarNeuroscienceRecording

On the link between conscious function and general intelligence in humans and machines

Arthur Juliani
Microsoft Research
Nov 17, 2022

In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.

SeminarNeuroscienceRecording

Spiking Deep Learning with SpikingJelly

Yonghong Tian
Peking University
Nov 9, 2022

SeminarNeuroscienceRecording

Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing

A. Subramoney
University of Bochum
Nov 8, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
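The core idea described above, a GRU whose units emit output only when their state crosses a threshold, can be sketched in a few lines. The gating, weights, and threshold rule below are illustrative simplifications of an event-based, activity-sparse cell, not the authors' EGRU implementation:

```python
import numpy as np

def egru_step(x, h, Wz, Wn, threshold=0.5):
    """One step of a toy event-based, activity-sparse GRU-style cell.

    Units whose new state stays below `threshold` emit no output
    (activity sparsity); names and shapes are hypothetical."""
    xh = np.concatenate([x, h])
    z = 1.0 / (1.0 + np.exp(-(Wz @ xh)))   # update gate
    n = np.tanh(Wn @ xh)                   # candidate state
    h_new = (1.0 - z) * h + z * n          # standard GRU state update
    active = np.abs(h_new) > threshold     # event condition
    out = np.where(active, h_new, 0.0)     # only active units emit output
    return h_new, out, active

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
Wz = rng.normal(scale=0.5, size=(d_h, d_in + d_h))
Wn = rng.normal(scale=0.5, size=(d_h, d_in + d_h))
h, out, active = egru_step(rng.normal(size=d_in), np.zeros(d_h), Wz, Wn)
```

In a full model, downstream layers would receive only the sparse `out`, so inactive units cost neither communication nor computation at the next step.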

SeminarNeuroscienceRecording

Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity

Thomas Limbacher
TU Graz
Nov 8, 2022

Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task, as well as demonstrate superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question-answering tasks from a standard benchmark, achieving comparable performance to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attain a near-optimal strategy in a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built on this principle.
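The one-shot, local learning described above can be conveyed with the simplest rate-based abstraction of Hebbian memory, an outer-product associative store. The talk's model is a spiking network; this sketch only illustrates the principle that a single local update suffices to store an association:

```python
import numpy as np

def hebbian_store(M, key, value, eta=1.0):
    """Local Hebbian (outer-product) update: strengthen the synapse
    between each active key unit and each active value unit."""
    return M + eta * np.outer(value, key)

def hebbian_recall(M, key):
    """Read out the stored value by propagating the key through M."""
    return M @ key

# One-shot storage of two associations with orthogonal keys:
k1, k2 = np.eye(4)[0], np.eye(4)[1]
v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
M = np.zeros((3, 4))
M = hebbian_store(M, k1, v1)
M = hebbian_store(M, k2, v2)
```

Because the update is purely local (pre- and post-synaptic activity only), the same rule maps directly onto neuromorphic synapse circuits, which is the point made at the end of the abstract.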

SeminarNeuroscienceRecording

No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

Rylan Schaeffer
Fiete lab, MIT
Nov 1, 2022

Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as a tool but interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience.

SeminarNeuroscienceRecording

Building System Models of Brain-Like Visual Intelligence with Brain-Score

Martin Schrimpf
MIT
Oct 4, 2022

Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing to predict primate temporal processing and become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.

SeminarNeuroscienceRecording

General purpose event-based architectures for deep learning

Anand Subramoney
Institute for Neural Computation
Oct 4, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.

SeminarNeuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 21, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

SeminarNeuroscienceRecording

Cortex-dependent corrections as the mouse tongue reaches for and misses targets

Brendan Ito & Teja Bollu
Cornell University, USA & Salk Institute, USA
Apr 19, 2022

Brendan Ito (Cornell University, USA) and Teja Bollu (Salk Institute, USA) share unique insights into rapid online motor corrections during mouse licking, analogous to primate goal-oriented reaching. Techniques covered include large-scale single unit recording during behaviour with optogenetics, and a deep-learning-based neural network to resolve 3D tongue kinematics during licking.

SeminarNeuroscienceRecording

Artificial Intelligence and Racism – What are the implications for scientific research?

ALBA Network
Mar 6, 2022

As questions of race and justice have risen to the fore across the sciences, the ALBA Network has invited Dr Shakir Mohamed (Senior Research Scientist at DeepMind, UK) to provide a keynote speech on Artificial Intelligence and racism, and the implications for scientific research. The keynote will be followed by a discussion chaired by Dr Konrad Kording (Department of Neuroscience at University of Pennsylvania, US – neuromatch co-founder).

SeminarNeuroscienceRecording

Implementing structure mapping as a prior in deep learning models for abstract reasoning

Shashank Shekhar
University of Guelph
Mar 2, 2022

Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.

SeminarNeuroscienceRecording

Deep Internal Learning – Deep Visual Inference without Prior Examples

Michal Irani
Weizmann Inst.
Dec 20, 2021

SeminarNeuroscienceRecording

NMC4 Keynote:

Yuki Kamitani
Kyoto University and ATR
Dec 1, 2021

The brain represents the external world through the bottleneck of sensory organs. The network of hierarchically organized neurons is thought to recover the causes of sensory inputs to reconstruct the reality in the brain in idiosyncratic ways depending on individuals and their internal states. How can we understand the world model represented in an individual’s brain, or the neuroverse? My lab has been working on brain decoding of visual perception and subjective experiences such as imagery and dreaming using machine learning and deep neural network representations. In this talk, I will outline the progress of brain decoding methods and present how subjective experiences are externalized as images and how they could be shared across individuals via neural code conversion. The prospects of these approaches in basic science and neurotechnology will be discussed.

SeminarNeuroscienceRecording

Deep kernel methods

Laurence Aitchison
University of Bristol
Nov 24, 2021

Deep neural networks (DNNs) with the flexibility to learn good top-layer representations have eclipsed shallow kernel methods without that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
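The link between layer-wise kernels and infinitely wide networks can be illustrated with the arc-cosine (NNGP) kernel recursion for ReLU layers. Note this fixed recursion is only a reference point: the deep kernel machines described above additionally learn the per-layer kernels rather than applying a closed-form composition.

```python
import numpy as np

def relu_arccos_step(Kxx, Kxy, Kyy, sw2=2.0):
    """One layer of the arc-cosine kernel recursion for ReLU networks:
    the layer-wise kernel composition that arises in infinite-width
    Bayesian networks. sw2 is the weight variance (2.0 keeps the
    diagonal constant, as in He initialization)."""
    cos_angle = np.clip(Kxy / np.sqrt(Kxx * Kyy), -1.0, 1.0)
    theta = np.arccos(cos_angle)
    J = np.sin(theta) + (np.pi - theta) * np.cos(theta)
    Kxy_new = sw2 / (2 * np.pi) * np.sqrt(Kxx * Kyy) * J
    Kxx_new = sw2 / 2 * Kxx   # diagonal entries: theta = 0
    Kyy_new = sw2 / 2 * Kyy
    return Kxx_new, Kxy_new, Kyy_new

# Compose three "layers" of kernel for two orthogonal unit inputs:
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
Kxx, Kxy, Kyy = x @ x, x @ y, y @ y
for _ in range(3):
    Kxx, Kxy, Kyy = relu_arccos_step(Kxx, Kxy, Kyy)
```

With each layer, the off-diagonal kernel value for distinct inputs grows toward the diagonal, mirroring how representations of different inputs become correlated in deep random networks.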

SeminarNeuroscienceRecording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 4, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements impose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback pass do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems like low response latency and continual learning.
References:
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).

SeminarNeuroscienceRecording

StereoSpike: Depth Learning with a Spiking Neural Network

Ulysse Rancon
University of Bordeaux
Nov 1, 2021

Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, that we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground-truth, which was used to train StereoSpike in a supervised manner, using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction – the depth of each pixel – from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, it is the first time that such a large-scale regression problem is solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be implemented efficiently on neuromorphic chips, opening the door for low power real time embedded systems.
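The surrogate-gradient training mentioned above rests on replacing the derivative of the non-differentiable spike function with a smooth stand-in during backpropagation. A minimal sketch, using an illustrative leaky integrate-and-fire neuron and a common triangular surrogate (the paper's exact neuron model and surrogate may differ):

```python
import numpy as np

def lif_forward(inputs, tau=0.9, theta=1.0):
    """Leaky integrate-and-fire forward pass with a Heaviside spike
    and soft reset. A generic SNN building block, not StereoSpike's."""
    v, spikes = 0.0, []
    for x in inputs:
        v = tau * v + x           # leaky integration
        s = float(v >= theta)     # non-differentiable spike
        v -= s * theta            # soft reset after a spike
        spikes.append(s)
    return np.array(spikes)

def surrogate_grad(v, theta=1.0, width=1.0):
    """Triangular surrogate used in place of the Heaviside derivative
    during backprop: nonzero only near the threshold."""
    return np.maximum(0.0, 1.0 - np.abs(v - theta) / width)

spikes = lif_forward([0.6, 0.6, 0.6])
g = surrogate_grad(1.0)
```

In training, the forward pass uses the hard spike while the backward pass substitutes `surrogate_grad`, which is what lets standard gradient descent reach into a fully spiking network.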

SeminarNeuroscience

Learning to see Stuff

Kate Storrs
Justus Liebig University Giessen
Oct 26, 2021

Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world—like the glossiness of a surface—that cannot be measured by any other senses? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network trained on an artificial environment of surfaces that have different shapes, materials and lighting, spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes patterns of errors in its perception of material that follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.

SeminarNeuroscienceRecording

On the implicit bias of SGD in deep learning

Amir Globerson
Tel Aviv University
Oct 19, 2021

Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have a sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more parameters than training data? I will describe our recent work showing that gradient-descent optimization indeed leads to 'simpler' models, where simplicity is captured by a lower weight norm and, in some cases, clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning Boolean functions, and learning convolutional pattern-detection architectures.
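The claim that gradient descent implicitly prefers lower-norm solutions can be checked in the simplest setting: on an underdetermined least-squares problem, gradient descent initialized at zero converges to the minimum-norm interpolant. A small self-contained demonstration on toy data (not from the talk):

```python
import numpy as np

# Underdetermined system: 3 equations, 6 unknowns (toy data).
A = np.array([[1., 0., 1., 0., 1., 0.],
              [0., 1., 0., 1., 0., 1.],
              [1., 1., 0., 0., 1., 1.]])
b = np.array([1., 2., 3.])

# Gradient descent on 0.5 * ||A w - b||^2, initialized at zero.
# Infinitely many w interpolate the data; GD picks one of them.
w = np.zeros(6)
for _ in range(2000):
    w -= 0.1 * A.T @ (A @ w - b)

# The minimum-norm interpolating solution, via the pseudoinverse.
w_min_norm = A.T @ np.linalg.solve(A @ A.T, b)
```

Starting at zero, the iterates never leave the row space of `A`, so the limit is exactly the minimum-norm solution; this linear case is the textbook instance of the implicit bias the talk generalizes to nonlinear networks.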

SeminarNeuroscienceRecording

Credit Assignment in Neural Networks through Deep Feedback Control

Alexander Meulemans
Institute of Neuroinformatics, University of Zürich and ETH Zürich
Sep 29, 2021

The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical systems theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC that we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.

SeminarNeuroscienceRecording

Learning the structure and investigating the geometry of complex networks

Robert Peach and Alexis Arnaudon
Imperial College
Sep 23, 2021

Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality. Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images. Graph dimensionality. Dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, here we present a new framework to define intrinsic notions of dimension on networks: the relative, local and global dimension. We showcase our method on various physical systems.
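The hcga idea, turning a graph into a fixed-length vector of interpretable features, can be sketched in miniature. hcga itself computes thousands of features; the three below are purely illustrative and hand-rolled from the adjacency matrix:

```python
import numpy as np

def graph_features(A):
    """Map an undirected graph (0/1 adjacency matrix, no self-loops)
    to a tiny feature vector: mean degree, edge density, and triangle
    count (trace of A^3 counts each triangle 6 times)."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    density = A.sum() / (n * (n - 1))
    triangles = np.trace(np.linalg.matrix_power(A, 3)) / 6
    return np.array([deg.mean(), density, triangles])

# Complete graph on 4 nodes: every degree 3, density 1, four triangles.
K4 = np.ones((4, 4)) - np.eye(4)
feats = graph_features(K4)
```

Stacking such vectors over a dataset of graphs yields an interpretable feature matrix that any standard classifier can consume, which is the workflow the abstract contrasts with end-to-end deep learning.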

SeminarOpen SourceRecording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center of Neurodegenerative Diseases (DZNE) Bonn.
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detection of those structures requires complex decision-making which is often impossible with current image analysis software, and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures from large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also not much experience in the community regarding how much training data has to be collected to obtain a reasonable prediction result with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim to integrate well in the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC for a variety of tasks. From our experience, a surprisingly low amount of sparse label data is needed to train a sufficiently working classifier for typical bioimaging applications.
Not least because of this, YAPiC has become the "standard weapon" for our core facility to detect objects in hard-to-segment images. We would like to present some use cases like cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, or actin filament detection in transmission electron microscopy.

SeminarPsychology

Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations

Brad Wyble
Penn State University
Jul 6, 2021

Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items’ attributes to tokenized representations. The simulation results revealed that shape information for stimuli the model was trained on can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representations for efficient memory encoding.

SeminarNeuroscienceRecording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jun 30, 2021

There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.

SeminarNeuroscience

Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Neural Dynamics & Computation Lab, Stanford University
Jun 16, 2021

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
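The unsupervised tensor component analysis mentioned in point (1) can be illustrated with a minimal sketch (my own toy example, not the speaker's pipeline): a neurons × time × trials activity tensor is factored into rank-R components by alternating least squares. All array names, shapes, and the rank are illustrative assumptions.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J*K) x R."""
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor X (I x J x K)
    by alternating least squares, the core of tensor component analysis."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, -1)                      # mode-1 unfolding: I x (J*K)
    X2 = X.transpose(1, 0, 2).reshape(J, -1)   # mode-2 unfolding: J x (I*K)
    X3 = X.transpose(2, 0, 1).reshape(K, -1)   # mode-3 unfolding: K x (I*J)
    for _ in range(n_iter):
        # Solve each factor in turn, holding the other two fixed.
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Demo: plant a rank-2 structure (5 neurons x 6 time bins x 7 trials) and recover it.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, 2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # relative error, near zero
```

In an actual analysis the per-trial factor C is what reveals slow learning-related changes across trials; in practice one would use a dedicated library rather than this bare-bones solver.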

SeminarNeuroscienceRecording

Transforming task representations

Andrew Lampinen
DeepMind
May 12, 2021

Humans can adapt to a novel task on their first try. By contrast, artificial intelligence systems often require immense amounts of data to adapt. In this talk, I will discuss my recent work (https://www.pnas.org/content/117/52/32970) on creating deep learning systems that can adapt on their first try by exploiting relationships between tasks. Specifically, the approach is based on transforming a representation for a known task to produce a representation for the novel task, by inferring and then using a higher order function that captures a relationship between the tasks. This approach can be interpreted as a type of analogical reasoning. I will show that task transformation can allow systems to adapt to novel tasks on their first try in domains ranging from card games, to mathematical objects, to image classification and reinforcement learning. I will discuss the analogical interpretation of this approach, an analogy between levels of abstraction within the model architecture that I refer to as homoiconicity, and what this work might suggest about using deep-learning models to infer analogies more generally.

SeminarNeuroscienceRecording

Artificial neural networks do not adequately mimic whatever is going on in the real brain

Danko Nikolić
evocenta GmbH
Apr 21, 2021

One may think that Deep Learning technology works in ways similar to the human brain. This is not really true. Our best AI technology still does not mimic the brain well enough to match it in intelligence. I will describe seven ways in which our minds work diametrically opposite to Deep Learning technology.

SeminarNeuroscienceRecording

Mental Simulation, Imagination, and Model-Based Deep RL

Jessica Hamrick
Deepmind
Apr 8, 2021

Mental simulation—the capacity to imagine what will or what could be—is a salient feature of human cognition, playing a key role in a wide range of cognitive abilities. In artificial intelligence, the last few years have seen the development of methods which are analogous to mental models and mental simulation. In this talk, I will discuss recent methods in deep learning for constructing such models from data and learning to use them via reinforcement learning, and compare such approaches to human mental simulation. While a number of challenges remain in matching the capacity of human mental simulation, I will highlight some recent progress on developing more compositional and efficient model-based algorithms through the use of graph neural networks and tree search.

SeminarOpen SourceRecording

An open-source experimental framework for automation of cell biology experiments

Anton Nikolaev and Pavel Katunin
Department of Biomedical Sciences, University of Sheffield; ITMO University, St. Petersburg, Russia and MEL Science, London UK
Apr 1, 2021

Modern biological methods often require a large number of experiments to be conducted. For example, dissecting molecular pathways involved in a variety of biological processes in neurons and non-excitable cells requires high-throughput compound library or RNAi screens. Another example requiring large datasets is modern data analysis methods such as deep learning, which have been successfully applied to a number of biological and medical questions. In this talk we will describe an open-source platform allowing such experiments to be automated. The platform consists of an XY stage, a perfusion system and an epifluorescent microscope with autofocusing. It is extremely easy to build and can be used for different experimental paradigms, ranging from immunolabeling and routine characterisation of large numbers of cell lines to high-throughput imaging of fluorescent reporters.

SeminarNeuroscienceRecording

Hebbian learning, its inference, and brain oscillation

Sukbin Lim
NYU Shanghai
Mar 23, 2021

Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and labeled data in natural learning still poses a challenge in understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet considered to be computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillation. Notably, recent electrophysiological data observed in rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in monkey data and provided strong validation of asymmetric changes of feedforward and recurrent synaptic strengths inferred from monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
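As background to the talk's starting point, the textbook local Hebbian update can be sketched in a few lines. This is the classic Oja variant (a standard self-normalizing Hebbian rule), not the inference method the talk introduces; the synthetic data and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D inputs with dominant variance along the unit direction u.
u = np.array([0.6, 0.8])
X = 0.1 * rng.standard_normal((5000, 2)) + rng.standard_normal((5000, 1)) * u

w = rng.standard_normal(2)   # synaptic weight vector
eta = 0.01                   # learning rate
for x in X:
    y = w @ x                        # postsynaptic activity
    w += eta * y * (x - y * w)       # Oja's rule: Hebbian term y*x, minus a decay
                                     # that keeps the weight norm bounded

# w converges (up to sign) to the top principal direction of the inputs.
print(abs(w @ u))  # close to 1
```

The rule is local (each update uses only pre- and postsynaptic activity), which is exactly the property that makes Hebbian learning biologically appealing relative to backpropagation.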

SeminarNeuroscienceRecording

Do deep learning latent spaces resemble human brain representations?

Rufin VanRullen
Centre de Recherche Cerveau et Cognition (CERCO)
Mar 11, 2021

In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks including image or speech recognition, natural language processing (NLP), playing Go, chess, poker and video-games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known analogy QUEEN-WOMAN+MAN=KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.

SeminarNeuroscience

A machine learning way to analyse white matter tractography streamlines / Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI

Dr Shenjun Zhong and Dr Kamlesh Pawar
Monash Biomedical Imaging
Mar 10, 2021

1. Embedding is all you need: A machine learning way to analyse white matter tractography streamlines - Dr Shenjun Zhong, Monash Biomedical Imaging. Embedding white matter streamlines of various lengths into fixed-length latent vectors enables users to analyse them with general data-mining techniques. However, finding a good embedding scheme is still a challenging task, as existing methods based on spatial coordinates rely on manually engineered features and/or labelled datasets. In this webinar, Dr Shenjun Zhong will discuss his novel deep learning model that identifies a latent space and solves the problem of streamline clustering without needing labelled data. Dr Zhong is a Research Fellow and Informatics Officer at Monash Biomedical Imaging. His research interests are sequence modelling, reinforcement learning and federated learning in the general medical imaging domain.

2. Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI - Dr Kamlesh Pawar, Monash Biomedical Imaging. Magnetic Resonance Imaging (MRI) is a widely used imaging modality in clinics and research. Although MRI is useful, it comes with the overhead of longer scan times compared to other medical imaging modalities. Longer scan times also make patients uncomfortable, and even subtle movements during the scan may result in severe motion artifacts in the images. In this seminar, Dr Kamlesh Pawar will discuss how artificial intelligence techniques can reduce scan time and correct motion artifacts. Dr Pawar is a Research Fellow at Monash Biomedical Imaging. His research interests include deep learning, MR physics, MR image reconstruction and computer vision.

SeminarNeuroscienceRecording

Cross Domain Generalisation in Humans and Machines

Leonidas Alex Doumas
The University of Edinburgh
Feb 3, 2021

Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.

SeminarNeuroscience

The Spatial Memory Pipeline: a deep learning model of egocentric to allocentric understanding in mammalian brains

Benigno Uria
DeepMind
Jan 12, 2021

SeminarNeuroscienceRecording

A function approximation perspective on neural representations

Cengiz Pehlevan
Harvard University
Dec 1, 2020

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.
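The kernel view of sample-efficient function approximation can be illustrated with a minimal kernel ridge regression sketch (my example, not from the talk; the RBF lengthscale, regularizer, and target function are arbitrary choices):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2):
    """Gaussian (RBF) kernel matrix between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * lengthscale ** 2))

def kernel_ridge_fit(X, y, lam=1e-6):
    """Solve (K + lam*I) alpha = y; alpha defines the approximant."""
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test):
    # f(x) = sum_i alpha_i k(x, x_i)
    return rbf_kernel(X_test, X_train) @ alpha

# Approximate f(x) = sin(2*pi*x) from 30 samples.
X = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * X)
alpha = kernel_ridge_fit(X, y)
X_test = np.linspace(0, 1, 100)
y_hat = kernel_ridge_predict(X, alpha, X_test)
print(np.max(np.abs(y_hat - np.sin(2 * np.pi * X_test))))  # max deviation (small)
```

The kernel plays the role of the feature map: how well a given kernel's spectrum matches the target function governs how many samples are needed, which is the sense in which representations determine sample efficiency.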

SeminarNeuroscience

Crowding and the Architecture of the Visual System

Adrien Doerig
Laboratory of Psychophysics, BMI, EPFL
Dec 1, 2020

Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. I will present our work using visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. We focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. We show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, we show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. We provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, we use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. We psychophysically characterize the temporal structure of this interference and propose a simple computational model. Our results support the idea that perception is a discrete process. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.

SeminarNeuroscienceRecording

Making neural nets simple enough to succeed at universal relational generalization

Kenneth Kurtz
Binghamton University
Nov 16, 2020

Traditional brain-style (connectionist) approaches basically hit a wall when it comes to relational cognition. As an alternative to the well-known approaches of structured connectionism and deep learning, I present an engine for relational pattern recognition based on minimalist reinterpretations of first principles of connectionism. Results of computational experiments will be discussed on problems testing relational learning and universal generalization.

SeminarNeuroscience

Biomedical Image and Genetic Data Analysis with machine learning; applications in neurology and oncology

Wiro Niessen
Erasmus MC
Nov 8, 2020

In this presentation I will show the opportunities and challenges of big data analytics with AI techniques in medical imaging, also in combination with genetic and clinical data. Both conventional machine learning techniques, such as radiomics for tumor characterization, and deep learning techniques for studying brain ageing and prognosis in dementia, will be addressed. Also the concept of deep imaging, a full integration of medical imaging and machine learning, will be discussed. Finally, I will address the challenges of how to successfully integrate these technologies in daily clinical workflow.

SeminarNeuroscienceRecording

Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Stanford University
Oct 15, 2020

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition.  However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling.  We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process.  In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.

SeminarNeuroscienceRecording

Abstraction and Analogy in Natural and Artificial Intelligence

Melanie Mitchell
Santa Fe Institute
Oct 7, 2020

In 1955, John McCarthy and colleagues proposed an AI summer research project with the following aim: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” More than six decades later, all of these research topics remain open and actively investigated in the AI community. While AI has made dramatic progress over the last decade in areas such as vision, natural language processing, and robotics, current AI systems still almost entirely lack the ability to form humanlike concepts and abstractions. Some cognitive scientists have proposed that analogy-making is a central mechanism for conceptual abstraction and understanding in humans. Douglas Hofstadter called analogy-making “the core of cognition”, and Hofstadter and co-author Emmanuel Sander noted, “Without concepts there can be no thought, and without analogies there can be no concepts.” In this talk I will reflect on the role played by analogy-making at all levels of intelligence, and on prospects for developing AI systems with humanlike abilities for abstraction and analogy.

SeminarNeuroscience

Geometric deep learning on graphs and manifolds

Michael Bronstein
Imperial College London
Sep 15, 2020

ePoster

Object detection with deep learning and attention feedback loops

Rene Larisch, Fred Hamker

Bernstein Conference 2024

ePoster

Single-phase deep learning in cortico-cortical networks

COSYNE 2022

ePoster

Automated neuron tracking inside moving and deforming animals using deep learning and targeted augmentation

Mahsa Barzegar Keshteli, Vladislav Susoy, Core Francisco Park, Kseniia Korchagina, Ariane Delrocq, Aravinthan D. T. Samuel, Sahand Jamal Rahi

COSYNE 2023

ePoster

Human Neural Dynamics of Elements in Natural Conversation – A Deep Learning Approach

Jing Cai, Alex Hadjinicolaou, Angelique C Paulk, Ziv Williams, Sydney Cash

COSYNE 2023

ePoster

Deep learning-based electrode localization from local field potentials

Xingyun Wang, Richard Naud

COSYNE 2025

ePoster

A deep learning framework for center-periphery visual processing in mouse visual cortex

Yuchen Hou, Marius Schneider, Joe Canzano, Jing Peng, Spencer Smith, Michael Beyeler

COSYNE 2025

ePoster

Hacking vocal learning with deep learning: flexible real-time perturbation of zebra finch song

Elizabeth O'Gorman, Drew Schreiner, Richard Mooney, John Pearson

COSYNE 2025

ePoster

A deep learning approach for the recognition of behaviors in the forced swim test

Andrea Della Valle, Sara De Carlo, Francesca Petetta, Gregorio Sonsini, Sikandar Ali, Roberto Ciccocioppo, Massimo Ubaldi

FENS Forum 2024

ePoster

Deep learning-driven compression of extracellular neural signals

János Rokai, István Ulbert, Gergely Márton

FENS Forum 2024

ePoster

DeepD3 - A deep learning framework for detection of dendritic spines and dendrites

Andreas Kist, Martin H P Fernholz, Drago A Guggiana Nilo, Tobias Bonhoeffer

FENS Forum 2024

ePoster

DeepLabCut 3.0: Efficient deep learning for single and multi-animal pose tracking and identification

Niels Poulsen, Anastasiia Filipova, Shaokai Ye, Lucas Stoffl, Mu Zhou, Quentin Mace, Konrad Danielewski, Anna Teruel-Sanchis, Riza Rae Pineda, Jessy Lauer, Timokleia Kousi, Alexander Mathis, Mackenzie Weygandt Mathis

FENS Forum 2024

ePoster

Describing neural encoding from large-scale brain recordings: A deep learning model of the central auditory system

Fotios Drakopoulos, Yiqing Xia, Andreas Fragner, Nicholas A Lesica

FENS Forum 2024

ePoster

An explainable deep learning model for the identification of layers and areas in the primate cerebral cortex

Piotr Majka, Adam Datta, Agata Kulesza, Sylwia Bednarek, Marcello Rosa

FENS Forum 2024

ePoster

Interpretable representations of neural dynamics using geometric deep learning

Adam Gosztolai, Robert Peach, Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheyst

FENS Forum 2024

ePoster

A MATLAB-based deep learning tool for fast call classification and interaction in vocal communication

Kathrin Kugler, Antoni Woss, Jimmy Lapierre, Uwe Firzlaff

FENS Forum 2024

ePoster

Re-analysing the Allen Gene Expression ISH dataset with deep learning

Harry Carey, Sébastien Piluso, Maja Puchades, Daniel Keller, Jan G. Bjaalie

FENS Forum 2024

ePoster

Virtual reality empowered deep learning analysis of brain cells

Doris Kaltenecker, Rami Al-Maskari, Moritz Negwer, Luciano Hoeher, Kofler Florian, Shan Zhao, Mihail Todorov, Zhouyi Rong, Johannes Christian Paetzold, Benedikt Wiestler, Marie Piraud, Daniel Rueckert, Julia Geppert, Pauline Morigny, Maria Rohm, Bjoern H. Menze, Stephan Herzig, Mauricio Berriel Diaz, Ali Ertürk

FENS Forum 2024

ePoster

In vitro analysis of drug-induced neuron degeneration by morphological deep learning on a novel microphysiological system

Ikuro Suzuki, Xiaobo Han, Kazuki Matsuda, Naoki Matsuda, Makoto Yamanaka

FENS Forum 2024

ePoster

What does my network learn? Assessing the interpretability of deep learning for neural signals

Pinar Göktepe-Kavis, Florence M Aellen, Sigurd L Alnes, Athina Tzovara

FENS Forum 2024