
Artificial Neural Networks

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with artificial neural networks across World Wide.
41 curated items: 24 Seminars, 9 Positions, 8 ePosters

Position · Computational Neuroscience

Prof Grace Lindsay

New York University
New York, USA
Dec 5, 2025

I will be looking for students through the Cognition and Perception program at NYU (https://as.nyu.edu/departments/psychology/graduate/phd-cognition-perception.html). Projects will fit into the research descriptions provided here: https://lindsay-lab.github.io/research/

Position

Prof Marcel van Gerven

Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen
Nijmegen, Netherlands
Dec 5, 2025

Title: Bio-inspired learning algorithms for efficient and robust neural network models, with a focus on neuromorphic computing and control of complex environments

This project is embedded within the European Laboratory for Learning and Intelligent Systems (ELLIS) Unit in Radboud AI. The aim is to develop highly effective algorithms for training artificial neural network models that make use of (biologically plausible) information local to the individual nodes of the network. This locality will allow efficient implementation and testing of these algorithms on neuromorphic computing systems. The efficacy of the algorithms will be tested in simulations, in the context of relevant reinforcement learning tasks, using a control-theoretic approach to interaction with modelled environments. The work entails theoretical (mathematical) development of state-of-the-art biologically plausible learning rules and reinforcement learning strategies, alongside implementation and testing of these algorithms as neuromorphic computing algorithms and agent-based artificial neural network models. For more information and to apply, see: https://www.ru.nl/english/working-at/vacature/details-vacature/?recid=1147553&pad=%2fenglish&doel=embed&taal=uk

Position

Dr. Robert Legenstein

Graz University of Technology
Graz, Austria
Dec 5, 2025

The successful candidate will work on learning algorithms for spiking neural networks within the international consortium of the project 'Scalable Learning Neuromorphics'. In this project, we will develop learning algorithms for spiking neural networks targeted at memristive hardware implementations. The project aims to develop scalable Spiking Neural Networks (SNNs) by leveraging the integration of 3D memristors, thereby overcoming limitations of conventional Artificial Neural Networks (ANNs). Positioned at the intersection of artificial intelligence and brain-inspired computing, the initiative focuses on innovative SNN training methods, optimizing recurrent connections, and designing dedicated hardware accelerators. These advances will contribute to scalability and energy efficiency. The endeavor addresses key challenges in event-based processing and temporal coding, aiming for substantial performance gains in both software and hardware implementations of artificial intelligence systems. Expected research outputs include novel algorithms, optimization methods, and memristor-based hardware architectures, with broad applications and potential for technology transfer.

Position · Neuroscience

N/A

Center for Neuroscience and Cell Biology of the University of Coimbra (CNC-UC)
Coimbra and Cantanhede, University of Coimbra
Dec 5, 2025

The postdoctoral researcher will conduct research on the modelling and simulation of reward-modulated prosocial behavior and decision-making. The position is part of a larger effort to uncover the computational and mechanistic bases of prosociality and empathy at the behavioral and circuit levels. The role involves working at the interface between experimental data (animal behavior and electrophysiology) and theoretical modelling, with an emphasis on multi-agent reinforcement learning and neural population dynamics.

Position

Prof. Angela Yu

Technical University of Darmstadt
TU Darmstadt, Germany
Dec 5, 2025

Prof. Angela Yu recently moved from UCSD to TU Darmstadt as the Alexander von Humboldt AI Professor and has a number of PhD and postdoc positions available in her growing “Computational Modeling of Intelligent Systems” research group. Applications are solicited from highly motivated and qualified candidates interested in interdisciplinary research at the intersection of natural and artificial intelligence. Prof. Yu’s group uses mathematically rigorous and algorithmically diverse tools to understand the nature of the representations and computations that give rise to intelligent behavior. There is a fair amount of flexibility in the choice of project, as long as it excites both the candidate and Prof. Yu.

For example, Prof. Yu is currently interested in scientific questions such as: How is socio-emotional intelligence similar to or different from cognitive intelligence? Is there a fundamental tradeoff, given the prevalence of autism among scientists and engineers? How can AI be taught socio-emotional intelligence? How are artificial intelligence (e.g. as demonstrated by large language models) and natural intelligence (e.g. as measured by IQ tests) similar or different in their underlying representations or computations? What roles do intrinsic motivations such as curiosity and computational efficiency play in intelligent systems? How can insights about artificial intelligence improve the understanding and augmentation of human intelligence? Are capacity limitations with respect to attention and working memory a feature or a bug in the brain? How can AI systems be enhanced by attention or working memory?

More broadly, Prof. Yu’s group employs and develops diverse machine learning and mathematical tools, e.g. Bayesian statistical modeling, control theory, reinforcement learning, artificial neural networks, and information theory, to explain aspects of cognition important for intelligence: perception, attention, decision-making, learning, cognitive control, active sensing, economic behavior, and social interactions. Candidates with experience in two or more of the technical areas, and/or one or more of the application areas, are highly encouraged to apply. As part of the Centre for Cognitive Science at TU Darmstadt, the Hessian AI Center, and the Computer Science Department, Prof. Yu’s group members are encouraged and expected to collaborate extensively with preeminent researchers in cognitive science and AI, both nearby and internationally.

All positions will be based at TU Darmstadt, Germany. Starting dates are flexible. Salaries are commensurate with experience and expertise, and highly competitive with respect to U.S. and European standards. The working language in the group and within the larger academic community is English; fluency in German is not required, and the university provides free German lessons for interested scientific staff.

Position

Caspar Schwiedrzik

Ruhr-University Bochum, German Primate Center
Ruhr-University Bochum and the German Primate Center
Dec 5, 2025

We are looking for a highly motivated PhD student to study neural mechanisms of high-dimensional visual category learning. The lab seeks to understand the cortical basis and computational principles of perception and experience-dependent plasticity in the brain. To this end, we use a multimodal approach including fMRI-guided electrophysiological recordings in rodents and non-human primates, and fMRI and ECoG in humans. The PhD student will play a key role in our research efforts in this area. The lab is located at Ruhr-University Bochum and the German Primate Center. At both locations, the lab is embedded in interdisciplinary research centers with international faculty and students pursuing cutting-edge research in cognitive and computational neuroscience. The PhD student will have access to a new imaging center with a dedicated 3T research scanner, electrophysiology, and behavioral setups. The project will be conducted in close collaboration with the labs of Fabian Sinz, Alexander Gail, and Igor Kagan.

The Department of Cognitive Neurobiology of Caspar Schwiedrzik at Ruhr-University Bochum is looking for an outstanding PhD student interested in the neural basis of mental flexibility. The project investigates neural mechanisms of high-dimensional visual category learning, utilizing functional magnetic resonance imaging (fMRI) in combination with computational modelling and behavioral testing in humans. It is funded by an ERC Consolidator Grant (acronym DimLearn; “Flexible Dimensionality of Representational Spaces in Category Learning”). The PhD student’s project will focus on developing new category learning paradigms to investigate the neural basis of flexible multi-task learning in humans using fMRI. In addition, the PhD student will cooperate with other lab members on parallel computational investigations using artificial neural networks, as well as comparative research exploring the same questions in non-human primates.

Seminar · Psychology

Error Consistency between Humans and Machines as a function of presentation duration

Thomas Klein
Eberhard Karls Universität Tübingen
Jun 30, 2024

Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: while DNNs have proven capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task at success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
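
Error consistency, the metric this study relies on, is compact enough to state in code. Below is a minimal sketch (ours, with fabricated response arrays) of the kappa-style statistic introduced by Geirhos and colleagues: the observed trial-by-trial agreement between two observers' correct/incorrect patterns, corrected for the agreement expected from their accuracies alone.

```python
import numpy as np

def error_consistency(correct_a: np.ndarray, correct_b: np.ndarray) -> float:
    """Kappa-style error consistency between two observers.

    correct_a, correct_b: boolean arrays, one entry per trial,
    True where that observer classified the trial correctly.
    """
    # Observed consistency: fraction of trials where both observers
    # are jointly correct or jointly wrong.
    c_obs = np.mean(correct_a == correct_b)
    # Expected consistency if errors were independent, given only
    # each observer's overall accuracy.
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (c_obs - c_exp) / (1 - c_exp)

# Toy example: two independent ~80%-accurate observers give kappa near 0.
rng = np.random.default_rng(0)
human = rng.random(500) < 0.8
dnn = rng.random(500) < 0.8
print(error_consistency(human, dnn))
```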

Seminar · Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 2, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.

Seminar · Neuroscience

Analyzing artificial neural networks to understand the brain

Grace Lindsay
NYU
Dec 15, 2022

In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach, the form the results could take, and how this would benefit neuroscience.

Seminar · Neuroscience · Recording

Bridging the gap between artificial models and cortical circuits

C. B. Currin
IST Austria
Nov 9, 2022

Artificial neural networks simplify complex biological circuits into tractable models for computational exploration and experimentation. However, the simplification of artificial models also undermines their applicability to real brain dynamics. Typical efforts to address this mismatch add complexity to increasingly unwieldy models. Here, we take a different approach; by reducing the complexity of a biological cortical culture, we aim to distil the essential factors of neuronal dynamics and plasticity. We leverage recent advances in growing neurons from human induced pluripotent stem cells (hiPSCs) to analyse ex vivo cortical cultures with only two distinct excitatory and inhibitory neuron populations. Over 6 weeks of development, we record from thousands of neurons using high-density microelectrode arrays (HD-MEAs) that allow access to individual neurons and the broader population dynamics. We compare these dynamics to two-population artificial networks of single-compartment neurons with random sparse connections and show that they produce similar dynamics. Specifically, our model captures the firing and bursting statistics of the cultures. Moreover, tightly integrating models and cultures allows us to evaluate the impact of changing architectures over weeks of development, with and without external stimuli. Broadly, the use of simplified cortical cultures enables us to use the repertoire of theoretical neuroscience techniques established over the past decades on artificial network models. Our approach of deriving neural networks from human cells also allows us, for the first time, to directly compare neural dynamics of disease and control. We found that cultures derived from epilepsy patients, for example, tended to show increasingly more avalanches of synchronous activity over weeks of development, in contrast to the control cultures. Next, we will test possible interventions, in silico and in vitro, in a drive for personalised approaches to medical care. This work starts bridging an important theoretical-experimental neuroscience gap for advancing our understanding of mammalian neuron dynamics.
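
For readers who want a feel for the model class being compared against the cultures, here is a minimal sketch (our simplification; all constants are illustrative) of a sparsely connected two-population network of excitatory and inhibitory leaky integrate-and-fire neurons. Recording `spikes` over time would give the firing and bursting statistics the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)
n_exc, n_inh, p = 400, 100, 0.1            # population sizes, connection probability
n = n_exc + n_inh
w = (rng.random((n, n)) < p).astype(float)  # sparse random connectivity
w[:, :n_exc] *= 0.12                        # excitatory synaptic weight
w[:, n_exc:] *= -0.5                        # inhibitory synaptic weight

v = np.zeros(n)
tau, v_th, dt = 20.0, 1.0, 1.0              # membrane time constant (ms), threshold, step (ms)
spike_counts = np.zeros(n)

for t in range(2000):                       # 2 s of simulated activity
    spikes = v >= v_th
    v[spikes] = 0.0                         # reset after a spike
    spike_counts += spikes
    # Leak + recurrent input + weak noisy external drive; tuning the
    # weights moves the network between asynchronous and bursting regimes.
    v += dt * (-v / tau) + w @ spikes.astype(float) \
         + 0.05 + 0.03 * rng.standard_normal(n)

print("mean rate (Hz):", spike_counts.mean() / 2.0)
```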

Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 8, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
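
As a cartoon of the mechanism described above (our sketch, not the authors' implementation), a BTSP-like update in a rate network can be written as a one-shot, feedback-gated move of the weights toward the current presynaptic pattern:

```python
import numpy as np

def btsp_update(w, pre, feedback, eta=0.8, gate_threshold=0.5):
    """One BTSP-like step. w: (n_post, n_pre) weights; pre: presynaptic
    rates; feedback: top-down signal, one value per postsynaptic neuron."""
    # Top-down feedback above threshold triggers a dendritic plateau...
    plateau = (feedback > gate_threshold).astype(float)[:, None]
    # ...and gated neurons jump toward the presynaptic pattern in a single
    # presentation ("one-shot learning"); ungated weights are untouched.
    return w + eta * plateau * (pre[None, :] - w)

# Toy usage: only the second output neuron receives a plateau signal.
w = np.zeros((3, 5))
pre = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
w = btsp_update(w, pre, feedback=np.array([0.0, 1.0, 0.2]))
print(w)   # only row 1 has moved toward `pre`
```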

Seminar · Neuroscience

Towards multi-system network models for cognitive neuroscience

Robert Guangyu Yang
MIT
Oct 13, 2022

Artificial neural networks can be useful for studying brain functions. In cognitive neuroscience, recurrent neural networks are often used to model cognitive functions. I will first offer my opinion on what is missing in the classical use of recurrent neural networks. Then I will discuss two lines of ongoing efforts in our group to move beyond the classical recurrent neural networks by studying multi-system neural networks (the talk will focus on two-system networks). These are networks that combine modules for several neural systems, such as vision, audition, prefrontal, hippocampal systems. I will showcase how multi-system networks can potentially be constrained by experimental data in fundamental ways and at scale.
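
A minimal sketch of what a two-system network could look like in PyTorch (ours; the module sizes and the choice of a CNN feeding a GRU are illustrative assumptions, not the speaker's architecture):

```python
import torch
import torch.nn as nn

class TwoSystemNet(nn.Module):
    """A small visual module feeding a recurrent "cognitive" module; each
    module could, in principle, be constrained by experimental data separately."""
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(                 # stand-in "visual system"
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, 32))
        self.cognition = nn.GRU(32, 64, batch_first=True)  # stand-in "prefrontal" system
        self.readout = nn.Linear(64, 2)

    def forward(self, frames):                       # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.vision(frames.flatten(0, 1)).view(b, t, -1)
        states, _ = self.cognition(feats)            # recurrent processing over time
        return self.readout(states[:, -1])           # decision from the last state

out = TwoSystemNet()(torch.randn(2, 5, 1, 16, 16))
print(out.shape)   # torch.Size([2, 2])
```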

Seminar · Neuroscience

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll
Stanford University
Aug 24, 2022

Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
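
The dynamical-systems analyses mentioned here typically start by locating fixed and slow points of the trained network. A minimal sketch (ours), following the speed-minimization approach of Sussillo and Barak (2013), with an untrained RNN standing in for a trained multitask network:

```python
import torch

# Find approximate fixed points of an RNN by minimizing the "speed"
# ||F(h) - h||^2 over hidden states h at a fixed input context.
torch.manual_seed(0)
rnn = torch.nn.RNNCell(10, 64)          # stand-in for a trained multitask RNN
x = torch.zeros(1, 10)                  # fixed (here: zero) input context

h = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([h], lr=0.01)
for _ in range(2000):
    opt.zero_grad()
    speed = ((rnn(x, h) - h) ** 2).sum()
    speed.backward()
    opt.step()

# Near-zero speed indicates a fixed point; small speed, a slow point.
# Repeating from many initializations maps out attractors and decision
# boundaries -- the "dynamical motifs" of the abstract.
print(f"final speed: {speed.item():.2e}")
```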

Seminar · Neuroscience · Recording

Network science and network medicine: New strategies for understanding and treating the biological basis of mental ill-health

Petra Vértes
Department of Psychiatry, University of Cambridge
Mar 14, 2022

The last twenty years have witnessed extraordinarily rapid progress in basic neuroscience, including breakthrough technologies such as optogenetics, and the collection of unprecedented amounts of neuroimaging, genetic and other data relevant to neuroscience and mental health. However, the translation of this progress into improved understanding of brain function and dysfunction has been comparatively slow, and as a result the development of therapeutics for mental health has stagnated too. One central challenge has been to extract meaning from these large, complex, multivariate datasets, which requires a shift towards systems-level mathematical and computational approaches. A second challenge has been reconciling different scales of investigation, from genes and molecules to cells, circuits, tissue, whole brain, and ultimately behaviour. In this talk I will describe several strands of work using mathematical, statistical, and bioinformatic methods to bridge these gaps. Topics will include: using artificial neural networks to link the organization of large-scale brain connectivity to cognitive function; using multivariate statistical methods to link disease-related changes in brain networks to the underlying biological processes; and using network-based approaches to move from genetic insights towards drug discovery. Finally, I will discuss how simple organisms such as C. elegans can serve to inspire, test, and validate new methods and insights in network neuroscience.

Seminar · Neuroscience · Recording

Analogical Reasoning with Neuro-Symbolic AI

Hiroshi Honda
Keio University
Feb 23, 2022

Knowledge discovery with computers requires a huge amount of search, and analogical reasoning is effective for making that search efficient. We therefore propose analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI combines Symbolic AI with artificial neural networks; it is comparatively easy for humans to interpret and robust against data ambiguity and errors. We implemented analogical reasoning systems with Neuro-Symbolic AI models and word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.

Seminar · Neuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 23, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in the macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition behavior indeed builds on these low-level stages of visual processing. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1 followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
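
To make the fixed-V1-front-end idea concrete, here is a minimal sketch (ours) of a frozen Gabor filter bank in PyTorch onto which a trainable back-end could be stacked. The real VOneNet front-end additionally includes simple and complex cells, neuronal stochasticity, and biologically sampled filter parameters; everything below is illustrative.

```python
import torch
import torch.nn as nn
import numpy as np

def gabor_kernel(size=15, theta=0.0, freq=0.2, sigma=3.0):
    """A single Gabor filter, the canonical V1 simple-cell model."""
    xs = torch.arange(size, dtype=torch.float32) - size // 2
    y, x = torch.meshgrid(xs, xs, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return torch.exp(-(xr**2 + yr**2) / (2 * sigma**2)) \
        * torch.cos(2 * np.pi * freq * xr)

class V1FrontEnd(nn.Module):
    """Fixed (non-trainable) oriented filter bank plus rectification."""
    def __init__(self, n_orient=8):
        super().__init__()
        kernels = torch.stack([gabor_kernel(theta=i * np.pi / n_orient)
                               for i in range(n_orient)])
        conv = nn.Conv2d(1, n_orient, 15, padding=7, bias=False)
        conv.weight.data = kernels.unsqueeze(1)
        conv.weight.requires_grad_(False)   # frozen, like the V1 stage in VOneNets
        self.conv = conv

    def forward(self, x):
        return torch.relu(self.conv(x))     # simple-cell rectification

# A full model would be: nn.Sequential(V1FrontEnd(), trainable_backend).
```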

Seminar · Neuroscience · Recording

Norse: A library for gradient-based learning in Spiking Neural Networks

Catherine Schuman
Oak Ridge National Laboratory
Nov 2, 2021

Norse aims to exploit the advantages of bio-inspired neural components, which are sparse and event-driven - a fundamental difference from artificial neural networks. Norse expands PyTorch with primitives for bio-inspired neural components, bringing you two advantages: a modern and proven infrastructure based on PyTorch and deep learning-compatible spiking neural network components.
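
A minimal usage sketch (the LIFCell interface is assumed from the Norse documentation; verify against the installed release): a leaky integrate-and-fire cell unrolled over time like an ordinary PyTorch RNN cell, but emitting binary spikes.

```python
import torch
from norse.torch import LIFCell  # Norse's leaky integrate-and-fire primitive

cell = LIFCell()
x = torch.randn(100, 8, 32)          # (time, batch, features) input currents
state = None                          # Norse initializes the neuron state lazily
spikes = []
for t in range(x.shape[0]):
    z, state = cell(x[t], state)     # z: binary spike output at step t
    spikes.append(z)
spikes = torch.stack(spikes)         # sparse, event-driven activity over time
```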

Seminar · Neuroscience · Recording

Event-based Backpropagation for Exact Gradients in Spiking Neural Networks

Christian Pehle
Heidelberg University
Nov 2, 2021

Gradient-based optimization powered by the backpropagation algorithm proved to be the pivotal method in the training of non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise for efficient processing of real-world sensory data by communicating using discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our presented algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp and the Yin-Yang and MNIST datasets with either a spike time-based or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.

Seminar · Neuroscience · Recording

Optimal initialization strategies for Deep Spiking Neural Networks

Julia Gygax
Friedrich Miescher Institute for Biomedical Research (FMI)
Nov 2, 2021

Recent advances in neuromorphic hardware and Surrogate Gradient (SG) learning highlight the potential of Spiking Neural Networks (SNNs) for energy-efficient signal processing and learning. Like in Artificial Neural Networks (ANNs), training performance in SNNs strongly depends on the initialization of synaptic and neuronal parameters. While there are established methods of initializing deep ANNs for high performance, effective strategies for optimal SNN initialization are lacking. Here, we address this gap and propose flexible data-dependent initialization strategies for SNNs.
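
The abstract does not spell out the method, but the flavor of a data-dependent initialization can be sketched as follows (our illustration, not the talk's exact procedure): rescale each layer's weights so that the summed synaptic input on a calibration batch has a target standard deviation, in the spirit of Kaiming/Xavier schemes for ANNs.

```python
import torch

def scale_init(weight: torch.Tensor, calib_batch: torch.Tensor, target_std=1.0):
    """Rescale `weight` in place so that neuron input currents on the
    calibration batch have standard deviation `target_std`."""
    with torch.no_grad():
        currents = calib_batch @ weight.t()        # (batch, n_post) synaptic input
        weight *= target_std / (currents.std() + 1e-12)
    return weight

# Usage: calibrate a hidden layer on a batch of sparse input spikes
# before surrogate-gradient training begins.
w = torch.randn(200, 784) / 28.0
spikes = (torch.rand(64, 784) < 0.1).float()
scale_init(w, spikes)
```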

Seminar · Neuroscience · Recording

On the implicit bias of SGD in deep learning

Amir Globerson
Tel Aviv University
Oct 19, 2021

Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more parameters than training data? I will describe our recent work showing that gradient-descent optimization indeed leads to 'simpler' models, where simplicity is captured by lower weight norm and, in some cases, clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning Boolean functions, and learning convolutional pattern-detection architectures.
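
A toy demonstration of this kind of implicit bias (ours, far simpler than the settings in the talk): on an underdetermined linear regression, gradient descent started from zero converges to the minimum-norm interpolant, even though the loss never mentions the norm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))     # 20 examples, 100 parameters (n < d)
y = rng.standard_normal(20)

w = np.zeros(100)
for _ in range(50000):
    w -= 0.005 * X.T @ (X @ w - y)     # plain gradient descent on squared loss

# The minimum-norm solution among all interpolants, via the pseudoinverse.
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)
print(np.allclose(w, w_min_norm, atol=1e-5))   # True: GD picked the "simple" w
```

The reason is that gradients always lie in the row space of X, so starting from zero the iterates never leave it, and the unique interpolant in that subspace is the minimum-norm one.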

Seminar · Neuroscience

Towards a neurally mechanistic understanding of visual cognition

Kohitij Kar
Massachusetts Institute of Technology
Jun 13, 2021

I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through its visual system and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research that focuses on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of the visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), one that is strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I have been currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience via my collaboration with various computational groups and building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

Seminar · Neuroscience · Recording

Artificial neural networks do not adequately mimic whatever is going on in the real brain

Danko Nikolić
evocenta GmbH
Apr 21, 2021

One may think that Deep Learning technology works in ways similar to the human brain, but this is not really true: our best AI technology still does not mimic the brain sufficiently well to match it in intelligence. I will describe seven ways in which our minds work diametrically opposite to Deep Learning technology.

Seminar · Neuroscience · Recording

Do deep learning latent spaces resemble human brain representations?

Rufin VanRullen
Centre de Recherche Cerveau et Cognition (CERCO)
Mar 11, 2021

In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks including image or speech recognition, natural language processing (NLP), playing Go, chess, poker and video-games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known analogy QUEEN-WOMAN+MAN=KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.
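
The analogy in the abstract is easy to reproduce in miniature. In the sketch below the 3-d vectors are fabricated stand-ins for real word embeddings; the point is only that linear arithmetic plus cosine similarity recovers the analogy.

```python
import numpy as np

# Fabricated toy "embeddings" (real word vectors have hundreds of dimensions).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}
# Latent-space arithmetic: KING - MAN + WOMAN should land near QUEEN.
query = emb["king"] - emb["man"] + emb["woman"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(max(emb, key=lambda w: cosine(emb[w], query)))  # -> "queen"
```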

Seminar · Neuroscience · Recording

A function approximation perspective on neural representations

Cengiz Pehlevan
Harvard University
Dec 1, 2020

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.
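
A minimal sketch of the talk's framing (ours): treat a population's activity pattern as a feature map φ(x), so that learning a linear readout amounts to kernel regression with K(x, x') = φ(x)·φ(x'). Here a Gaussian kernel stands in for whatever feature map the circuit implements.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(-np.pi, np.pi, 30))   # training inputs
ys = np.sin(xs)                                # target function

def k(a, b, ell=0.5):
    """Gaussian kernel: inner products of an implicit feature map."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell**2))

# Kernel ridge regression: fit readout coefficients on 30 samples.
alpha = np.linalg.solve(k(xs, xs) + 1e-6 * np.eye(len(xs)), ys)

x_test = np.linspace(-3, 3, 7)
pred = k(x_test, xs) @ alpha
print(np.round(pred - np.sin(x_test), 3))      # errors near zero
```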

Seminar · Neuroscience

A computational explanation for domain specificity in the human brain

Katharina Dobs
University Giessen
Nov 24, 2020

Many regions of the human brain conduct highly specific functions, such as recognizing faces, understanding language, and thinking about other people’s thoughts. Why might this domain-specific organization be a good design strategy for brains, and what is the origin of domain specificity in the first place? In this talk, I will present recent work testing whether the segregation of face and object perception in human brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition and found that networks were able to perform both tasks well by spontaneously segregating them into distinct pathways. Critically, the networks had neither prior knowledge nor any inductive bias about the tasks. Furthermore, networks optimized jointly on object categorization and on tasks for which the human brain apparently does not develop specialization, such as food or car recognition, showed less task segregation. These results suggest that functional segregation can spontaneously emerge without a task-specific bias, and that the domain-specific organization of the cortex may reflect a computational optimization for the real-world tasks humans solve.

Seminar · Neuroscience · Recording

Back-propagation in spiking neural networks

Timothee Masquelier
Centre national de la recherche scientifique, CNRS | Toulouse
Aug 31, 2020

Back-propagation is a powerful supervised learning algorithm in artificial neural networks, because it solves the credit assignment problem (essentially: what should the hidden layers do?). This algorithm has led to the deep learning revolution. But unfortunately, back-propagation cannot be used directly in spiking neural networks (SNNs): it requires differentiable activation functions, whereas spikes are all-or-none events that cause discontinuities. Here we present two strategies to overcome this problem. The first is to use a so-called 'surrogate gradient', that is, to approximate the derivative of the threshold function with the derivative of a sigmoid. We will present applications of this method to time series processing (audio, internet traffic, EEG). The second concerns a specific class of SNNs, which process static inputs using latency coding with at most one spike per neuron. Using approximations, we derived a latency-based back-propagation rule for this sort of network, called S4NN, and applied it to image classification.
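
The first strategy is standard enough to sketch (ours, not the speaker's code): a Heaviside spike in the forward pass whose backward pass substitutes the derivative of a sigmoid.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a sigmoid-derivative surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                      # all-or-none spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0                                 # surrogate steepness
        sig = torch.sigmoid(beta * v)
        # Pass gradients through the smooth sigmoid slope instead of
        # the true (zero-almost-everywhere) derivative of the step.
        return grad_output * beta * sig * (1 - sig)

spike = SurrogateSpike.apply
# Usage inside an SNN layer: s = spike(membrane_potential - threshold)
```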

Seminar · Neuroscience · Recording

On temporal coding in spiking neural networks with alpha synaptic function

Iulia M. Comsa
Google Research Zürich, Switzerland
Aug 30, 2020

The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
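
A sketch of the alpha synaptic kernel at the core of the model (ours; constants are illustrative): each presynaptic spike contributes a postsynaptic potential that rises and decays, peaking one time constant after the spike, and the readout is simply the first threshold crossing.

```python
import numpy as np

def alpha_psp(t, t_spike, tau=1.0):
    """Alpha-function postsynaptic potential: 0 before the spike,
    peaking (at value 1) at t_spike + tau, then decaying."""
    s = np.maximum(t - t_spike, 0.0)
    return (s / tau) * np.exp(1.0 - s / tau)

# Membrane potential = weighted sum of alpha kernels from input spikes;
# the network's decision is the *first* output neuron to cross threshold.
t = np.linspace(0, 10, 1000)
v = 0.7 * alpha_psp(t, 1.0) + 0.5 * alpha_psp(t, 2.5)
threshold = 0.9
first_cross = t[np.argmax(v > threshold)] if (v > threshold).any() else None
print(first_cross)   # the would-be output spike time (~2.9 here)
```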

Seminar · Neuroscience · Recording

Geometry of Neural Computation Unifies Working Memory and Planning

John D. Murray
Yale University School of Medicine
Jun 17, 2020

Cognitive tasks typically require the integration of working memory, contextual processing, and planning to be carried out in close coordination. However, these computations are typically studied within neuroscience as independent modular processes in the brain. In this talk I will present an alternative view, that neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during performance of delay tasks. Human behaviour in this task is consistent with the contingency representation, and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and show how contingency representation explains salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel and falsifiable predictions for single-unit and population neural recordings.

ePoster

Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning

Spyridon Chavlis, Panayiota Poirazi

Bernstein Conference 2024

ePoster

Integrating Biological and Artificial Neural Networks for Solving Non-Linear Problems

Katarina Vulić, Joël Küchler, Jona Schulz, Haotian Yao, Christian Valmaggia, Sean Weaver, Stephan Ihle, Jens Duru, Janos Vörös

Bernstein Conference 2024

ePoster

Pre-training artificial neural networks with spontaneous retinal activity improves image prediction

Lilly May, Alice Dauphin, Julijana Gjorgjieva

COSYNE 2023

ePoster

Reinforcement learning at multiple timescales in biological and artificial neural networks

Paul Masset, Pablo Tano, Athar Malik, HyungGoo Kim, Pol Bech, Alexandre Pouget, Naoshige Uchida

COSYNE 2023

ePoster

Inter-individual Variability in Primate Inferior Temporal Cortex Representations: Insights from Macaque Neural Responses and Artificial Neural Networks

Kohitij Kar, James DiCarlo

COSYNE 2025

ePoster

Mapping social perception to social behavior using artificial neural networks

Nate Dolensek, Doris Tsao, Shi Chen

COSYNE 2025

ePoster

Probing Motion-Form Interactions in the Macaque Inferior Temporal Cortex and Artificial Neural Networks for Complex Scene Understanding

Jean de Dieu Uwisengeyimana, Kohitij Kar

COSYNE 2025

ePoster

Review of applications of graph theory and network neuroscience in the development of artificial neural networks

Jan Bendyk

Neuromatch 5