
Algorithms


Discover seminars, jobs, and research tagged with algorithms across World Wide.
70 curated items: 58 seminars, 9 positions, 3 ePosters
Position

Prof Saket Navlakha

Cold Spring Harbor Laboratory
Cold Spring Harbor, NY USA
Dec 5, 2025

We are looking for post-docs broadly interested in studying biological information processing from an algorithmic perspective. The goal is to discover new ideas for computation by studying problem-solving strategies used in nature, and to ground these ideas by fostering deep collaborations with experimental biologists. Most recently, we have been interested in neural circuit computation, but new areas are also welcome, including plant biology and genomics.

Position · Computer Science

N/A

HSE University
Moscow, Russia
Dec 5, 2025

The Faculty of Computer Science of HSE University invites applications for full-time, tenure-track Assistant Professor positions in all areas of computer science, including but not limited to artificial intelligence, machine learning, computer vision, programming language theory, software engineering, systems programming, algorithms, computational complexity, distributed and parallel computation, bioinformatics, human-computer interaction, and robotics. The successful candidate is expected to conduct high-quality research publishable in reputable peer-reviewed journals, with research support provided by the University.

Position · Computer Science

Nathalie Japkowicz

American University
American University
Dec 5, 2025

The Department of Computer Science in the College of Arts and Sciences at American University invites applications for a full-time, open-rank, tenure-line position beginning August 1, 2024. Applicants should have a PhD, or an anticipated PhD completion by August 2024, in Computer Science or related fields. Depending on experience and qualifications, the appointee to this position may be recommended for tenure at the time of hiring. Candidates can apply at the assistant, associate, or full professor level, and we welcome applications from both academic and nonacademic organizations. We are looking for candidates who are excited at the prospect of joining a growing department where they will be able to make their mark. Preference will be given to candidates with a record of high-quality scholarship. For candidates applying at the associate or full professor level, a record of external funding is also expected. The committee will consider candidates engaged in high-quality research in any area of Computer Science related to Artificial Intelligence (e.g., Natural Language Processing, Machine Learning, Network Analysis, Information Visualization), Theoretical Computer Science (Computational Theory, Graph Theory, Algorithms), Cybersecurity, and other traditional areas of Computer Science (e.g., Software Engineering, Database Systems, Graphics). The University has areas of strategic focus for research in Data Science and Analytics, Health, Security, Social Equity, and Sustainability. Applicants from historically underrepresented minority and identity groups are strongly encouraged to apply. In addition to scholarship and teaching, responsibilities will include participation in department, school, and university service activities. Attention to Diversity, Equity and Inclusion (DEI) in all activities within the academic environment is expected.

Position

Dr. Robert Legenstein

Graz University of Technology
Graz University of Technology, Austria
Dec 5, 2025

The successful candidate will work on learning algorithms for spiking neural networks within the international consortium of the project 'Scalable Learning Neuromorphics'. In this project we will develop learning algorithms for spiking neural networks targeting memristive hardware implementations. The project aims to develop scalable Spiking Neural Networks (SNNs) by leveraging the integration of 3D memristors, thereby overcoming limitations of conventional Artificial Neural Networks (ANNs). Positioned at the intersection of artificial intelligence and brain-inspired computing, the initiative focuses on innovative SNN training methods, optimizing recurrent connections, and designing dedicated hardware accelerators. These advancements will uniquely contribute to scalability and energy efficiency. The endeavor addresses key challenges in event-based processing and temporal coding, aiming for substantial performance gains in both software and hardware implementations of artificial intelligence systems. Expected research outputs include novel algorithms, optimization methods, and memristor-based hardware architectures, with broad applications and potential for technology transfer.

Position

Prof. Massimiliano Pontil

IIT
IIT
Dec 5, 2025

We are seeking a talented and motivated Postdoc to join the Computational Statistics and Machine Learning (CSML) Research Units at IIT, led by Prof. Massimiliano Pontil. The successful candidate will be engaged in designing novel learning algorithms for numerical simulations of physical systems, with a focus on machine learning for dynamical systems. CSML’s core focus is on ML theory and algorithms, while significant multidisciplinary interactions with other IIT groups apply our research outputs in areas ranging from Atomistic Simulations to Neuroscience and Robotics. We have also recently started an international collaboration on Climate Modelling. The group hosts applied mathematicians, computer scientists, physicists, and computer engineers working together on theory, algorithms, and applications. ML techniques, coupled with numerical simulations of physical systems, have the potential to revolutionize the way in which science is conducted. Meeting this challenge requires a multidisciplinary approach in which experts from different disciplines work together.

Position · Machine Learning

Georgios Exarchakis

University of Bath
University of Bath
Dec 5, 2025

The University of Bath invites applications for a fully-funded PhD position in Machine Learning, as part of the prestigious URSA competition. This project focuses on developing interpretable machine learning methods for high-dimensional data, with an emphasis on recognizing symmetries and incorporating them into efficient, flexible algorithms. This PhD position offers the opportunity to work within a leading research environment, using state-of-the-art tools such as TensorFlow, PyTorch, and Scikit-Learn. The research outcomes have potential applications in diverse fields, and students are encouraged to bring creative and interdisciplinary approaches to problem-solving.

Position

Jörn Anemüller

Department of Medical Physics and Acoustics, University of Oldenburg
Oldenburg University
Dec 5, 2025

We are looking to fill a fully funded 3-year Ph.D. student position in the field of deep learning-based signal processing algorithms for speech enhancement and computational audition. The position is funded by the German Research Foundation (DFG) within the Collaborative Research Centre SFB 1330 “Hearing Acoustics” at the Department of Medical Physics and Acoustics, University of Oldenburg. Within project B3 of the research centre, the Computational Audition Group develops machine learning algorithms for signal processing of speech and audio data.

Seminar · Neuroscience · Recording

Time perception in film viewing as a function of film editing

Lydia Liapi
Panteion University
Mar 26, 2024

Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight on how empirically developed techniques influence viewers' time percept. Here, we investigated how such time manipulation techniques of an action affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation and asked participants to make verbal estimations of the presented scenes' perceived durations. Analysis of data revealed that the duration of expanded scenes was significantly overestimated as compared to that of compressed and real-time scenes, as was the duration of real-time scenes as compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found interactions on time estimates of scene type and editing technique as a function of the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration. 
Our findings are discussed by considering current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.

Seminar · Neuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 5, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals.
The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
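The progressive "orthogonalization" described above can be illustrated with a toy decorrelation measure: cosine similarity between population activity vectors evoked by the two tracks, which starts high while activity is shared and falls as track-specific components come to dominate. This is a minimal sketch, not the authors' analysis; the dimensions, mixing parameter `alpha`, and random "activity" are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 500

def cosine(u, v):
    """Cosine similarity between two population activity vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "activity": one component shared across tracks, one private to each.
shared = rng.standard_normal(n_neurons)
private_a = rng.standard_normal(n_neurons)
private_b = rng.standard_normal(n_neurons)

def population_vector(private, alpha):
    """Mix shared and track-specific activity; alpha grows with learning."""
    return (1 - alpha) * shared + alpha * private

# Early in learning the two tracks evoke nearly identical activity;
# after learning, the representations have largely decorrelated.
early = cosine(population_vector(private_a, 0.1), population_vector(private_b, 0.1))
late = cosine(population_vector(private_a, 0.9), population_vector(private_b, 0.9))
assert early > late
```

Recovering near-orthogonal (near-zero cosine) responses from initially overlapping ones is the signature the abstract refers to as an "orthogonalized" representation.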

Seminar · Neuroscience

Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory

James Whittington
Stanford University / University of Oxford
Feb 13, 2024

Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.

Seminar · Neuroscience

Richly structured reward predictions in dopaminergic learning circuits

Angela J. Langdon
National Institute of Mental Health at National Institutes of Health (NIH)
May 16, 2023

Theories from reinforcement learning have been highly influential for interpreting neural activity in the biological circuits critical for animal and human learning. Central among these is the identification of phasic activity in dopamine neurons as a reward prediction error signal that drives learning in basal ganglia and prefrontal circuits. However, recent findings suggest that dopaminergic prediction error signals have access to complex, structured reward predictions and are sensitive to more properties of outcomes than learning theories with simple scalar value predictions might suggest. Here, I will present recent work in which we probed the identity-specific structure of reward prediction errors in an odor-guided choice task and found evidence for multiple predictive “threads” that segregate reward predictions, and reward prediction errors, according to the specific sensory features of anticipated outcomes. Our results point to an expanded class of neural reinforcement learning algorithms in which biological agents learn rich associative structure from their environment and leverage it to build reward predictions that include information about the specific, and perhaps idiosyncratic, features of available outcomes, using these to guide behavior in even quite simple reward learning tasks.

Seminar · Psychology

How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience

Nicolas Langer
University of Zurich
May 16, 2023

This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the fields of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize semantic coherence in patients with psychosis, and other applications to diagnose and treat psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.

Seminar · Neuroscience · Recording

Central place foraging: how insects anchor spatial information

Barbara Webb
University of Edinburgh
Mar 13, 2023

Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
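The path-integration bookkeeping described above can be sketched in a few lines: outbound displacement vectors are summed, and the home vector is simply the negative of that sum. This is an illustrative toy, not a model of the insect circuitry:

```python
import math

def path_integrate(steps):
    """Sum (heading_radians, distance) displacements and return the
    homing vector: distance and direction from current position to the nest."""
    x = y = 0.0
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return math.hypot(x, y), math.atan2(-y, -x)  # vector pointing home

# Outbound path: 3 m east (heading 0), then 4 m north (heading pi/2).
home_dist, home_dir = path_integrate([(0.0, 3.0), (math.pi / 2, 4.0)])
# home_dist is (up to floating-point error) 5.0 m: the 3-4-5 hypotenuse.
```

Storing one such vector per salient location, all in the same nest-centered coordinate system, is enough to support the multi-goal behaviour described above without any topological map.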

Seminar · Neuroscience

Searching for the algorithms of iterative motor learning involving the cerebellum

Boris Barbour
Institut de Biologie de l’Ecole Normale Supérieure (IBENS), Paris, France
Jan 10, 2023

Seminar · Neuroscience

Maths, AI and Neuroscience Meeting Stockholm

Roshan Cools, Alain Destexhe, Upi Bhalla, Vijay Balasubramnian, Dinos Meletis, Richard Naud
Dec 14, 2022

To understand brain function and to develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.

Seminar · Neuroscience

Mapping learning and decision-making algorithms onto brain circuitry

Ilana Witten
Princeton
Nov 17, 2022

In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.

Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 9, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks that can easily be varied in sequence length, like the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexity that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable for the first time training large-scale, complex spiking neural network architectures online and on long temporal sequences.
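A rough sketch of the gating idea, assuming a leaky integrate-and-fire neuron whose decay factor is made input-dependent (the actual LTC-SN equations may differ; all constants, weight shapes, and inputs here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltc_lif_step(v, x, w_in, w_tau, threshold=1.0):
    """One step of a toy liquid-time-constant LIF neuron: the membrane
    decay factor, fixed in a plain LIF neuron, is gated by the input,
    loosely analogous to an LSTM forget gate."""
    decay = sigmoid(w_tau @ x)      # input-dependent "time constant"
    v = decay * v + w_in @ x        # leaky integration of input current
    spike = float(v >= threshold)
    v = v * (1.0 - spike)           # reset to zero on spike
    return v, spike

# Drive one such neuron with random input for 100 timesteps.
n_in = 8
w_in = 0.5 * rng.standard_normal(n_in)
w_tau = 0.5 * rng.standard_normal(n_in)
v, spikes = 0.0, []
for _ in range(100):
    v, s = ltc_lif_step(v, rng.standard_normal(n_in), w_in, w_tau)
    spikes.append(s)
```

Because `w_tau` is learned alongside `w_in`, the neuron can stretch or shrink its own memory horizon per input, which is the LSTM-like functionality the abstract refers to.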

Seminar · Neuroscience · Recording

A multi-level account of hippocampal function in concept learning from behavior to neurons

Rob Mok
University of Cambridge
Nov 1, 2022

A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we don't have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule to coordinate units that collectively form higher-level mental constructs ("virtual clusters"), neural representations (concept, place and grid cell-like assemblies), and parallels recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, but yet are based on very simple heuristics to choose teaching material to provide to their users. In this presentation, we will discuss the possibility to develop more advanced artificial teachers, which would be based on modeling of the learner’s inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as a part of the learner’s model.

Seminar · Open Source · Recording

Computational Imaging: Augmenting Optics with Algorithms for Biomedical Microscopy and Neural Imaging

Lei Tian
Department of Electrical and Computer Engineering, Boston University
Aug 21, 2022

Computational imaging seeks to achieve novel capabilities and overcome conventional limitations by combining optics and algorithms. In this seminar, I will discuss two computational imaging technologies developed in Boston University Computational Imaging Systems lab, including Intensity Diffraction Tomography and Computational Miniature Mesoscope. In our intensity diffraction tomography system, we demonstrate 3D quantitative phase imaging on a simple LED array microscope. We develop both single-scattering and multiple-scattering models to image complex biological samples. In our Computational Miniature Mesoscope, we demonstrate single-shot 3D high-resolution fluorescence imaging across a wide field-of-view in a miniaturized platform. We develop methods to characterize 3D spatially varying aberrations and physical simulator-based deep learning strategies to achieve fast and accurate reconstructions. Broadly, I will discuss how synergies between novel optical instrumentation, physical modeling, and model- and learning-based computational algorithms can push the limits in biomedical microscopy and neural imaging.

Seminar · Neuroscience · Recording

Probabilistic computation in natural vision

Ruben Coen-Cagli
Albert Einstein College of Medicine
Mar 29, 2022

A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.

Seminar · Neuroscience

Maths, AI and Neuroscience meeting

Tim Vogels, Mickey London, Anita Disney, Yonina Eldar, Partha Mitra, Yi Ma
Dec 12, 2021

To understand brain function and to develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. This meeting brings together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; explainable AI; how AI can help neuroscience; and the extent to which the brain may use algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Neural Representation: Bridging Neuroscience and Philosophy

Andrew Richmond (he/him)
Columbia University
Dec 1, 2021

We understand the brain in representational terms. E.g., we understand spatial navigation by appealing to the spatial properties that hippocampal cells represent, and the operations hippocampal circuits perform on those representations (Moser et al., 2008). Philosophers have been concerned with the nature of representation, and recently neuroscientists have entered the debate, focusing specifically on neural representations (Baker & Lansdell, n.d.; Egan, 2019; Piccinini & Shagrir, 2014; Poldrack, 2020; Shagrir, 2001). We want to know what representations are, how to discover them in the brain, and why they matter so much for our understanding of the brain. Those questions are framed in a traditional philosophical way: we start with explanations that use representational notions, and to more deeply understand those explanations we ask, what are representations — what is the definition of representation? What is it for some bit of neural activity to be a representation? I argue that there is an alternative, and much more fruitful, approach. Rather than asking what representations are, we should ask what the use of representational *notions* allows us to do in neuroscience — what thinking in representational terms helps scientists do or explain. I argue that this framing offers more fruitful ground for interdisciplinary collaboration by distinguishing the philosophical concerns that have a place in neuroscience from those that don’t (namely the definitional or metaphysical questions about representation). And I argue for a particular view of representational notions: they allow us to impose the structure of one domain onto another as a model of its causal structure. So, e.g., thinking about the hippocampus as representing spatial properties is a way of taking structures in those spatial properties, and projecting those structures (and algorithms that would implement them) onto the brain as models of its causal structure.

SeminarNeuroscienceRecording

NMC4 Short Talk: What can deep reinforcement learning tell us about human motor learning and vice versa?

Michele Garibbo
University of Bristol
Nov 30, 2021

In the deep reinforcement learning (RL) community, motor control problems are usually approached from a reward-based learning perspective. However, humans are often believed to learn motor control through directed error-based learning. Within this learning setting, the control system is assumed to have access to exact error signals and their gradients with respect to the control signal. This is unlike reward-based learning, in which errors are assumed to be unsigned, encoding relative successes and failures. Here, we examine the relation between these two approaches, reward- and error-based learning, in the context of ballistic arm reaches. To do so, we test canonical (deep) RL algorithms on a well-known sensorimotor perturbation in neuroscience: mirror-reversal of visual feedback during arm reaching. This test leads us to propose a potentially novel RL algorithm, denoted model-based deterministic policy gradient (MB-DPG). This RL algorithm draws inspiration from error-based learning to qualitatively reproduce human reaching performance under mirror-reversal. Next, we show MB-DPG outperforms the other canonical (deep) RL algorithms on single- and multi-target ballistic reaching tasks, based on a biomechanical model of the human arm. Finally, we propose MB-DPG may provide an efficient computational framework to help explain error-based learning in neuroscience.
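The error-based ingredient of a model-based deterministic policy gradient can be illustrated with a toy linear reach. This is only a minimal sketch of the idea, not the MB-DPG algorithm from the talk: assuming a known, differentiable forward model of the perturbation (here a mirror matrix M), the signed visual error can be propagated back to a linear policy. The policy, learning rate, and reach statistics are all illustrative assumptions.

```python
import numpy as np

# Sketch of error-based (model-based) policy adaptation under mirror-reversal.
# We assume the forward model M (command -> cursor) is known and differentiable,
# so the signed error can be back-propagated to the policy weights W.

rng = np.random.default_rng(0)

M = np.array([[-1.0, 0.0],   # mirror-reversal: x-axis of visual feedback flipped
              [0.0, 1.0]])
W = np.eye(2)                # linear policy, pre-adapted to veridical feedback
lr = 0.05

for _ in range(2000):
    target = rng.standard_normal(2)          # reach target on this trial
    u = W @ target                           # motor command from the policy
    x = M @ u                                # observed cursor position
    error = x - target                       # signed visual error
    # gradient of 0.5*||error||^2 w.r.t. W, computed through the known model M
    W -= lr * M.T @ np.outer(error, target)

print(np.round(M @ W, 3))                    # ~ identity once adapted
```

Because the update uses the signed error and the model's Jacobian, the policy converges to invert the mirror (W ≈ M⁻¹); a reward-based learner would instead have to discover this from unsigned success signals.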

SeminarNeuroscienceRecording

Efficient GPU training of SNNs using approximate RTRL

James Knight
University of Sussex
Nov 2, 2021

Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries — where techniques such as parallel batch training minimise latency and maximise GPU occupancy — as well as our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4] — which approximates RTRL — to train SNN classifiers on the Spiking Heidelberg Digits and the Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to those trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measure execution times and find that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after less than 685 timesteps, and that much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4⨉ and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference with a recurrent SNN in GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].
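The reason eProp-style rules avoid the backward pass of BPTT can be shown in a few lines. The sketch below is a simplification, not GeNN's implementation: for a leaky membrane v[t] = α·v[t−1] + w·x[t], the gradient dv[t]/dw can be carried forward in time as an eligibility trace, updated with purely local quantities.

```python
import numpy as np

# Forward-running eligibility trace at the heart of eProp-style rules
# (a minimal sketch, not the full algorithm from the talk).
# For v[t] = alpha*v[t-1] + w*x[t], dv[t]/dw obeys the local recurrence
# eps[t] = alpha*eps[t-1] + x[t], so no backward pass through time is needed.

rng = np.random.default_rng(1)
alpha = 0.9
T = 50
x = rng.standard_normal(T)

eps = np.zeros(T)
trace = 0.0
for t in range(T):
    trace = alpha * trace + x[t]    # local, online update of the trace
    eps[t] = trace

# Closed form the recurrence implements: eps[t] = sum_s alpha**(t-s) * x[s]
closed = np.array([sum(alpha**(t - s) * x[s] for s in range(t + 1))
                   for t in range(T)])
print(np.max(np.abs(eps - closed)))   # ~ 0
```

In eProp the weight update then combines this trace with a per-timestep learning signal L[t] and a surrogate spike derivative ψ[t], roughly Δw = Σ_t L[t]·ψ[t]·eps[t], which is what makes the rule local in time and suited to GPU batching.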

SeminarNeuroscienceRecording

Rastermap: Extracting structure from high dimensional neural data

Carsen Stringer
HHMI, Janelia Research Campus
Oct 26, 2021

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons, and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. We found within these datasets unique subpopulations of neurons encoding abstract properties of the environment.
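The core idea, assigning each neuron a position in an embedding so that correlated neurons end up adjacent in the raster, can be illustrated with a toy spectral ordering. This is not the published Rastermap algorithm, only a simplified stand-in: two interleaved populations of correlated neurons are sorted by the Fiedler vector of a correlation-based graph Laplacian, making each population contiguous.

```python
import numpy as np

# Toy illustration of embedding-based raster sorting (NOT the actual
# Rastermap algorithm): order neurons by the Fiedler vector of a
# correlation-based graph Laplacian so correlated neurons become adjacent.

rng = np.random.default_rng(2)
T = 500
s1, s2 = rng.standard_normal(T), rng.standard_normal(T)

neurons, labels = [], []
for i in range(20):
    latent = s1 if i % 2 == 0 else s2       # two interleaved populations
    neurons.append(latent + 0.1 * rng.standard_normal(T))
    labels.append(i % 2)
X = np.array(neurons)
labels = np.array(labels)

C = np.corrcoef(X)
A = np.clip(C, 0.0, None)                   # non-negative affinity matrix
L = np.diag(A.sum(1)) - A                   # graph Laplacian
vals, vecs = np.linalg.eigh(L)
order = np.argsort(vecs[:, 1])              # sort by the Fiedler vector

print(labels[order])                        # the two populations come out contiguous
```

Real Rastermap handles far subtler, higher-dimensional structure than this two-cluster toy, but the visualization principle is the same: a 1-D embedding position per neuron.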

SeminarNeuroscienceRecording

Do you hear what I see: Auditory motion processing in blind individuals

Ione Fine
University of Washington
Oct 6, 2021

Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional role. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by it being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals - relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.

SeminarNeuroscience

“Wasn’t there food around here?”: An Agent-based Model for Local Search in Drosophila

Amir Behbahani
California Institute of Technology
Sep 19, 2021

The ability to keep track of one’s location in space is a critical behavior for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies’ decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, and flies can remember the location of a former food site even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
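A minimal agent-based caricature of the behaviour described above can be written down directly. This is a hedged sketch, not one of the state-transition models from the study: the fly combines odometry (steps taken) with its heading to integrate its position relative to the fictive food site on the ring, and reverses direction once the integrator exceeds a search bound, producing a search centred on the remembered food location. All numbers are illustrative.

```python
import numpy as np

# Toy path-integration agent on a ring channel: odometry + heading feed a
# signed integrator of distance from the fictive food site; the agent turns
# around whenever the integrator exceeds its search bound.

circumference = 100.0
food = 0.0
pos = food
heading = +1          # +1 / -1: the two walking directions on the ring
integrator = 0.0      # path integrator: signed distance from the food site
bound = 10.0          # how far the fly runs before turning back toward food
step = 0.5

trajectory = []
for _ in range(400):
    pos = (pos + heading * step) % circumference
    integrator += heading * step          # odometry combined with heading
    if abs(integrator) >= bound:          # too far from remembered food: turn
        heading = -heading
    trajectory.append(pos)

# The search oscillates around the (now absent) food location.
dists = [min(p, circumference - p) for p in trajectory]
print(max(dists))     # never exceeds the search bound
```

Re-zeroing `integrator` whenever food is encountered, or setting it to the midpoint when two sites are rewarded, reproduces the single- versus multi-site behaviour described in the abstract.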

SeminarOpen SourceRecording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center of Neurodegenerative Diseases (DZNE) Bonn.
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detection of those structures requires complex decision making which is often impossible with current image analysis software, and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures from large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also not much experience in the community of how much training data has to be collected to obtain a reasonable prediction result with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim to integrate well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC for a variety of tasks. From our experience, a surprisingly low amount of sparse label data is needed to train a sufficiently working classifier for typical bioimaging applications. Not least because of this, YAPiC has become the “standard weapon” for our core facility to detect objects in hard-to-segment images. We would like to present some use cases like cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, or actin filament detection in transmission electron microscopy.

SeminarNeuroscienceRecording

A role for dopamine in value-free learning

Luke Coddington
Dudman lab, HHMI Janelia
Jul 13, 2021

Recent success in training artificial agents and robots derives from a combination of direct learning of behavioral policies and indirect learning via value functions. Policy learning and value learning employ distinct algorithms that depend upon evaluation of errors in performance and reward prediction errors, respectively. In mammals, behavioral learning and the role of mesolimbic dopamine signaling have been extensively evaluated with respect to reward prediction errors; but there has been little consideration of how direct policy learning might inform our understanding. I’ll discuss our recent work on classical conditioning in naïve mice (https://www.biorxiv.org/content/10.1101/2021.05.31.446464v1) that provides multiple lines of evidence that phasic dopamine signaling regulates policy learning from performance errors in addition to its well-known roles in value learning. This work points towards new opportunities for unraveling the mechanisms of basal ganglia control over behavior under both adaptive and maladaptive learning conditions.

SeminarNeuroscienceRecording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jun 30, 2021

There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.

SeminarNeuroscience

Bridging brain and cognition: A multilayer network analysis of brain structural covariance and general intelligence in a developmental sample of struggling learners

Ivan Simpson-Kent
University of Cambridge, MRC CBU
Jun 1, 2021

Network analytic methods that are ubiquitous in other areas, such as systems neuroscience, have recently been used to test network theories in psychology, including intelligence research. The network or mutualism theory of intelligence proposes that the statistical associations among cognitive abilities (e.g. specific abilities such as vocabulary or memory) stem from causal relations among them throughout development. In this study, we used network models (specifically LASSO) of cognitive abilities and brain structural covariance (grey and white matter) to simultaneously model brain-behavior relationships essential for general intelligence in a large (behavioral, N=805; cortical volume, N=246; fractional anisotropy, N=165), developmental (ages 5-18) cohort of struggling learners (CALM). We found that mostly positive, small partial correlations pervade both our cognitive and neural networks. Moreover, calculating node centrality (absolute strength and bridge strength) and using two separate community detection algorithms (Walktrap and Clique Percolation), we found convergent evidence that subsets of both cognitive and neural nodes play an intermediary role between brain and behavior.  We discuss implications and possible avenues for future studies.
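The partial-correlation networks at the centre of this analysis are easy to sketch. The example below drops the LASSO penalty for brevity (the study uses regularized estimation): partial correlations are read off the inverse covariance (precision) matrix, and for a simulated chain x → y → z the marginal x–z correlation is large while the partial correlation, which controls for y, vanishes.

```python
import numpy as np

# Partial correlations from the precision matrix (unregularized sketch;
# the study itself uses LASSO-regularized network models).
# Chain structure: x -> y -> z, so x and z are conditionally independent
# given y, and their partial correlation should be near zero.

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)
z = y + 0.5 * rng.standard_normal(n)
data = np.column_stack([x, y, z])

P = np.linalg.inv(np.cov(data, rowvar=False))   # precision matrix
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)                     # partial correlation matrix
np.fill_diagonal(pcorr, 1.0)

print(round(float(np.corrcoef(x, z)[0, 1]), 2))  # marginal x-z: large
print(round(float(pcorr[0, 2]), 2))              # partial x-z given y: ~ 0
```

This is why the networks in the study are built from partial rather than marginal correlations: edges then reflect direct associations after conditioning on all other nodes.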

SeminarNeuroscienceRecording

A theory for Hebbian learning in recurrent E-I networks

Samuel Eckmann
Gjorgjieva lab, Max Planck Institute for Brain Research, Frankfurt, Germany
May 19, 2021

The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic, and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. While the connectivity is derived from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework of plasticity in recurrent networks, which has previously only been studied numerically, and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.

SeminarNeuroscienceRecording

Choice engineering and the modeling of operant learning

Yonatan Loewenstein
The Hebrew University
Apr 6, 2021

Organisms modify their behavior in response to its consequences, a phenomenon referred to as operant learning. Contemporary modeling of this learning behavior is based on reinforcement learning algorithms. I will discuss some of the challenges that these models face, and propose a new approach to model selection that is based on testing their ability to engineer behavior. Finally, I will present the results of The Choice Engineering Competition – an academic competition that compared the efficacies of qualitative and quantitative models of operant learning in shaping behavior.

SeminarPhysics of LifeRecording

Anatomical decision-making by cellular collectives: bioelectrical pattern memories, regeneration, and synthetic living organisms

Michael Levin
Tufts University
Mar 25, 2021

A key question for basic biology and regenerative medicine concerns the way in which evolution exploits physics toward adaptive form and function. While genomes specify the molecular hardware of cells, what algorithms enable cellular collectives to reliably build specific, complex, target morphologies? Our lab studies the way in which all cells, not just neurons, communicate as electrical networks that enable scaling of single-cell properties into collective intelligences that solve problems in anatomical feature space. By learning to read, interpret, and write bioelectrical information in vivo, we have identified some novel controls of growth and form that enable incredible plasticity and robustness in anatomical homeostasis. In this talk, I will describe the fundamental knowledge gaps with respect to anatomical plasticity and pattern control beyond emergence, and discuss our efforts to understand large-scale morphological control circuits. I will show examples in embryogenesis, regeneration, cancer, and synthetic living machines. I will also discuss the implications of this work for not only regenerative medicine, but also for fundamental understanding of the origin of bodyplans and the relationship between genomes and functional anatomy.

SeminarNeuroscienceRecording

Data-driven Artificial Social Intelligence: From Social Appropriateness to Fairness

Hatice Gunes
Department of Computer Science and Technology, University of Cambridge
Mar 15, 2021

Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia provide us a positive outlook, however, the artificial social and emotional intelligence of the current technology is still limited. My lab’s research has been pushing the state of the art in a wide spectrum of research topics in this area, including the design and creation of new datasets; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo, dyadic and group settings; designing longitudinal human-robot interaction studies for wellbeing; and investigating how to mitigate the bias that creeps into these systems. In this talk, I will present some of my research team’s explorations in these areas including social appropriateness of robot actions, virtual reality based cognitive training with affective adaptation, and bias and fairness in data-driven emotionally intelligent systems.

SeminarNeuroscience

Exploration beyond bandits

Eric Schulz
Max Planck
Jan 26, 2021

Machine learning researchers frequently focus on human-level performance, in particular in games. However, in these applications human (or human-level) behavior is commonly reduced to a simple dot on a performance graph. Cognitive science, in particular theories of learning and decision making, could hold the key to unlock what is behind this dot, thereby gaining further insights into human cognition and the design principles of intelligent algorithms. However, cognitive experiments commonly focus on relatively simple paradigms such as restricted multi-armed bandit tasks. In this talk, I will argue that cognitive science can turn its lens to more complex scenarios to study exploration in real-world domains and online games. I will show in one large data set of online food delivery orders and across many online games how current cognitive theories of learning and exploration can describe human behavior in the wild, but also how these tasks demand us to expand our theoretical toolkit to describe a rich repertoire of real-world behaviors such as empowerment and fun.
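The "restricted multi-armed bandit tasks" that the talk contrasts with richer real-world settings fit in a few lines of code. Below is a sketch of the classic UCB1 strategy as a representative exploration baseline; the arm probabilities and horizon are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# UCB1 on a three-armed Bernoulli bandit: the kind of restricted paradigm
# that cognitive experiments on exploration traditionally use.

rng = np.random.default_rng(4)
means = np.array([0.2, 0.5, 0.8])          # Bernoulli reward probabilities
counts = np.ones(3)                        # play each arm once to initialise
values = (rng.random(3) < means).astype(float)   # running reward sums

for t in range(3, 2000):
    # optimism in the face of uncertainty: mean estimate + exploration bonus
    ucb = values / counts + np.sqrt(2 * np.log(t) / counts)
    arm = int(np.argmax(ucb))
    reward = float(rng.random() < means[arm])
    counts[arm] += 1
    values[arm] += reward

print(counts / counts.sum())   # the best arm comes to dominate
```

Human exploration in open-ended games and real-world choices, the talk argues, goes well beyond what this value-plus-bonus scheme captures, e.g. empowerment- or fun-driven behaviour.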

SeminarNeuroscienceRecording

Machine Learning as a tool for positive impact : case studies from climate change

Alexandra (Sasha) Luccioni
University of Montreal and Mila (Quebec Institute for Learning Algorithms)
Dec 9, 2020

Climate change is one of our generation's greatest challenges, with increasingly severe consequences on global ecosystems and populations. Machine Learning has the potential to address many important challenges in climate change, from both mitigation (reducing its extent) and adaptation (preparing for unavoidable consequences) aspects. To present the extent of these opportunities, I will describe some of the projects that I am involved in, spanning from generative models to computer vision and natural language processing. There are many opportunities for fundamental innovation in this field, advancing the state-of-the-art in Machine Learning while ensuring that this fundamental progress translates into positive real-world impact.

SeminarNeuroscienceRecording

An inference perspective on meta-learning

Kate Rakelly
University of California Berkeley
Nov 25, 2020

While meta-learning algorithms are often viewed as algorithms that learn to learn, an alternative viewpoint frames meta-learning as inferring a hidden task variable from experience consisting of observations and rewards. From this perspective, learning to learn is learning to infer. This viewpoint can be useful in solving problems in meta-RL, which I’ll demonstrate through two examples: (1) enabling off-policy meta-learning, and (2) performing efficient meta-RL from image observations. I’ll also discuss how this perspective leads to an algorithm for few-shot image segmentation.

SeminarNeuroscienceRecording

An Algorithmic Barrier to Neural Circuit Understanding

Venkat Ramaswamy
Birla Institute of Technology & Science
Oct 1, 2020

Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand precisely how neural circuit computations mechanistically cause behavior. Establishing this type of causal understanding will require multiple perturbational (e.g. optogenetic) experiments. It has been unclear exactly how many such experiments are needed and how this number scales with the size of the nervous system in question. Here, using techniques from Theoretical Computer Science, we prove that establishing the most extensive notions of understanding requires exponentially many experiments in the number of neurons, in many cases, unless a widely-posited hypothesis about computation is false (i.e. unless P = NP). Furthermore, using data and estimates, we demonstrate that the feasible experimental regime is typically one where the number of experiments performable scales sub-linearly in the number of neurons in the nervous system. This remarkable gulf between the worst-case and the feasible suggests an algorithmic barrier to such an understanding. Determining which notions of understanding are algorithmically tractable to establish in what contexts thus becomes an important new direction for investigation. TL;DR: Non-existence of tractable algorithms for neural circuit interrogation could pose a barrier to comprehensively understanding how neural circuits cause behavior. Preprint: https://biorxiv.org/content/10.1101/639724v1/…

SeminarNeuroscience

Workshop on "Spiking neural networks as universal function approximators: Learning algorithms and applications"

Sander Bohte, Iulia M. Comsa, Franz Scherr, Emre Neftci, Timothee Masquelier, Claudia Clopath, Richard Naud, Julian Goeltz
CWI, Google, TUG, UC Irvine, CNRS Toulouse, Imperial College, U Ottawa, Uni Bern
Aug 30, 2020

This is a two-day workshop. Sign up and see titles and abstracts on website.

SeminarNeuroscience

Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus

Andrea Navas-Olive
Instituto Cajal CSIC
Jul 18, 2020

The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies from RNAseq to electrophysiology are disclosing a so far unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understand how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data.
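The genetic-algorithm loop used for conductance fitting can be sketched in miniature. This toy replaces the Hodgkin-Huxley model in NEURON with a stand-in "feature" function of two conductance-like parameters; the selection, crossover, mutation, and elitism steps are the generic GA machinery, with all settings chosen for illustration.

```python
import numpy as np

# Toy genetic algorithm in the spirit of the conductance fitting above
# (the real work fits multi-compartment Hodgkin-Huxley models in NEURON):
# evolve two "conductance" parameters so a toy model matches target features.

rng = np.random.default_rng(5)

def features(p):                     # stand-in for electrophysiological features
    g_na, g_k = p
    return np.array([g_na + g_k, g_na - g_k, g_na * g_k])

true_params = np.array([1.5, 0.5])
target = features(true_params)       # "experimental" feature vector

def loss(p):
    return np.mean((features(p) - target) ** 2)

pop = rng.uniform(0.0, 3.0, size=(60, 2))
for gen in range(100):
    scores = np.array([loss(p) for p in pop])
    elite = pop[np.argsort(scores)[:15]]                    # truncation selection
    parents = elite[rng.integers(0, 15, size=(60, 2))]
    children = (parents[:, 0] + parents[:, 1]) / 2          # blend crossover
    children += rng.normal(0.0, 0.05, size=children.shape)  # Gaussian mutation
    children[0] = elite[0]                                  # elitism
    pop = children

best = pop[np.argmin([loss(p) for p in pop])]
print(np.round(best, 2))   # close to the true parameters
```

The study's pipeline swaps in realistic morphologies and electrophysiological objectives, but the evolutionary loop (evaluate, select, recombine, mutate) is the same.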

SeminarNeuroscience

Using Nengo and the Neural Engineering Framework to Represent Time and Space

Terry Stewart
University of Waterloo and National Research Center Canada
Jul 14, 2020

The Neural Engineering Framework (and the associated software tool Nengo) provide a general method for converting algorithms into neural networks with an adjustable level of biological plausibility. I will give an introduction to this approach, and then focus on recent developments that have shown new insights into how brains represent time and space. This will start with the underlying mathematical formulation of ideal methods for representing continuous time and continuous space, then show how implementing these in neural networks can improve Machine Learning tasks, and finally show how the resulting systems compare to temporal and spatial representations in biological brains.
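The NEF's core encode/decode principle can be shown in a few lines (Nengo automates all of this): a population of rate neurons with randomly chosen tuning encodes a scalar x, and linear decoders found by regularized least squares read it back out. The rectified-linear neurons, gains, and regularization constant below are illustrative choices, not Nengo defaults.

```python
import numpy as np

# Minimal sketch of the NEF encode/decode principle: random tuning curves
# encode a scalar x; least-squares decoders reconstruct f(x) = x.

rng = np.random.default_rng(6)
n_neurons = 100
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # preferred direction
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    # rectified-linear rate neurons; x has shape (n_points,)
    return np.maximum(0.0, np.outer(x, encoders) * gains + biases)

x = np.linspace(-1, 1, 200)
A = rates(x)                                   # activity matrix (points x neurons)
# regularised least-squares decoders for the identity function f(x) = x
reg = 0.01 * n_neurons
d = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ x)

x_hat = A @ d
print(np.sqrt(np.mean((x_hat - x) ** 2)))      # small reconstruction error
```

Decoding a nonlinear f(x) only changes the right-hand side of the least-squares problem; chaining such populations through connection weights is how the NEF turns whole algorithms, including the temporal and spatial representations discussed in the talk, into neural networks.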

SeminarNeuroscienceRecording

Untangling the web of behaviours used to produce spider orb webs

Andrew Gordus
Johns Hopkins University
Jul 7, 2020

Many innate behaviours are the result of multiple sensorimotor programs that are dynamically coordinated to produce higher-order behaviours such as courtship or architecture construction. Extended phenotypes such as architecture are especially useful for ethological study because the structure itself is a physical record of behavioural intent. A particularly elegant and easily quantifiable structure is the spider orb-web. The geometric symmetry and regularity of these webs have long generated interest in their behavioural origin. However, quantitative analyses of this behaviour have been sparse due to the difficulty of recording web-making in real-time. To address this, we have developed a novel assay enabling real-time, high-resolution tracking of limb movements and web structure produced by the hackled orb-weaver Uloborus diversus. With its small brain size of approximately 100,000 neurons, the spider U. diversus offers a tractable model organism for the study of complex behaviours. Using deep learning frameworks for limb tracking, and unsupervised behavioural clustering methods, we have developed an atlas of stereotyped movement motifs and are investigating the behavioural state transitions of which the geometry of the web is an emergent property. In addition to tracking limb movements, we have developed algorithms to track the web’s dynamic graph structure. We aim to model the relationship between the spider’s sensory experience on the web and its motor decisions, thereby identifying the sensory and internal states contributing to this sensorimotor transformation. Parallel efforts in our group are establishing 2-photon in vivo calcium imaging protocols in this spider, eventually facilitating a search for neural correlates underlying the internal and sensory state variables identified by our behavioural models. In addition, we have assembled a genome, and are developing genetic perturbation methods to investigate the genetic underpinnings of orb-weaving behaviour. Together, we aim to understand how complex innate behaviours are coordinated by underlying neuronal and genetic mechanisms.

SeminarNeuroscience

The ecology of collective behaviour

Deborah Gordon
Stanford University
May 26, 2020

Collective behaviour operates without central control, through interactions among individuals. The collective behaviour of ant colonies is based on simple olfactory interactions. Ant species differ enormously in the algorithms that regulate collective behaviour, reflecting diversity in ecology. I will contrast two species in very different ecological situations. Harvester ant colonies in the desert, where water is scarce but conditions are stable, regulate foraging to conserve water. Response to positive feedback from olfactory interactions depends on the risk of water loss, mediated by dopamine neurophysiology. For arboreal turtle ants in the tropical forest, life is easy but unpredictable, and a highly modular system uses negative feedback to sustain activity. In all natural systems, from ant colonies to brains, collective behaviour evolves in relation to changing conditions. Similar dynamics in environmental conditions may lead to the evolution of similar processes to regulate collective behaviour.

SeminarNeuroscience

Algorithms and circuits for olfactory navigation in walking Drosophila

Katherine Nagel
New York University
May 5, 2020

Olfactory navigation provides a tractable model for studying the circuit basis of sensorimotor transformations and goal-directed behaviour. Macroscopic organisms typically navigate in odor plumes that provide a noisy and uncertain signal about the location of an odor source. Work in many species has suggested that animals accomplish this task by combining temporal processing of dynamic odor information with an estimate of wind direction. Our lab has been using adult walking Drosophila to understand both the computational algorithms and the neural circuits that support navigation in a plume of attractive food odor. We developed a high-throughput paradigm to study behavioural responses to temporally controlled odor and wind stimuli. Using this paradigm we found that flies respond to a food odor (apple cider vinegar) with two behaviours: during the odor they run upwind, while after odor loss they perform a local search. A simple computational model based on these two responses is sufficient to replicate many aspects of fly behaviour in a natural turbulent plume. In ongoing work, we are seeking to identify the neural circuits and biophysical mechanisms that perform the computations delineated by our model. Using electrophysiology, we have identified mechanosensory neurons that compute wind direction from movements of the two antennae, and central mechanosensory neurons that encode wind direction and are involved in generating a stable downwind orientation. Using optogenetic activation, we have traced olfactory circuits capable of evoking upwind orientation and offset search from the periphery, through the mushroom body and lateral horn, to the central complex. Finally, we have used optogenetic activation, in combination with molecular manipulation of specific synapses, to localize temporal computations performed on the odor signal to olfactory transduction and transmission at specific synapses.
Our work illustrates how the tools available in the fruit fly can be applied to dissect the mechanisms underlying a complex goal-directed behaviour.
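The two behavioural modules described above (upwind surge during odor, local search after odor loss) can be sketched as a minimal policy. This is my own toy, not the lab's published model; the heading convention and turn magnitude are assumptions.

```python
import random


def navigate(odor_trace, upwind_heading=0.0, search_turn=90.0, seed=0):
    """Return a heading (degrees) per time step for a binary odor trace
    (1 = odor on, 0 = odor off). Convention: 0 degrees points upwind."""
    rng = random.Random(seed)
    headings = []
    for odor in odor_trace:
        if odor:
            # surge: orient straight upwind while odor is present
            heading = upwind_heading
        else:
            # local search: large random turns after odor loss
            heading = rng.uniform(-search_turn, search_turn)
        headings.append(heading)
    return headings


trace = [1, 1, 1, 0, 0, 0]       # odor pulse followed by odor loss
headings = navigate(trace)
```

Even this crude two-state policy captures the qualitative structure of plume tracking: deterministic upwind progress inside odor filaments, and a search pattern that keeps the animal near the point of last contact.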

SeminarNeuroscienceRecording

Decoding of Chemical Information from Populations of Olfactory Neurons

Pedro Herrero-Vidal
New York University
May 5, 2020

Information is represented in the brain by the coordinated activity of populations of neurons. Recent large-scale neural recording methods, in combination with machine learning algorithms, are helping us understand how sensory processing and cognition emerge from neural population activity. This talk will explore the most popular machine learning methods used to gather meaningful low-dimensional representations from higher-dimensional neural recordings. To illustrate the potential of these approaches, Pedro will present his research in which chemical information is decoded from the olfactory system of the mouse for technological applications. Pedro and co-researchers have successfully extracted odor identity and concentration from olfactory receptor neuron low-dimensional activity trajectories. They have further developed a novel method to identify a shared latent space that allowed decoding of odor information across animals.
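The general recipe described above (reduce high-dimensional population activity to a low-dimensional representation, then decode stimulus identity from it) can be sketched with a deliberately crude example. This is not the speaker's method: the "dimensionality reduction" here is just averaging two neuron groups, the classifier is nearest-centroid, and all numbers are made up.

```python
import random


def project(population):
    """Collapse an N-neuron response vector to 2 dimensions by averaging
    the first and second halves of the population (a stand-in for a
    learned low-dimensional projection)."""
    half = len(population) // 2
    return (sum(population[:half]) / half,
            sum(population[half:]) / (len(population) - half))


def simulate_response(odor, n=40, noise=0.1, rng=random):
    """Toy population response: odor 'A' drives the first half of the
    population, odor 'B' the second half, plus Gaussian noise."""
    if odor == "A":
        base = [1.0] * (n // 2) + [0.0] * (n - n // 2)
    else:
        base = [0.0] * (n // 2) + [1.0] * (n - n // 2)
    return [b + rng.gauss(0, noise) for b in base]


def nearest_centroid(x, centroids):
    """Decode: label of the closest centroid in the low-dimensional space."""
    return min(centroids,
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(x, centroids[k])))


rng = random.Random(0)
centroids = {odor: project(simulate_response(odor, rng=rng))
             for odor in "AB"}
decoded = nearest_centroid(project(simulate_response("A", rng=rng)),
                           centroids)
```

The structure, not the specific projection, is the point: once population activity is summarized in a low-dimensional space where odors separate, simple decoders suffice, and aligning such spaces across animals is what enables cross-animal decoding.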

ePoster

Evolutionary algorithms support recurrent plasticity in spiking neural network models of neocortical task learning

Ivyer Qu, Huaze Liu, Jiayue Li, Yuqing Zhu

Bernstein Conference 2024

ePoster

Identifying cortical learning algorithms using Brain-Machine Interfaces

Sofia Pereira da Silva, Denis Alevi, Friedrich Schuessler, Henning Sprekeler

Bernstein Conference 2024

ePoster

Utilizing network-based algorithms for drug repurposing through a meta-analysis of East Asian genome-wide association studies in depression

Ping Lin Tsai, Hui Hua Chang

FENS Forum 2024