Topic: Neuro

pattern recognition

6 Seminars · 2 ePosters

Latest

Seminar · Neuroscience · Recording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 9, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
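The gate-triggered, one-shot weight update described in this abstract can be sketched in a much-simplified form. The sketch below is purely illustrative, not the authors' implementation: the function name, learning rate, and the idea of a binary top-down "plateau" gate per neuron are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 20, 5
W = rng.normal(scale=0.1, size=(n_hidden, n_in))  # input -> hidden weights

def btsp_update(W, x, gate, eta=0.5):
    """One-shot, gate-triggered update: when a top-down signal drives a
    dendritic plateau in neuron i (gate[i] = 1), that neuron's synapses
    jump toward the current input pattern; ungated neurons are untouched."""
    return W + eta * gate[:, None] * (x[None, :] - W)

x = rng.random(n_in)                        # a single stimulus presentation
gate = np.zeros(n_hidden)
gate[2] = 1.0                               # top-down feedback selects neuron 2

W_new = btsp_update(W, x, gate)             # only row 2 of W changes
```

Unlike a Hebbian rule, the update does not depend on postsynaptic firing rate but on the gate alone, which is what lets a feedback signal coordinate where plasticity happens across layers.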

Seminar · Neuroscience · Recording

Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features

John O'Doherty
California Institute of Technology
Nov 12, 2021

It is an open question whether preferences for visual art can be lawfully predicted from the basic constituent elements of a visual image. Here, we developed and tested a computational framework to investigate how aesthetic values are formed. We show that it is possible to explain human preferences for a visual art piece based on a mixture of low- and high-level features of the image. Subjective value ratings could be predicted not only within but also across individuals, using a regression model with a common set of interpretable features. We also show that the features predicting aesthetic preference can emerge hierarchically within a deep convolutional neural network trained only for object recognition. Our findings suggest that human preferences for art can be explained at least in part as a systematic integration over the underlying visual features of an image.
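A regression model with a common set of interpretable features, as described here, can be sketched minimally as follows. The data, feature count, and weights are synthetic stand-ins, not the study's actual features or results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each image is summarized by a few interpretable
# features (e.g. contrast and hue as "low-level", concreteness and
# valence as "high-level"); the numbers here are simulated.
n_images, n_features = 100, 4
X = rng.random((n_images, n_features))              # image features
w_true = np.array([0.5, -0.2, 1.0, 0.8])            # shared feature weights
y = X @ w_true + 0.05 * rng.normal(size=n_images)   # subjective ratings

# Fit one linear model with a common set of weights (least squares)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same shared weights then predict ratings for new images
X_test = rng.random((10, n_features))
y_pred = X_test @ w_hat
```

The point of the linear form is interpretability: each fitted weight says how much a given visual feature contributes to the predicted rating, within and across individuals.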

SeminarNeuroscience

Imaging memory consolidation in wakefulness and sleep

Monika Schönauer
Albert-Ludwigs-University of Freiburg
Jun 17, 2021

New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
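The decoding logic behind "pattern classification on recordings of electrical brain activity" can be illustrated with a toy example. Everything below is simulated and simplified: a nearest-centroid decoder stands in for the multivariate classifiers typically used, and the channel counts and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two learned stimulus categories leave distinct (simulated) activity
# patterns across recording channels.
n_channels = 32
template_a = rng.normal(size=n_channels)
template_b = rng.normal(size=n_channels)

def make_trials(template, n=50, noise=0.8):
    """Noisy single-trial patterns around a category template."""
    return template + noise * rng.normal(size=(n, n_channels))

train = np.vstack([make_trials(template_a), make_trials(template_b)])
labels = np.array([0] * 50 + [1] * 50)

# Train a nearest-centroid decoder on task-period data
centroids = np.array([train[labels == k].mean(axis=0) for k in (0, 1)])

def decode(x):
    """Return the category whose centroid is closest to pattern x."""
    return int(np.linalg.norm(x - centroids[1]) < np.linalg.norm(x - centroids[0]))

# Apply the trained decoder to an "offline" sample in which category B
# spontaneously reactivates
offline = template_b + 0.8 * rng.normal(size=n_channels)
reactivated = decode(offline)
```

The key move in such studies is exactly this transfer: the classifier is trained while memories are actively processed, then applied to rest or sleep data to detect covert reactivation.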

Seminar · Neuroscience · Recording

MRI pattern recognition in leukodystrophies

Nicole Wolf
Emma Children’s Hospital, Amsterdam University Medical Centre, the Netherlands
Jun 8, 2021

Seminar · Neuroscience

Machine reasoning in histopathologic image analysis

Phedias Diamandis
University of Toronto
Jul 9, 2020

Deep learning is an emerging computational approach inspired by the human brain's neural connectivity that has transformed machine-based image analysis. Using histopathology as a model of an expert-level pattern recognition exercise, we explore the ability of humans to teach machines to learn and mimic image recognition and decision-making. Moreover, these models also allow exploration of the capacity of computers to independently learn salient histological patterns and complex ontological relationships that parallel biological and expert knowledge, without the need for explicit direction or supervision. Deciphering the overlap between human and unsupervised machine reasoning may aid in eliminating biases and improving automation and accountability for artificial intelligence-assisted vision tasks and decision-making.

ePoster · Neuroscience

Temporal pattern recognition in retinal ganglion cells is mediated by dynamical inhibitory synapses

Simone Ebert, Thomas Buffet, Semihchan Sermat, Olivier Marre, Bruno Cessac

COSYNE 2023

ePoster · Neuroscience

Robustness and evolvability in a model of a pattern recognition network

Daesung Cho, Jan Clemens

FENS Forum 2024

pattern recognition coverage

8 items

Seminars: 6
ePosters: 2
Domain spotlight

Explore how pattern recognition research is advancing inside Neuro.
