
Inductive Biases

Discover seminars, jobs, and research tagged with Inductive Biases across World Wide.
5 curated items: 2 seminars, 2 ePosters, 1 position
Position · Machine Learning

Stefan Mihalas

Allen Institute, University of Washington (UW)
Seattle, WA
Dec 5, 2025

Biological systems learn differently from current machine learning systems, generally with higher sample efficiency but also stronger inductive biases. The scientist will explore the effects that bio-realistic neurons, plasticity rules, and architectures have on learning in artificial neural networks. This will be done by constructing artificial neural networks with bio-inspired constraints.

Seminar · Neuroscience

Learning through the eyes and ears of a child

Brenden Lake
NYU
Apr 20, 2023

Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. 
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
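The log-normal observation above can be illustrated with a toy sketch (this is not the talk's derivation): if the transfer function from input to rate is approximately exponential near threshold, an assumption common in simplified spiking-neuron models, then Gaussian variability of the input across neurons produces a log-normal distribution of instantaneous rates.

```python
import numpy as np

# Toy illustration (assumption: approximately exponential input-to-rate
# transfer near threshold). Gaussian across-neuron input variability then
# yields log-normally distributed instantaneous rates.
rng = np.random.default_rng(0)
n_neurons = 100_000
mu, sigma = 1.0, 0.5                      # mean and across-neuron std of the drive
drive = rng.normal(mu, sigma, n_neurons)  # heterogeneous input across the population
rates = np.exp(drive)                     # exponential transfer -> log-normal rates

log_rates = np.log(rates)
print(log_rates.mean(), log_rates.std())  # roughly 1.0 and 0.5: log-rates are Gaussian
```

The right-skewed, heavy-tailed rate histogram this produces is qualitatively what is reported in vivo; the talk's contribution is deriving how the mean- and variance-driven components of the input shape this distribution dynamically.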

ePoster

Clustering Inductive Biases with Unrolled Networks

Jonathan Huml, Abiy Tasissa, Demba Ba

COSYNE 2023

ePoster

Meta-Learning the Inductive Biases of Simple Neural Circuits

Maria Yuffa

Neuromatch 5