
Neural Phenomena

Discover seminars, jobs, and research tagged with neural phenomena across World Wide.
2 curated items · 2 seminars
Updated about 3 years ago
Seminar · Neuroscience · Recording

No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit

Rylan Schaeffer
Fiete lab, MIT
Nov 1, 2022

Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Uniquely in Neuroscience, deep learning models can be used not only as tools but also interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they shed light on fundamental functions being optimized or make novel predictions about neural phenomena. We show, through the case study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We rigorously examine the claims of deep learning models of grid cells using large-scale hyperparameter sweeps and theory-driven experimentation, and demonstrate that the results of such models are more strongly driven by particular, non-fundamental, and post-hoc implementation choices than by fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias: an informal No Free Lunch result for Neuroscience.
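
To make concrete what such a hyperparameter sweep looks like, here is a minimal, purely illustrative sketch (not the speaker's code): a grid over hypothetical implementation choices of the kind the abstract names, with a stub standing in for model training and grid-scoring.

```python
# Illustrative sketch only: the talk's actual sweeps train recurrent
# path-integration networks; here a stub stands in for training so the
# sweep structure itself is runnable. All names and values are hypothetical.
import itertools
import random

# Implementation choices of the kind the abstract argues drive the results
# (readout nonlinearity, supervised target tuning, regularization).
SWEEP = {
    "readout_nonlinearity": ["relu", "tanh", "softmax"],
    "place_cell_tuning": ["gaussian", "difference_of_gaussians"],
    "weight_decay": [0.0, 1e-4, 1e-2],
}

def train_and_score(config):
    """Placeholder for: train a path-integration network under `config`
    and return a grid score for its hidden units. Stubbed with noise here."""
    return random.random()

results = []
for values in itertools.product(*SWEEP.values()):
    config = dict(zip(SWEEP.keys(), values))
    results.append((config, train_and_score(config)))

# If grid cells were a robust consequence of the task alone, scores would be
# stable across configs; the talk reports they are not.
for config, score in sorted(results, key=lambda r: -r[1])[:3]:
    print(f"grid score {score:.2f} for {config}")
```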

Seminar · Neuroscience · Recording

The butterfly strikes back: neurons doing 'network' computation

Upinder Singh Bhalla
National Centre for Biological Sciences of the Tata Institute of Fundamental Research
May 28, 2020

We live in the age of the network: Internet, social, neural, ecosystems. This has become one of the main metaphors for how we think about complex systems. This view also dominates the account of brain function. The role of neurons, described by Cajal as the "butterflies of the soul", has become diminished to leaky integrate-and-fire point objects in many models of neural network computation. It is perhaps not surprising that network explanations of neural phenomena use neurons as elementary particles and ascribe all their wonderful capabilities to their interactions in a network. In the network view the Connectome defines the brain, and the butterflies have no role. In this talk I'd like to reclaim some key computations from the network and return them to their rightful place at the cellular and subcellular level. I'll start with a provocative look at the potential computational capacity of different kinds of brain computation: network vs. subcellular. I'll then consider different levels of pattern and sequence computation, with a glimpse of the efficiency of the subcellular solutions. Finally, I propose that there is a suggestive mapping from entire nodes of deep networks to individual neurons. This, in my view, is how we can walk around with 1.3 litres and 20 watts of installed computational capacity and still do far more than giant AI server farms.
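
For readers unfamiliar with the reduction being criticized, the sketch below is a minimal leaky integrate-and-fire point neuron, written for illustration only (the parameter values are assumptions, not from the talk): the entire cell collapses to a single voltage variable with no dendrites or subcellular state, which is precisely the impoverishment the speaker argues against.

```python
# A minimal leaky integrate-and-fire (LIF) point neuron: one voltage
# variable, threshold, and reset. Parameter values are illustrative.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e7):
    """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau, emitting a
    spike time whenever V crosses threshold, then resetting."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 500 ms of constant 2.5 nA drive: the "neuron" is fully described by
# when its single state variable crosses a line.
current = [2.5e-9] * 5000
print(f"{len(simulate_lif(current))} spikes in 0.5 s")
```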