Schedule
Thursday, August 25, 2022
9:00 AM Australia/Sydney
Seminar location
Not available
Format
Past Seminar
Recording
Not available
Host
Sydney Systems Neuroscience and Complexity (SNAC)
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
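The "dynamical systems analyses" referred to above typically start by locating approximate fixed points of a trained recurrent network, around which motifs such as attractors and decision boundaries are characterized. The following is a minimal, self-contained sketch of that idea on a small untrained random RNN; the network, weights, and all names here are illustrative assumptions, not the models or methods from the seminar itself.

```python
import numpy as np

# Hypothetical minimal sketch: find an approximate fixed point of a small
# discrete-time RNN by driving the "speed" q(h) = ||F(h) - h||^2 toward zero,
# where F is one network update. At a fixed point, q(h) = 0.
rng = np.random.default_rng(0)
N = 16  # illustrative network size
W = rng.normal(scale=0.5 / np.sqrt(N), size=(N, N))  # small weights -> contraction
b = np.zeros(N)

def step(h):
    """One RNN update: h_{t+1} = tanh(W h_t + b)."""
    return np.tanh(W @ h + b)

def speed(h):
    """Squared distance moved in one update; zero exactly at a fixed point."""
    d = step(h) - h
    return float(d @ d)

# For this contracting toy network, simply iterating the map converges to a
# stable fixed point; analyses of trained RNNs instead minimize q(h) directly
# (e.g. with gradient-based optimization) to also find unstable fixed points.
h = rng.normal(size=N)
for _ in range(500):
    h = step(h)

print(f"speed at candidate fixed point: {speed(h):.2e}")
```

Linearizing `step` around such a point (the Jacobian's eigenvalues) is what distinguishes attractors from saddle-like decision boundaries and rotational dynamics in this style of analysis.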
Speaker
Dr Laura Driscoll
Stanford University