
Diagrams

Discover seminars, jobs, and research tagged with diagrams across World Wide.
4 curated items · 4 Seminars · Updated about 4 years ago
Seminar · Neuroscience · Recording

Spatial alignment supports visual comparisons

Nina Simms
Northwestern University
Dec 1, 2021

Visual comparisons are ubiquitous, and they can be an important source of learning (e.g., Gentner et al., 2016; Kok et al., 2013). In science, technology, engineering, and math (STEM), key information is often conveyed through figures, graphs, and diagrams (Mayer, 1993). Comparing within and across such visuals is critical for gleaning insight into the underlying concepts, structures, and processes that they represent. This talk addresses how people make visual comparisons and how those comparisons can best be supported to improve learning. In particular, it presents a series of studies exploring the Spatial Alignment Principle (Matlen et al., 2020), derived from Structure-Mapping Theory (Gentner, 1983). Structure-Mapping Theory proposes that comparison involves finding correspondences between elements based on structured relationships. The Spatial Alignment Principle suggests that spatially arranging the figures being compared in direct placement, so that correct correspondences are supported and interference from incorrect correspondences is minimized, will facilitate visual comparison. We find that direct placement can facilitate visual comparison with educationally relevant stimuli, and that it may be especially important when figures are less familiar. We also present complementary evidence illustrating the prevalence of visual comparisons in 7th-grade science textbooks.

Seminar · Neuroscience

A journey through connectomics: from manual tracing to the first fully automated basal ganglia connectomes

Joergen Kornfeld
Massachusetts Institute of Technology
Nov 16, 2020

The "mind of the worm", the first electron microscopy-based connectome of C. elegans, was an early sign of where connectomics is headed, followed by a long time of little progress in a field held back by the immense manual effort required for data acquisition and analysis. This changed over the last few years with several technological breakthroughs, which allowed increases in data set sizes by several orders of magnitude. Brain tissue can now be imaged in 3D up to a millimeter in size at nanometer resolution, revealing tissue features from synapses to the mitochondria of all contained cells. These breakthroughs in acquisition technology were paralleled by a revolution in deep-learning segmentation techniques, that equally reduced manual analysis times by several orders of magnitude, to the point where fully automated reconstructions are becoming useful. Taken together, this gives neuroscientists now access to the first wiring diagrams of thousands of automatically reconstructed neurons connected by millions of synapses, just one line of program code away. In this talk, I will cover these developments by describing the past few years' technological breakthroughs and discuss remaining challenges. Finally, I will show the potential of automated connectomics for neuroscience by demonstrating how hypotheses in reinforcement learning can now be tackled through virtual experiments in synaptic wiring diagrams of the songbird basal ganglia.

Seminar · Neuroscience

Theory of gating in recurrent neural networks

Kamesh Krishnamurthy
Princeton University
Sep 15, 2020

Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data and in neuroscience for understanding the emergent properties of networks of real neurons. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can exhibit gating, i.e., multiplicative interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors, important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied through its Lyapunov spectrum, and the DMFT is used to derive a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, in which the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in the parameter space are indicated in phase diagrams, providing a map for principled parameter choices by ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for the use of powerful field-theoretic techniques to study training and gradients in large RNNs.
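To make the setting concrete, here is a minimal numerical sketch, not the talk's specific model: a randomly coupled rate network in which a sigmoidal gate multiplies the recurrent input, with the maximal Lyapunov exponent estimated by the standard two-trajectory (Benettin-style) renormalization method. The network size, coupling strengths, and gate parameterization below are all illustrative assumptions.

```python
# Minimal sketch (illustrative model, not the talk's exact equations):
# a gated rate network h' = -h + g(z) * (J @ phi(h)), where the gate
# g(z) = sigmoid(Jz @ phi(h)) multiplies the recurrent input.
import numpy as np

rng = np.random.default_rng(0)
N, g_rec, dt, T = 200, 2.0, 0.05, 4000
J = g_rec * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent couplings
Jz = rng.standard_normal((N, N)) / np.sqrt(N)          # gate couplings

phi = np.tanh
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def step(h):
    r = phi(h)
    gate = sigmoid(Jz @ r)                 # multiplicative (gating) interaction
    return h + dt * (-h + gate * (J @ r))  # Euler step of the dynamics

h = rng.standard_normal(N)
h2 = h + 1e-8 * rng.standard_normal(N)     # infinitesimally perturbed copy
eps0 = np.linalg.norm(h - h2)

# Benettin method: accumulate log stretching of the separation and
# renormalize it back to eps0 at every step.
lyap = 0.0
for t in range(T):
    h, h2 = step(h), step(h2)
    d = np.linalg.norm(h - h2)
    lyap += np.log(d / eps0)
    h2 = h + (eps0 / d) * (h2 - h)

print("estimated max Lyapunov exponent:", lyap / (T * dt))
```

In a sketch like this, a clearly positive estimate signals chaos, while a value near zero indicates the marginally stable, long-memory regime the abstract highlights.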