Correspondences
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, helping scientists develop models of neural computation that increasingly resemble biological systems at the level of both behavior and neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the choice of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results even when recovering a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects, such as the architecture of a deep neural network, and methodological choices in a systems comparison setting. Using the learning dynamics of deep neural networks, we demonstrate that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can alter and even invert the conclusions drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case, that is, the scientific hypothesis under investigation, which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
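To make concrete how the comparison methodology can shape the conclusion, here is a minimal sketch, not taken from the talk: the toy data, the regularization constant, and the pairing of ridge-regression predictivity with linear centered kernel alignment (CKA) are illustrative assumptions. Both measures score the same candidate "model" representations against the same simulated "brain" data, yet they can rank the candidates differently.

```python
import numpy as np

def ridge_predictivity(model_acts, brain_acts, lam=1.0):
    """Mean correlation between ridge-regression predictions and recorded responses."""
    X = model_acts - model_acts.mean(axis=0)
    Y = brain_acts - brain_acts.mean(axis=0)
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    Y_hat = X @ W
    rs = [np.corrcoef(Y_hat[:, j], Y[:, j])[0, 1] for j in range(Y.shape[1])]
    return float(np.mean(rs))

def linear_cka(model_acts, brain_acts):
    """Linear CKA: symmetric, invariant to rotation and overall scale, but not to per-unit rescaling."""
    X = model_acts - model_acts.mean(axis=0)
    Y = brain_acts - brain_acts.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

# Toy comparison: two candidate models scored against the same simulated recordings.
rng = np.random.default_rng(0)
brain = rng.standard_normal((200, 50))            # 200 stimuli x 50 recorded units
scales = np.ones(50)
scales[0] = 10.0                                  # strongly anisotropic rescaling
model_a = brain * scales                          # exact but rescaled copy of the neural code
model_b = brain + 0.75 * rng.standard_normal(brain.shape)  # noisy, axis-aligned copy
for name, acts in [("model_a", model_a), ("model_b", model_b)]:
    print(name,
          "ridge predictivity:", round(ridge_predictivity(acts, brain), 3),
          "linear CKA:", round(linear_cka(acts, brain), 3))
# Regression-based predictivity favors model_a (a perfect linear re-mapping), while CKA
# favors model_b (noisier but preserving the relative geometry): which model looks
# "better" depends on the measure, i.e., on a methodological choice.
```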
Spatial alignment supports visual comparisons
Visual comparisons are ubiquitous, and they can also be an important source of learning (e.g., Gentner et al., 2016; Kok et al., 2013). In science, technology, engineering, and math (STEM), key information is often conveyed through figures, graphs, and diagrams (Mayer, 1993). Comparing within and across visuals is critical for gleaning insight into the underlying concepts, structures, and processes that they represent. This talk addresses how people make visual comparisons and how visual comparisons can best be supported to improve learning. In particular, the talk will present a series of studies exploring the Spatial Alignment Principle (Matlen et al., 2020), derived from Structure-Mapping Theory (Gentner, 1983). Structure-Mapping Theory proposes that comparison involves a process of finding correspondences between elements based on structured relationships. The Spatial Alignment Principle holds that spatially arranging compared figures in direct placement, so that correct correspondences are supported and interference from incorrect correspondences is minimized, will facilitate visual comparison. We find that direct placement can facilitate visual comparison with educationally relevant stimuli, and that it may be especially important when figures are less familiar. We also present complementary evidence illustrating the prevalence of visual comparisons in 7th-grade science textbooks.
Perceptual and neural basis of sound-symbolic crossmodal correspondences
Probabilistic Analogical Mapping with Semantic Relation Networks
Hongjing Lu will present a new computational model of Probabilistic Analogical Mapping (PAM, developed in collaboration with Nick Ichien and Keith Holyoak) that finds systematic correspondences between inputs represented using machine-learned embeddings. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts (word embeddings created by Word2vec) and of relations between concepts (created by our BART model). We have used PAM to simulate a broad range of phenomena involving analogical mapping by both adults and children. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations. More details can be found at https://arxiv.org/ftp/arxiv/papers/2103/2103.16704.pdf
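As a rough illustration of the general idea, and not of PAM itself, the sketch below shows probabilistic graph matching over a toy semantic relation network; the hand-made vectors, the use of embedding differences in place of BART relation vectors, and the particular update rule (a graduated-assignment-style iteration with Sinkhorn normalization) are all assumptions made for the example. Concept similarities seed a soft correspondence matrix, relation similarities reinforce structurally consistent pairings, and row/column normalization keeps the mapping close to one-to-one.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def soft_graph_match(src_emb, tgt_emb, src_rels, tgt_rels, iters=50, beta=5.0):
    """Soft matching of source objects to target objects via node and relation similarity."""
    m, n = len(src_emb), len(tgt_emb)
    node_sim = np.array([[cosine(src_emb[i], tgt_emb[a]) for a in range(n)] for i in range(m)])
    # Relation vectors: embedding differences stand in for learned relation representations.
    rel_sim = {(i, j, a, b): cosine(src_emb[j] - src_emb[i], tgt_emb[b] - tgt_emb[a])
               for (i, j) in src_rels for (a, b) in tgt_rels}
    P = np.full((m, n), 1.0 / n)  # soft correspondence matrix over object pairings
    for _ in range(iters):
        support = node_sim.copy()
        for (i, j, a, b), s in rel_sim.items():
            # A pairing gains support when a related pairing is mapped consistently.
            support[i, a] += s * P[j, b]
            support[j, b] += s * P[i, a]
        P = np.exp(beta * support)
        for _ in range(10):  # Sinkhorn normalization toward a (near) one-to-one mapping
            P /= P.sum(axis=1, keepdims=True)
            P /= P.sum(axis=0, keepdims=True)
    return P

# Toy analogs with made-up vectors standing in for Word2vec embeddings:
# source = {sun, planet} with one relation (sun -> planet); target = {nucleus, electron}.
rng = np.random.default_rng(1)
sun, planet = rng.standard_normal(8), rng.standard_normal(8)
nucleus = sun + 0.3 * rng.standard_normal(8)       # target roles loosely echo source roles
electron = planet + 0.3 * rng.standard_normal(8)
P = soft_graph_match(np.stack([sun, planet]), np.stack([nucleus, electron]),
                     src_rels=[(0, 1)], tgt_rels=[(0, 1)])
print(np.round(P, 2))  # mass should concentrate on sun->nucleus and planet->electron
```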
Toward a Comprehensive Classification of Mouse Retinal Ganglion Cells: Morphology, Function, Gene Expression, and Central Projections
I will introduce a web portal for the retinal neuroscience community to explore the catalog of mouse retinal ganglion cell (RGC) types, including data on light responses, correspondences with morphological types in EyeWire, and gene expression data from single-cell transcriptomics. Our current classification includes 43 types, accounting for 90% of the cells in EyeWire. Many of these cell types have new stories to tell, and I will cover two of them that represent opposite ends of the spectrum of levels of analysis in my lab. First, I will introduce the “Bursty Suppressed-by-Contrast” RGC and show how its intrinsic properties, rather than its synaptic inputs, differentiate its function from that of a different well-known RGC type. Second, I will present a histogram of the cell types that project to the Olivary Pretectal Nucleus, focusing on the recently discovered M6 ipRGC.