causal understanding
A Game Theoretical Framework for Quantifying Causes in Neural Networks
Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating one node while tracking changes in another. A causal role is then assigned to the manipulated node if the intervention leads to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework: knowing that one node causally influences another leaves open how strong that influence is and through which mechanistic interactions it acts. Establishing a causal relationship, however reliable, therefore does not by itself provide a proper causal understanding of the system, because there is often a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). I then present our work in which we applied MSA to an Echo State Network (ESN), quantified how much its nodes influence one another, and compared these measures with the underlying synaptic strengths. We found that: 1. Even though the network itself was sparse, every node could causally influence other nodes; in this case, a mere elucidation of causal relationships did not provide any useful information. 2. Full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via intermediate nodes. Our results show that merely elucidating causal relationships in complex networks such as the brain is not sufficient to draw mechanistic conclusions; quantifying causal interactions requires a systematic and extensive manipulation framework. The framework put forward here benefits from employing neural network models and, in turn, provides explainability for them.
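To make the idea behind MSA concrete, here is a minimal sketch of a permutation-sampling Shapley value estimator. It assumes a hypothetical performance(active_nodes) callable that runs the network (for example, an ESN on its task) with only the listed nodes intact and returns a scalar score; the function names and the sampling scheme are illustrative assumptions, not the implementation used in this work.

import numpy as np

def estimate_shapley_contributions(n_nodes, performance, n_permutations=1000, seed=0):
    """Estimate each node's causal contribution via sampled node orderings."""
    rng = np.random.default_rng(seed)
    contributions = np.zeros(n_nodes)
    for _ in range(n_permutations):
        order = rng.permutation(n_nodes)
        active = set()
        prev_score = performance(active)          # fully lesioned network: empty coalition
        for node in order:
            active.add(node)
            score = performance(active)           # restore this node and re-run the network
            contributions[node] += score - prev_score   # marginal contribution of this node
            prev_score = score
    return contributions / n_permutations

Averaging each node's marginal performance gain over many random restoration orders approximates its Shapley value, i.e. a quantified causal contribution that accounts for multi-site perturbations rather than single-node manipulations.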
Analogy as a Catalyst for Cumulative Cultural Evolution
Analogies, broadly defined, map novel concepts onto familiar concepts, making them essential for perception, reasoning, and communication. We argue that analogy-building played a critical role in the evolution of cumulative culture by allowing humans to learn and transmit complex behavioural sequences that would otherwise be too cognitively demanding or opaque to acquire. The emergence of a protolanguage consisting of simple labels would have provided early humans with the cognitive tools to build explicit analogies and to communicate them to others. This focus on analogy-building can shed new light on the coevolution of cognition and culture and address recent calls for better integration of the field of cultural evolution with cognitive science. This talk will address what cumulative cultural evolution is, how we define analogy-building, how analogy-building applies to cumulative cultural evolution, how analogy-building fits into language evolution, and the implications of analogy-building for causal understanding and cognitive evolution.
An Algorithmic Barrier to Neural Circuit Understanding
Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand precisely how neural circuit computations mechanistically cause behavior. Establishing this type of causal understanding will require multiple perturbational (e.g., optogenetic) experiments. It has been unclear exactly how many such experiments are needed and how this number scales with the size of the nervous system in question. Here, using techniques from Theoretical Computer Science, we prove that establishing the most extensive notions of understanding requires, in many cases, exponentially many experiments in the number of neurons, unless a widely posited hypothesis about computation is false (i.e., unless P = NP). Furthermore, using data and estimates, we demonstrate that the feasible experimental regime is typically one where the number of performable experiments scales sub-linearly with the number of neurons in the nervous system. This remarkable gulf between the worst case and the feasible suggests an algorithmic barrier to such an understanding. Determining which notions of understanding are algorithmically tractable to establish, and in what contexts, thus becomes an important new direction for investigation. TL;DR: the non-existence of tractable algorithms for neural circuit interrogation could pose a barrier to comprehensively understanding how neural circuits cause behavior. Preprint: https://biorxiv.org/content/10.1101/639724v1/…
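As a rough, hedged back-of-the-envelope illustration of the gulf described above, the snippet below contrasts an exponential worst-case experiment count with a sub-linear feasible budget. The sqrt(n) budget is an assumption chosen purely for illustration, and the neuron counts are approximate public figures, not the estimates used in the preprint.

import math

# Approximate neuron counts (illustrative, not the paper's data).
nervous_systems = {
    "C. elegans": 302,
    "Drosophila (adult)": 140_000,
    "Mouse": 70_000_000,
}

for name, n in nervous_systems.items():
    feasible_budget = int(math.sqrt(n))   # assumed sub-linear experimental budget
    print(f"{name}: n = {n:,} neurons; "
          f"feasible ~ {feasible_budget:,} experiments; "
          f"worst case ~ 2^{n:,} experiments")

Even under generous assumptions about experimental throughput, the feasible budget stays in the hundreds to thousands of experiments, while the worst-case requirement has an exponent equal to the neuron count, which is the gap the talk characterizes as an algorithmic barrier.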