
Explanation

Topic spotlight
Topic · World Wide

explanation

Discover seminars, jobs, and research tagged with explanation across World Wide.
42 curated items · 40 Seminars · 2 ePosters
Updated almost 2 years ago
Seminar · Neuroscience

Using Adversarial Collaboration to Harness Collective Intelligence

Lucia Melloni
Max Planck Institute for Empirical Aesthetics
Jan 24, 2024

There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.

Seminar · Neuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 5, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanism, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell-types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based descriptions have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.

Seminar · Neuroscience · Recording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 21, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

Seminar · Neuroscience · Recording

A multi-level account of hippocampal function in concept learning from behavior to neurons

Rob Mok
University of Cambridge
Nov 1, 2022

A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we don't have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule that coordinates units which collectively form higher-level mental constructs ("virtual clusters") and neural representations (concept, place and grid cell-like assemblies), paralleling recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.

Seminar · Neuroscience · Recording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

Seminar · Neuroscience · Recording

A model of colour appearance based on efficient coding of natural images

Jolyon Troscianko
University of Exeter
Jul 17, 2022

An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and provides a basis for modelling the vision of humans and other animals.
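The threshold-and-saturation scheme described above can be sketched in a few lines. The function name, threshold value, and dynamic-range multiple below are illustrative assumptions, not parameters fitted in the work itself:

```python
import numpy as np

def band_response(contrast, threshold, dynamic_range=10.0):
    """Map a filter-output contrast to a bounded band response.

    Sub-threshold inputs give zero; responses grow above threshold and
    saturate once |contrast| exceeds threshold * dynamic_range.
    Values here are toy choices, not the model's fitted parameters.
    """
    c = np.abs(np.asarray(contrast, dtype=float))
    upper = threshold * dynamic_range
    out = np.clip((c - threshold) / (upper - threshold), 0.0, 1.0)
    return out * np.sign(contrast)

# Example: a band whose threshold is 0.01 saturates at contrast 0.1
r = band_response([0.005, 0.02, 0.5], threshold=0.01)
```

Inputs below the band's threshold are invisible to that band, while inputs above ten times the threshold all produce the same saturated response, which is the sense in which each band covers only a fixed slice of dynamic range.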

Seminar · Psychology

Do we measure what we think we are measuring?

Dario Alejandro Gordillo Lopez
EPFL
Jul 13, 2022

Tests used in the empirical sciences are often (implicitly) assumed to be representative of a target mechanism, in the sense that similar tests should lead to similar results. In this talk, using resting-state electroencephalogram (EEG) as an example, I will argue that this assumption does not necessarily hold true. Typically, EEG studies are conducted by selecting one analysis method thought to be representative of the research question asked. Using multiple methods, we extracted a variety of features from a single resting-state EEG dataset and conducted correlational and case-control analyses. We found that many EEG features revealed a significant effect in the case-control analyses. Similarly, EEG features correlated significantly with cognitive tasks. However, when we compared these features pairwise, we did not find strong correlations. A number of explanations for these results will be discussed.

Seminar · Neuroscience · Recording

From the Didactic to the Heuristic Use of Analogies in Science Teaching

Nikolaos Fotou
University of Lincoln
Jun 15, 2022

Extensive research on science teaching has shown the effectiveness of analogies as a didactic tool which, when appropriately and effectively used, facilitates the learning of abstract concepts. This seminar does not contradict the efficacy of such a didactic use of analogies but shifts attention to their heuristic use in approaching and understanding the previously unknown. Such a use of analogies derives from research with 10 to 17 year-olds who, when asked to make predictions in novel situations and then to explain those predictions, self-generated analogies and reasoned on their basis. This heuristic use of analogies can be applied in science teaching to reveal how students approach situations they have not considered before, as well as the sources they draw upon in doing so.

Seminar · Neuroscience

The 15th David Smith Lecture in Anatomical Neuropharmacology: Professor Tim Bliss, "Memories of long term potentiation"

Tim Bliss
Visiting Professor at UCL and the Frontier Institutes of Science and Technology, Xi’an Jiaotong University, China
Jun 13, 2022

The David Smith Lectures in Anatomical Neuropharmacology, part of the 'Pharmacology, Anatomical Neuropharmacology and Drug Discovery Seminars Series', Department of Pharmacology, University of Oxford. The 15th David Smith Award Lecture in Anatomical Neuropharmacology will be delivered by Professor Tim Bliss, Visiting Professor at UCL and the Frontier Institutes of Science and Technology, Xi’an Jiaotong University, China, and is hosted by Professor Nigel Emptage. This award lecture was set up to celebrate the vision of Professor A David Smith, namely, that explanations of the action of drugs on the brain require the definition of neuronal circuits and of the location and interactions of molecules. Tim Bliss gained his PhD at McGill University in Canada. He joined the MRC National Institute for Medical Research in Mill Hill, London in 1967, where he remained throughout his career. His work with Terje Lømo in the late 1960s established the phenomenon of long-term potentiation (LTP) as the dominant synaptic model of how the mammalian brain stores memories. He was elected a Fellow of the Royal Society in 1994 and is a founding fellow of the Academy of Medical Sciences. He shared the Bristol Myers Squibb award for Neuroscience with Eric Kandel in 1991, and the Ipsen Prize for Neural Plasticity with Richard Morris and Yadin Dudai in 2013. In May 2012 he gave the annual Croonian Lecture at the Royal Society on ‘The Mechanics of Memory’. In 2016 Tim, with Graham Collingridge and Richard Morris, shared the Brain Prize, one of the world's most coveted science prizes. Abstract: In 1966 there appeared in Acta Physiologica Scandinavica an abstract of a talk given by Terje Lømo, a PhD student in Per Andersen’s laboratory at the University of Oslo. In it Lømo described the long-lasting potentiation of synaptic responses in the dentate gyrus of the anaesthetised rabbit that followed repeated episodes of 10-20 Hz stimulation of the perforant path.
Thus, heralded and almost entirely unnoticed, one of the most consequential discoveries of 20th century neuroscience was ushered into the world. Two years later I arrived in Oslo as a visiting post-doc from the National Institute for Medical Research in Mill Hill, London. In this talk I recall the events that led us to embark on a systematic reinvestigation of the phenomenon now known as long-term potentiation (LTP) and will then go on to describe the discoveries and controversies that enlivened the early decades of research into synaptic plasticity in the mammalian brain. I will end with an observer’s view of the current state of research in the field, and what we might expect from it in the future.

Seminar · Neuroscience · Recording

Canonical neural networks perform active inference

Takuya Isomura
RIKEN CBS
Jun 9, 2022

The free-energy principle and active inference have received significant attention in the fields of neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively interacts with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks—featuring delayed modulation of Hebbian plasticity—can perform planning and adaptive behavioural control in a Bayes-optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate implicit priors under which the agent’s neural network operates and to identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable, to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.

Seminar · Neuroscience · Recording

Robustness in spiking networks: a geometric perspective

Christian Machens
Champalimaud Center, Lisboa
Feb 15, 2022

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
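The geometric intuition can be sketched with a toy calculation under assumed values (random decoding directions `D`, uniform thresholds `T`, both invented here): each neuron's threshold contributes one face of a convex error region, and deleting neurons removes faces, so the worst-case readout error along any direction can only grow, yet stays bounded while faces still cover that direction:

```python
import numpy as np

rng = np.random.default_rng(1)

def box_radius(D, T, direction):
    """Extent of the error region along a unit direction.

    Each neuron i contributes the face D[i] . e <= T[i]; the radius is
    set by the nearest face crossed when moving along `direction`.
    """
    proj = D @ direction
    active = proj > 1e-9  # only faces tilted toward this direction bind
    return np.min(T[active] / proj[active]) if active.any() else np.inf

# Toy network: 32 neurons decoding a 2-D signal, unit thresholds
N = 32
angles = rng.uniform(0, 2 * np.pi, N)
D = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # decoding directions
T = np.full(N, 1.0)                                     # firing thresholds

u = np.array([1.0, 0.0])
r_full = box_radius(D, T, u)
r_lesioned = box_radius(D[: N // 2], T[: N // 2], u)    # knock out half
```

Because the lesioned box is an intersection of fewer half-planes, `r_lesioned >= r_full` always holds; function remains intact as long as the enlarged box stays finite in every direction, which is the sense in which heterogeneous tuning buys robustness.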

Seminar · Neuroscience

A novel form of retinotopy in area V2 highlights location-dependent feature selectivity in the visual system

Madineh Sedigh-Sarvestani
Max Planck Florida Institute for Neuroscience
Jan 18, 2022

Topographic maps are a prominent feature of brain organization, reflecting local and large-scale representation of the sensory surface. Traditionally, such representations in early visual areas are conceived as retinotopic maps preserving ego-centric retinal spatial location while ensuring that other features of visual input are uniformly represented for every location in space. I will discuss our recent findings of a striking departure from this simple mapping in the secondary visual area (V2) of the tree shrew that is best described as a sinusoidal transformation of the visual field. This sinusoidal topography is ideal for achieving uniform coverage in an elongated area like V2, as predicted by mathematical models designed for wiring minimization, and provides a novel explanation for stripe-like patterns of intra-cortical connections and functional response properties in V2. Our findings suggest that cortical circuits flexibly implement solutions to sensory surface representation, with dramatic consequences for large-scale cortical organization. Furthermore, our work challenges the framework of relatively independent encoding of location and features in the visual system, showing instead location-dependent feature sensitivity produced by specialized processing of different features in different spatial locations. In the second part of the talk, I will propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual input, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. The relevant papers can be found here: V2 retinotopy (Sedigh-Sarvestani et al., Neuron, 2021); location-dependent feature sensitivity (Sedigh-Sarvestani et al., under review, 2022).

Seminar · Neuroscience

The processing of price during purchase decision making: Are there neural differences among prosocial and non-prosocial consumers?

Anna Shepelenko
HSE University
Dec 8, 2021

International organizations, governments and companies are increasingly committed to developing measures that encourage adoption of sustainable consumption patterns among the population. However, their success requires a deep understanding of the everyday purchasing decision process and the elements that shape it. Price is an element that stands out. Prior research concluded that the influence of price on purchase decisions varies across consumer profiles. Yet no consumer behavior study to date has assessed the differences of price processing among consumers adopting sustainable habits (prosocial) as opposed to those who have not (non-prosocial). This is the first study to resort to neuroimaging tools to explore the underlying neural mechanisms that reveal the effect of price on prosocial and non-prosocial consumers. Self-reported findings indicate that prosocial consumers place greater value on collective costs and benefits while non-prosocial consumers place a greater weight on price. The neural data gleaned from this analysis offers certain explanations as to the origin of the differences. Non-prosocial (vs. prosocial) consumers, in fact, exhibit a greater activation in brain areas involved with reward, valuation and choice when evaluating price information. These findings could steer managers to improve market segmentation and assist institutions in their design of campaigns fostering environmentally sustainable behaviors.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Neural Representation: Bridging Neuroscience and Philosophy

Andrew Richmond (he/him)
Columbia University
Dec 1, 2021

We understand the brain in representational terms. E.g., we understand spatial navigation by appealing to the spatial properties that hippocampal cells represent, and the operations hippocampal circuits perform on those representations (Moser et al., 2008). Philosophers have long been concerned with the nature of representation, and recently neuroscientists have entered the debate, focusing specifically on neural representations (Baker & Lansdell, n.d.; Egan, 2019; Piccinini & Shagrir, 2014; Poldrack, 2020; Shagrir, 2001). We want to know what representations are, how to discover them in the brain, and why they matter so much for our understanding of the brain. Those questions are framed in a traditional philosophical way: we start with explanations that use representational notions, and to more deeply understand those explanations we ask, what are representations — what is the definition of representation? What is it for some bit of neural activity to be a representation? I argue that there is an alternative, and much more fruitful, approach. Rather than asking what representations are, we should ask what the use of representational *notions* allows us to do in neuroscience — what thinking in representational terms helps scientists do or explain. I argue that this framing offers more fruitful ground for interdisciplinary collaboration by distinguishing the philosophical concerns that have a place in neuroscience from those that don’t (namely the definitional or metaphysical questions about representation). And I argue for a particular view of representational notions: they allow us to impose the structure of one domain onto another as a model of its causal structure. So, e.g., thinking about the hippocampus as representing spatial properties is a way of taking structures in those spatial properties, and projecting those structures (and algorithms that would implement them) onto the brain as models of its causal structure.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony

Scott Rich
Krembil Brain Institute
Nov 30, 2021

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. 
This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure and a new lens through which to view epilepsy, the most common serious neurological disorder in the world, that could reveal new targets for clinical treatment.
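The linearizing effect of heterogeneity on input-output response functions described above can be illustrated with a toy calculation (not the authors' model): averaging steep sigmoidal single-cell curves whose thresholds are spread out yields a shallower, more linear population curve. The gain and threshold spreads below are arbitrary choices:

```python
import numpy as np

def population_response(inputs, thresholds, gain=20.0):
    """Mean sigmoidal response of a population with diverse thresholds."""
    x = np.asarray(inputs, dtype=float)[:, None]
    return (1.0 / (1.0 + np.exp(-gain * (x - thresholds)))).mean(axis=1)

x = np.linspace(-1.0, 1.0, 201)
# Homogeneous population: every cell has the same threshold (0)
homogeneous = population_response(x, thresholds=np.zeros(100))
# Heterogeneous population: thresholds spread across the input range
heterogeneous = population_response(x, thresholds=np.linspace(-0.8, 0.8, 100))

def max_slope(y):
    """Steepest point of a curve sampled on the grid x."""
    return np.max(np.diff(y)) / (x[1] - x[0])
```

The homogeneous population inherits the steep, switch-like single-cell curve, while the heterogeneous population's transfer function rises gradually over the whole input range, i.e. it has been linearized, which parallels the reduced single-cell excitability argument in the abstract.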

Seminar · Neuroscience

The processing of price during purchase decision making: Are there neural differences among prosocial and non-prosocial consumers?

Anna Shepelenko
HSE University
Oct 13, 2021

International organizations, governments and companies are increasingly committed to developing measures that encourage adoption of sustainable consumption patterns among the population. However, their success requires a deep understanding of the everyday purchasing decision process and the elements that shape it. Price is an element that stands out. Prior research concluded that the influence of price on purchase decisions varies across consumer profiles. Yet no consumer behavior study to date has assessed the differences of price processing among consumers adopting sustainable habits (prosocial) as opposed to those who have not (non-prosocial). This is the first study to resort to neuroimaging tools to explore the underlying neural mechanisms that reveal the effect of price on prosocial and non-prosocial consumers. Self-reported findings indicate that prosocial consumers place greater value on collective costs and benefits while non-prosocial consumers place a greater weight on price. The neural data gleaned from this analysis offers certain explanations as to the origin of the differences. Non-prosocial (vs. prosocial) consumers, in fact, exhibit a greater activation in brain areas involved with reward, valuation and choice when evaluating price information. These findings could steer managers to improve market segmentation and assist institutions in their design of campaigns fostering environmentally sustainable behaviors.

Seminar · Neuroscience · Recording

“From the Sublime to the Stomatopod: the story from beginning to nowhere near the end.”

Justin Marshall
University of Queensland
Jul 11, 2021

“Call me a marine vision scientist. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see what animals see in the watery part of the world. It is a way I have of dividing off the spectrum, and regulating circular polarisation.” Sometimes I wish I had just set out to harpoon a white whale, as it would have been easier than studying stomatopod (mantis shrimp) vision. Nowhere near as much fun, of course, and certainly less dangerous, so in this presentation I track the history of discovery and confusion that stomatopods deliver in trying to understand what they do actually see. The talk unashamedly borrows from that of Mike Bok a few weeks ago (April 13th 2021 “The Blurry Beginnings: etc” talk) as an introduction to the system (do go look at his talk again, it is beautiful!) and goes both backwards and forwards in time, trying to provide an explanation for the design of this visual system. The journey is again one of retinal anatomy and physiology, neuroanatomy, electrophysiology, behaviour and body ornaments, but this time focusses more on polarisation vision (Mike covered the colour stuff well). There is a comparative section looking at the cephalopods too, and by the end, I hope you will understand where we are at with trying to understand this extraordinary way of seeing the world, and why we ‘pod-people’ wave our arms around so much when asked to explain: what do stomatopods see? Maybe, to butcher another quote: “mantis shrimp have been rendered visually beautiful for vision’s sake.”

Seminar · Psychology

Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations

Brad Wyble
Penn State University
Jul 6, 2021

Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items’ attributes to tokenized representations. The simulation results revealed that shape information for stimuli that the model was trained on, can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representation for efficient memory encoding.
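As a hypothetical miniature of the binding-pool idea (not the MLR implementation; all names and sizes here are invented for illustration), several latent vectors can be superimposed in one shared pool of neural activity, each bound to a token by a fixed random projection, and read back out with the transpose of that projection:

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, pool_size, n_tokens = 16, 2048, 3

# One fixed random binding matrix per token; scaling keeps B.T @ B ~ identity
bindings = [rng.standard_normal((pool_size, latent_dim)) / np.sqrt(pool_size)
            for _ in range(n_tokens)]

def store(latents):
    """Superimpose each item's latent into the pool via its token's matrix."""
    return sum(B @ z for B, z in zip(bindings, latents))

def retrieve(pool, token):
    """Read one item back out with the transpose of its binding matrix."""
    return bindings[token].T @ pool

items = [rng.standard_normal(latent_dim) for _ in range(n_tokens)]
pool = store(items)            # single shared activity vector holds 3 items
recovered = retrieve(pool, 0)  # approximately items[0] plus crosstalk
```

Because the random projections are nearly orthogonal, each retrieved latent is a noisy but recognizable copy of the stored one, with crosstalk from the other bound items shrinking as the pool grows; this captures the flavor of attribute-to-token binding in a shared neural resource.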

SeminarNeuroscience

Agency through Physical Lenses

Jenann Ismael
Columbia University
Jun 23, 2021

I will offer a broad-brush account of what explains the emergence of agents from a physics perspective, what sorts of conditions have to be in place for them to arise, and the essential features of agents when they are viewed through the lenses of physics. One implication will be a tight link to informational asymmetries associated with the thermodynamic gradient. Another will be a reversal of the direction of explanation from the one that is usually assumed in physical discussions. In an evolved system, while it is true in some sense that the macroscopic behavior is the way it is because of the low-level dynamics, there is another sense in which the low-level dynamics is the way that it is because of the high-level behavior it supports. (More precisely and accurately, the constraints on the configuration of its components that define the system as the kind of system it is are the way they are to exploit the low-level dynamics to produce the emergent behavior.) Another will be some insight into what might make human agency special.

SeminarNeuroscienceRecording

Comparing Multiple Strategies to Improve Mathematics Learning and Teaching

Bethany Rittle-Johnson
Vanderbilt University
May 19, 2021

Comparison is a powerful learning process that improves learning in many domains. For over 10 years, my colleagues and I have researched how we can use comparison to support better learning of school mathematics within classroom settings. In 5 short-term experimental, classroom-based studies, we evaluated comparison of solution methods for supporting mathematics knowledge and tested whether prior knowledge impacted effectiveness. We next developed supplemental Algebra I curriculum and professional development for teachers to integrate Comparison and Explanation of Multiple Strategies (CEMS) in their classrooms and tested the promise of the approach when implemented by teachers in two studies. Benefits and challenges emerged in these studies. I will conclude with evidence-based guidelines for effectively supporting comparison and explanation in the classroom. Overall, this program of research illustrates how cognitive science research can guide the design of effective educational materials as well as challenges that occur when bridging from cognitive science research to classroom instruction.

SeminarNeuroscience

Meta-analytic evidence of differential prefrontal and early sensory cortex activity during non-social sensory perception in autism

Nazia Jassim
University of Cambridge
May 18, 2021

To date, neuroimaging research has had a limited focus on non-social features of autism. As a result, neurobiological explanations for atypical sensory perception in autism are lacking. To address this, we quantitatively condensed findings from the non-social autism fMRI literature in line with the current best practices for neuroimaging meta-analyses. Using activation likelihood estimation (ALE), we conducted a series of robust meta-analyses across 83 experiments from 52 fMRI studies investigating differences between autistic (n = 891) and typical (n = 967) participants. We found that typical controls, compared to autistic people, show greater activity in the prefrontal cortex (BA9, BA10) during perception tasks. More refined analyses revealed that, when compared to typical controls, autistic people show greater recruitment of the extrastriate V2 cortex (BA18) during visual processing. Taken together, these findings contribute to our understanding of current theories of autistic perception, and highlight some of the challenges of cognitive neuroscience research in autism.
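The core of activation likelihood estimation can be sketched in one dimension (a simplified illustration, not the GingerALE implementation): each experiment's reported activation foci are smoothed into a modeled-activation (MA) map, and the maps are combined as a probabilistic union so that convergence across experiments yields high ALE scores. The voxel grid, foci, and the per-focus amplitude of 0.5 are all arbitrary choices for this sketch.

```python
import numpy as np

# Toy 1-D "brain" of 100 voxels; each experiment reports activation foci.
voxels = np.arange(100)
experiments = [[20, 22], [21], [70], [19, 72]]

def modeled_activation(foci, sigma=3.0, amp=0.5):
    # Smooth each focus with a Gaussian (amp is an arbitrary per-focus
    # activation probability) and take the voxelwise max to get the MA map.
    maps = [amp * np.exp(-(voxels - f) ** 2 / (2 * sigma ** 2)) for f in foci]
    return np.max(maps, axis=0)

ma_maps = np.array([modeled_activation(f) for f in experiments])
# ALE score: probabilistic union of the modeled-activation maps.
ale = 1 - np.prod(1 - ma_maps, axis=0)

peak = int(ale.argmax())
print(peak)
```

The peak lands in the region around voxel 20, where three of the four toy experiments converge; in practice, significance is then assessed against a null distribution of randomly placed foci.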

SeminarPhysics of LifeRecording

Microorganism locomotion in viscoelastic fluids

Becca Thomases
University of California Davis
May 11, 2021

Many microorganisms and cells function in complex (non-Newtonian) fluids, which are mixtures of different materials and exhibit both viscous and elastic stresses. For example, mammalian sperm swim through cervical mucus on their journey through the female reproductive tract, and they must penetrate the viscoelastic gel outside the ovum to fertilize. In micro-scale swimming the dynamics emerge from the coupled interactions between the complex rheology of the surrounding media and the passive and active body dynamics of the swimmer. We use computational models of swimmers in viscoelastic fluids to investigate and provide mechanistic explanations for emergent swimming behaviors. I will discuss how flexible filaments (such as flagella) can store energy from a viscoelastic fluid to gain stroke boosts due to fluid elasticity. I will also describe 3D simulations of model organisms such as C. reinhardtii and mammalian sperm, where we use experimentally measured stroke data to separate naturally coupled stroke and fluid effects. We explore why strokes that are adapted to Newtonian fluid environments might not do well in viscoelastic environments.

SeminarNeuroscience

Understanding "why": The role of causality in cognition

Tobias Gerstenberg
Stanford University
Apr 27, 2021

Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
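The counterfactual-simulation idea behind this framework can be sketched in a few lines (a toy illustration of the general approach, not the authors' model): a causal judgment is modeled as the probability, across noisy simulations, that the outcome would have been different had the candidate cause been absent. The toy "physics" and all parameters here are invented for the example.

```python
import random

random.seed(1)

def simulate(ball_a_present, noise=0.2):
    # Toy physics: ball B goes through the gate only if ball A hits it,
    # and each noisy simulation occasionally fails even then.
    return ball_a_present and random.random() > noise

def causal_judgment(n=1000):
    # Counterfactual test: how often would the outcome have differed
    # had the candidate cause (ball A) been absent?
    actual = [simulate(True) for _ in range(n)]
    counterfactual = [simulate(False) for _ in range(n)]
    return sum(a != c for a, c in zip(actual, counterfactual)) / n

score = causal_judgment()
print(score)
```

Graded causal judgments fall out naturally: the noisier the simulated physics, the less often removing the cause changes the outcome, and the weaker the judgment.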

SeminarNeuroscienceRecording

Exploring the relationship between the LFP signal and Behavioral States

Conrado Bosman
Cognitive and Systems Neuroscience Group, University of Amsterdam
Mar 16, 2021

This talk will focus on different aspects of the Local Field Potential (LFP) signal. Classically, LFP fluctuations are related to changes in the functional state of the cortex. Yet, the mechanisms linking LFP changes with the state of the cortex are not well understood. The presentation will start with a brief explanation of the main oscillatory components of the LFP signal, how these different oscillatory components are generated in cortical microcircuits, and how their dynamics can be studied across multiple areas. Thereafter, a case study of a patient with akinetic mutism will be presented, linking cortical states with the behavior of the patient, as well as some preliminary results on how changes in cortical microcircuit dynamics modulate different cortical states and how these changes are reflected in the LFP signal.
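Oscillatory components of a signal like the LFP are typically identified as peaks in its power spectrum. A minimal sketch with a synthetic signal (the frequencies, amplitudes, and noise level are arbitrary choices for illustration):

```python
import numpy as np

fs = 1000  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "LFP": a 10 Hz (alpha-band) and a 60 Hz (gamma-band)
# oscillation plus white noise.
lfp = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 60 * t)
       + 0.3 * rng.standard_normal(t.size))

# Power spectrum via the FFT; oscillations appear as spectral peaks.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(lfp)) ** 2

peak = freqs[power.argmax()]
print(peak)
```

The dominant peak recovers the 10 Hz component; in real recordings, Welch-style averaging over windows is usually preferred to reduce the variance of the spectral estimate.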

SeminarNeuroscience

Life of Pain and Pleasure

Irene Tracey
University of Oxford
Mar 9, 2021

The ability to experience pain is old in evolutionary terms. It is an experience shared across species. Acute pain is the body’s alarm system, and as such it is a good thing. Pain that persists beyond normal tissue healing time (3-4 months) is defined as chronic – it is the system gone wrong and it is not a good thing. Chronic pain has recently been classified as both a symptom and a disease in its own right. It is one of the largest medical health problems worldwide, with one in five adults diagnosed with the condition. The brain is key to the experience of pain and pain relief. This is the place where pain emerges as a perception. So, relating specific brain measures using advanced neuroimaging to the change patients describe in their pain perception induced by peripheral or central sensitization (i.e. amplification), psychological or pharmacological mechanisms has tremendous value. Identifying where amplification or attenuation processes occur along the journey from injury to the brain (i.e. peripheral nerves, spinal cord, brainstem and brain) for an individual and relating these neural mechanisms to specific pain experiences, measures of pain relief, persistence of pain states, degree of injury and the subject's underlying genetics, has neuroscientific and potential diagnostic relevance. This is what neuroimaging has afforded – a better understanding and explanation of why someone’s pain is the way it is. We can go ‘behind the scenes’ of the subjective report to find out what key changes and mechanisms make up an individual’s particular pain experience. A key area of development has been pharmacological imaging, where objective evidence of drugs reaching the target and working can be obtained. We even now understand the mechanisms of placebo analgesia – a powerful phenomenon known about for millennia.
More recently, researchers have been investigating through brain imaging whether there is a pre-disposing vulnerability in brain networks towards developing chronic pain. So, advanced neuroimaging studies can powerfully aid explanation of a subject’s multidimensional pain experience, pain relief (analgesia) and even what makes them vulnerable to developing chronic pain. The application of this goes beyond the clinic and has relevance in courts of law, and other areas of society, such as in veterinary care. Relatively far less work has been directed at understanding what changes in the brain occur during altered states of consciousness induced either endogenously (e.g. sleep) or exogenously (e.g. anaesthesia). However, that situation is changing rapidly. Our recent multimodal neuroimaging work explores how anaesthetic agents produce altered states of consciousness such that perceptual experiences of pain and awareness are degraded. This is bringing us fascinating insights into the complex phenomenon of anaesthesia, consciousness and even the concept of self-hood. These topics will be discussed in my talk alongside my ‘side-story’ of life as a scientist combining academic leadership roles with doing science and raising a family.

SeminarNeuroscience

The properties of large receptive fields as explanation of ensemble statistical representation: A population coding model

Igor Utochkin
National Research University Higher School of Economics
Feb 1, 2021


SeminarNeuroscienceRecording

Preschoolers' Comprehension of Functional Metaphors

Rebecca Zhu
University of California, Berkeley
Dec 9, 2020

Previous work suggests that children’s ability to understand metaphors emerges late in development. Researchers argue that children’s initial failure to understand metaphors is due to an inability to reason about shared relational structures between concepts. However, recent work demonstrates that preschoolers, toddlers, and even infants are already capable of relational reasoning. Might preschoolers also be capable of understanding metaphors, given more sensitive experimental paradigms? I explore whether preschoolers (N = 200, ages 4-5) understand functional metaphors, namely metaphors based on functional similarities. In Experiment 1a, preschoolers rated functional metaphors (e.g. “Roofs are hats”; “Clouds are sponges”) as “smarter” than nonsense statements. In Experiment 1b, adults (N = 48) also rated functional metaphors as “smarter” than nonsense statements (e.g. “Dogs are scissors”; “Boats are skirts”). In Experiment 2, preschoolers preferred functional explanations (e.g. “Both hold water”) over perceptual explanations (e.g. “Both are fluffy”) when interpreting a functional metaphor (e.g. “Clouds are sponges”). In Experiment 3, preschoolers preferred functional metaphors over nonsense statements in a dichotomous-choice task. Overall, this work demonstrates preschoolers’ early-emerging ability to understand functional metaphors.

SeminarNeuroscienceRecording

Attentional Foundations of Framing Effects

Ernst Fehr
University of Zurich
Dec 2, 2020

Framing effects in individual decision-making have puzzled economists for decades because they are hard, if at all, to explain with rational choice theories. Why should mere changes in the description of a choice problem affect decision-making? Here, we examine the hypothesis that changes in framing cause changes in the allocation of attention to the different options – measured via eye-tracking – and give rise to changes in decision-making. We document that the framing of a sure alternative as a gain – as opposed to a loss – in a risk-taking task increases the attentional advantage of the sure option and induces a higher choice frequency of that option – a finding that is predicted by the attentional drift-diffusion model (aDDM). The model also correctly predicts other key findings, such as that the increased attentional advantage of the sure option in the gain frame should also lead to quicker decisions in this frame. In addition, the data reveal that increasing risk aversion at higher stake sizes may also be driven by attentional processes because the sure option receives significantly more attention – regardless of frame – at higher stakes. We also corroborate the causal impact of framing-induced changes of attention on choice with an additional experiment that manipulates attention exogenously. Finally, to study the precise mechanisms underlying the framing effect, we structurally estimate an aDDM that allows for frame- and option-dependent parameters. The estimation results indicate that – in addition to the direct effects of framing-induced changes in attention on choice – the gain frame also causes (i) an increase in the attentional discount of the gamble and (ii) an increased concavity of utility. Our findings suggest that the traditional explanation of framing effects in risky choice in terms of a more concave value function in the gain domain is seriously incomplete and that attentional mechanisms, as hypothesized in the aDDM, play a key role.
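The attentional mechanism at the heart of the aDDM can be sketched very simply (a toy simulation with illustrative parameter values, not the authors' estimated model): evidence accumulates toward a bound, the attended option's value enters at full weight while the unattended option's value is discounted, so an attentional advantage alone biases choice.

```python
import random

rng = random.Random(0)

def addm_choice(v_sure, v_gamble, theta=0.3, d=0.0002, sigma=0.02,
                p_look_sure=0.55, bound=1.0):
    # One trial: relative evidence drifts toward the attended option's
    # value; the unattended option's value is discounted by theta.
    x = 0.0  # positive evidence favours the sure option
    while abs(x) < bound:
        if rng.random() < p_look_sure:  # attentional advantage of sure option
            drift = d * (v_sure - theta * v_gamble)
        else:
            drift = d * (theta * v_sure - v_gamble)
        x += drift + rng.gauss(0.0, sigma)
    return x > 0  # True -> sure option chosen

# Even with equal subjective values, the attentional advantage biases
# choice toward the sure option.
choices = [addm_choice(10, 10) for _ in range(500)]
frac = sum(choices) / len(choices)
print(frac)
```

This is why a frame that shifts gaze toward the sure option can shift choice frequencies without any change in underlying values, and why the same mechanism predicts faster decisions for the favoured option.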

SeminarNeuroscience

A computational explanation for domain specificity in the human brain

Katharina Dobs
University of Giessen
Nov 24, 2020

Many regions of the human brain conduct highly specific functions, such as recognizing faces, understanding language, and thinking about other people’s thoughts. Why might this domain-specific organization be a good design strategy for brains, and what is the origin of domain specificity in the first place? In this talk, I will present recent work testing whether the segregation of face and object perception in human brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition, and found that networks were able to perform both tasks well by spontaneously segregating them into distinct pathways. Critically, the networks had neither prior knowledge nor any inductive bias about the tasks. Furthermore, networks jointly optimized for object categorization and for tasks that apparently do not develop specialization in the human brain, such as food or car recognition, showed less task segregation. These results suggest that functional segregation can spontaneously emerge without a task-specific bias, and that the domain-specific organization of the cortex may reflect a computational optimization for the real-world tasks humans solve.

SeminarPhysics of LifeRecording

Is there universality in biology?

Nigel Goldenfeld
University of Illinois at Urbana-Champaign
Oct 29, 2020

It is sometimes said that there are two reasons why physics is so successful as a science. One is that it deals with very simple problems. The other is that it attempts to account only for universal aspects of systems at a desired level of description, with lower level phenomena subsumed into a small number of adjustable parameters. It is a widespread belief that this approach seems unlikely to be useful in biology, which is intimidatingly complex, where “everything has an exception”, and where there are a huge number of undetermined parameters. I will try to argue, nonetheless, that there are important, experimentally-testable aspects of biology that exhibit universality, and should be amenable to being tackled from a physics perspective. My suggestion is that this can lead to useful new insights into the existence and universal characteristics of living systems. I will try to justify this point of view by contrasting the goals and practices of the field of condensed matter physics with materials science, and then by extension, the goals and practices of the newly emerging field of “Physics of Living Systems” with biology. Specific biological examples that I will discuss include the following:
- Universal patterns of gene expression in cell biology
- Universal scaling laws in ecosystems, including the species-area law, Kleiber’s law, and the Paradox of the Plankton
- Universality of the genetic code
- Universality of thermodynamic utilization in microbial communities
- Universal scaling laws in the tree of life
The question of what can be learned from studying universal phenomena in biology will also be discussed. Universal phenomena, by their very nature, shed little light on detailed microscopic levels of description. Yet there is no point in seeking idiosyncratic mechanistic explanations for phenomena whose explanation is found in rather general principles, such as the central limit theorem, that every microscopic mechanism is constrained to obey.
Thus, physical perspectives may be better suited to answering certain questions, such as universality, than traditional biological perspectives. Concomitantly, it must be recognized that the identification and understanding of universal phenomena may not be a good answer to questions that have traditionally occupied biological scientists. Lastly, I plan to talk about what is perhaps the central question of universality in biology: why does the phenomenon of life occur at all? Is it an inevitable consequence of the laws of physics or some special geochemical accident? What methodology could even begin to answer this question? I will try to explain why traditional approaches to biology do not aim to answer this question, by comparing with our understanding of superconductivity as a physical phenomenon, and with the theory of universal computation.
References:
Nigel Goldenfeld, Tommaso Biancalani, and Farshid Jafarpour. Universal biology and the statistical mechanics of early life. Phil. Trans. R. Soc. A 375, 20160341 (2017).
Nigel Goldenfeld and Carl R. Woese. Life is physics: evolution as a collective phenomenon far from equilibrium. Annu. Rev. Condens. Matter Phys. 2, 375-399 (2011).

SeminarNeuroscienceRecording

A New Approach to the Hard Problem of Consciousness

Mark Solms
Neuroscience Institute, University of Cape Town
Jul 28, 2020

David Chalmers’s (1995) hard problem famously states: “It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises.” Thomas Nagel (1974) wrote something similar: “If we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue about how this could be done.” This presentation will point the way towards the long-sought “good explanation” -- or at least it will provide “a clue”. I will make three points: (1) It is unfortunate that cognitive science took vision as its model example when looking for a ‘neural correlate of consciousness’ because cortical vision (like most cognitive processes) is not intrinsically conscious. There is not necessarily ‘something it is like’ to see. (2) Affective feeling, by contrast, is conscious by definition. You cannot feel something without feeling it. Moreover, affective feeling, generated in the upper brainstem, is the foundational form of consciousness: prerequisite for all the higher cognitive forms. (3) The functional mechanism of feeling explains why and how it cannot go on ‘in the dark’, free of any inner feel. Affect enables the organism to monitor deviations from its expected self-states in uncertain situations and thereby frees homeostasis from the limitations of automatism. As Nagel says, “An organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.” Affect literally constitutes the sentient subject.

SeminarNeuroscienceRecording

The butterfly strikes back: neurons doing 'network' computation

Upinder Singh Bhalla
National Centre for Biological Sciences of the Tata Institute of Fundamental Research.
May 28, 2020

We live in the age of the network: the Internet, social networks, neural networks, ecosystems. This has become one of the main metaphors for how we think about complex systems. This view also dominates the account of brain function. The role of neurons, described by Cajal as the "butterflies of the soul", has been diminished to leaky integrate-and-fire point objects in many models of neural network computation. It is perhaps not surprising that network explanations of neural phenomena use neurons as elementary particles and ascribe all their wonderful capabilities to their interactions in a network. In the network view, the connectome defines the brain and the butterflies have no role. In this talk I'd like to reclaim some key computations from the network and return them to their rightful place at the cellular and subcellular level. I'll start with a provocative look at the potential computational capacity of different kinds of brain computation: network vs. subcellular. I'll then consider different levels of pattern and sequence computation, with a glimpse of the efficiency of the subcellular solutions. Finally, I propose that there is a suggestive mapping between entire nodes of deep networks and individual neurons. This, in my view, is how we can walk around with 1.3 litres and 20 watts of installed computational capacity, still doing far more than giant AI server farms.

ePoster

An Anatomical Explanation of the Inverted Face Effect

Garrison Cottrell, Shubham Kulkarni, Martha Gahl

COSYNE 2025

ePoster

Explainable AI for higher cognitive functions: How to provide explanations in the face of increasing complexity

Rena Bayramova

Neuromatch 5