interpretation
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
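As a sketch of the encoding analyses these studies rely on (not any presenter's specific pipeline), the following Python snippet fits a ridge regression from hypothetical LLM-derived features to simulated voxel responses and scores prediction on held-out data; the array shapes and regularization strength are illustrative assumptions.

```python
# Minimal sketch of a voxelwise encoding analysis. Inputs are stand-ins:
# `llm_features` (n_timepoints x n_dims) would be extracted from an LLM for
# each stimulus segment, `bold` (n_timepoints x n_voxels) from fMRI.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
llm_features = rng.standard_normal((500, 768))   # stand-in for LLM embeddings
bold = rng.standard_normal((500, 1000))          # stand-in for voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(
    llm_features, bold, test_size=0.2, shuffle=False)

model = Ridge(alpha=100.0).fit(X_tr, Y_tr)       # linear map: features -> voxels
pred = model.predict(X_te)

# Encoding performance: per-voxel correlation between predicted and held-out response.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(bold.shape[1])]
print(f"median held-out correlation: {np.median(r):.3f}")
```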
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
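To make the contrast between communication models concrete, here is a toy illustration (not from the Review itself) comparing the shortest-path model with communicability, which sums over walks of all lengths, on a small made-up adjacency matrix.

```python
# Two conceptualizations of connectome signalling on a toy network:
# shortest paths (one optimal route) vs. communicability (all walks,
# discounted by length via the matrix exponential). Matrix is illustrative.
import numpy as np
from scipy.linalg import expm
from scipy.sparse.csgraph import shortest_path

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # toy adjacency matrix

# Shortest-path model: routing over a single optimal path.
D = shortest_path(A, method="D", unweighted=True)

# Communicability: weighted sum over walks of all lengths, exp(A).
G = expm(A)

print("shortest path lengths:\n", D)
print("communicability:\n", G.round(2))
```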
Epilepsy genetics 2023: From research to advanced clinical genetic test interpretation
The presentation will provide an overview of the expanding role of genetic factors in epilepsy. It will delve into the fundamentals of this field and elucidate how digital tools and resources can aid in the re-evaluation of genetic test results. In the initial segment of the presentation, Dr. Lal will examine the advancements made over the past two decades regarding the genetic architecture of various epilepsy types. Additionally, he will present research studies in which he has actively participated, offering concrete examples. Subsequently, during the second part of the talk, Dr. Lal will share the ongoing research projects that focus on epilepsy genetics, bioinformatics, and health record data science.
Precise spatio-temporal spike patterns in cortex and model
The cell assembly hypothesis postulates that groups of coordinated neurons form the basis of information processing. Here, we test this hypothesis by analyzing massively parallel spiking activity recorded in monkey motor cortex during a reach-to-grasp experiment for the presence of significant ms-precise spatio-temporal spike patterns (STPs). For this purpose, the parallel spike trains were analyzed for STPs with the SPADE method (Stella et al, 2019, Biosystems), which detects, counts and evaluates spike patterns for their significance using surrogates (Stella et al, 2022, eNeuro). As a result, we find STPs in 19/20 data sets (each of 15 min) from two monkeys, but only a small fraction of the recorded neurons are involved in STPs. To consider the different behavioral states during the task, we analyzed the data in a quasi time-resolved fashion by dividing them into behaviorally relevant time epochs. The STPs that occur in the various epochs are specific to the behavioral context, in terms of the neurons involved and the temporal lags between the spikes of the STP. Furthermore, we find that STPs often share individual neurons across epochs. Since we interpret the occurrence of a particular STP as the signature of a particular active cell assembly, our interpretation is that the neurons multiplex their cell assembly membership. In a related study, we model these findings by networks with embedded synfire chains (Kleinjohann et al, 2022, bioRxiv 2022.08.02.502431).
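The snippet below is a toy illustration in the spirit of surrogate-based significance testing, not the actual SPADE algorithm: it counts occurrences of one hypothetical lag pattern in simulated spike trains and compares the count against rate-preserving dithered surrogates.

```python
# Toy surrogate test for one candidate spatio-temporal pattern (illustrative;
# SPADE mines and tests many patterns with more careful surrogates).
import numpy as np

rng = np.random.default_rng(1)
T, n_units = 10_000, 3                    # 10 s at 1 ms resolution (illustrative)
spikes = rng.random((n_units, T)) < 0.02  # binary spike trains, ~20 Hz

lags = [0, 5, 12]                         # candidate pattern: unit i fires at t + lags[i]

def pattern_count(trains):
    L = T - max(lags)
    hits = np.ones(L, dtype=bool)
    for i, lag in enumerate(lags):
        hits &= trains[i, lag:lag + L]
    return int(hits.sum())

def dithered(trains):
    # circularly shift each train independently: rates preserved, fine timing destroyed
    return np.stack([np.roll(trains[i], rng.integers(-25, 26))
                     for i in range(n_units)])

observed = pattern_count(spikes)
surr = np.array([pattern_count(dithered(spikes)) for _ in range(1000)])
p = (np.sum(surr >= observed) + 1) / (len(surr) + 1)
print(f"observed count: {observed}, surrogate p ~ {p:.3f}")
```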
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
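A minimal sketch of the compression idea, not the authors' network: a tiny numpy autoencoder trained on sensory inputs that vary smoothly with track position tends to develop hidden units with localized, place-field-like tuning. All sizes, the input model, and the training details are assumptions for illustration.

```python
# Tiny autoencoder on spatially correlated sensory input (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_pos, n_feat, n_hid = 100, 60, 12
positions = np.linspace(0, 1, n_pos)

# Sensory features vary smoothly with position (spatially correlated input).
centers = rng.random(n_feat)
width = 0.15                                # larger width -> more compressible input
X = np.exp(-(positions[:, None] - centers[None, :])**2 / (2 * width**2))
X += 0.05 * rng.standard_normal(X.shape)

W1 = 0.1 * rng.standard_normal((n_feat, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_feat))
lr = 0.05
relu = lambda z: np.maximum(z, 0)

for _ in range(2000):                       # plain SGD on reconstruction error
    idx = rng.integers(0, n_pos, 16)
    x = X[idx]
    h = relu(x @ W1)
    err = h @ W2 - x
    gW2 = h.T @ err / len(idx)
    gH = err @ W2.T * (h > 0)
    gW1 = x.T @ gH / len(idx)
    W1 -= lr * gW1
    W2 -= lr * gW2

tuning = relu(X @ W1)                       # hidden-unit activity vs. position
active_frac = (tuning > 0.5 * tuning.max(0)).mean(0)
print("fraction of track where each unit is active:", active_frac.round(2))
```

Rerunning with a larger `width` (more compressible input) shifts the active fractions, echoing the abstract's dependence of place field size on input compressibility.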
The impact of emerging technologies and methods on the interpretation of genetic variation in autism and fetal genomics
The Secret Bayesian Life of Ring Attractor Networks
Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring attractor networks can track both an HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
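The following is a heavily simplified sketch of circular Kalman filtering for heading direction, not the paper's network implementation: an angular estimate and a certainty are propagated with a Gaussian-limit approximation for the certainty decay, and landmark observations are combined via the product of von Mises densities. All parameters are illustrative.

```python
# Discrete, approximate circular Kalman filter for heading direction.
# Tracks an angular estimate mu and a certainty kappa (cf. bump amplitude).
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 2000
sig_v, kappa_obs = 1.0, 5.0          # velocity noise and landmark reliability

theta = 0.0                          # true heading
mu, kappa = 0.0, 1.0                 # estimate and its certainty

for t in range(n_steps):
    v = np.sin(0.5 * t * dt)                              # self-initiated turning
    theta += v * dt + sig_v * np.sqrt(dt) * rng.standard_normal()

    mu += v * dt                                          # integrate angular velocity
    kappa /= 1.0 + kappa * sig_v**2 * dt                  # certainty decays with noise

    if t % 50 == 0:                                       # occasional landmark input
        z = theta + rng.vonmises(0.0, kappa_obs)
        # combine prior and observation as a product of von Mises densities
        c = kappa * np.exp(1j * mu) + kappa_obs * np.exp(1j * z)
        mu, kappa = np.angle(c), np.abs(c)

err = np.degrees(np.angle(np.exp(1j * (theta - mu))))
print(f"final heading error: {err:.1f} deg, certainty kappa = {kappa:.2f}")
```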
Learning in/about/from the basal ganglia
The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed and computational results using an upward mapping to identify putative low-dimensional control ensembles that may be involved in tuning decision policy. I will also present some recent experimental results and theory, related to effects of extracellular ion dynamics, that run counter to the classical view of basal ganglia pathways and suggest a new interpretation of certain aspects of this framework. For those not so interested in the basal ganglia, I hope that the upward mapping approach and the impact of extracellular ion dynamics will nonetheless be of interest!
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, something which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues, social and pragmatic cues, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory that show that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Connecting structure and function in early visual circuits
How does the brain interpret signals from the outside world? Walking through a park, you might take for granted the ease with which you can understand what you see. Rather than seeing a series of still snapshots, you are able to see simple, fluid movement — of dogs running, squirrels foraging, or kids playing basketball. You can track their paths and know where they are headed without much thought. “How does this process take place?” asks Rudy Behnia, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “For most of us, it’s hard to imagine a world where we can’t see motion, shapes, and color; where we can’t have a representation of the physical world in our head.” And yet this representation does not happen automatically — our brain has no direct connection with the outside world. Instead, it interprets information taken in by our senses. Dr. Behnia is studying how the brain builds these representations. As a starting point, she focuses on how we see motion.
Analogical Reasoning with Neuro-Symbolic AI
Knowledge discovery with computers requires a huge amount of search. Analogical reasoning is effective for efficient knowledge discovery. We therefore proposed analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI combines Symbolic AI with artificial neural networks, making it easy for humans to interpret and robust against data ambiguity and errors. We implemented analogical reasoning systems with Neuro-Symbolic AI models that use word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
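As a toy illustration of the embedding component, with hand-made vectors standing in for trained ones, the sketch below finds the predicate most analogous to a query predicate by cosine similarity: the kind of similarity signal that lets a rule learned for one symbol transfer to its nearest analogue.

```python
# Toy analogical transfer between predicate symbols via embedding similarity.
import numpy as np

# Illustrative hand-made embeddings (a real system would use trained vectors).
emb = {
    "parent":      np.array([0.9, 0.1, 0.0]),
    "mother":      np.array([0.8, 0.2, 0.1]),
    "grandparent": np.array([0.7, 0.1, 0.6]),
    "owns":        np.array([0.0, 0.9, 0.1]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

query = "mother"
scores = {w: cosine(emb[query], v) for w, v in emb.items() if w != query}
best = max(scores, key=scores.get)
print(f"predicate most analogous to '{query}': {best} ({scores[best]:.2f})")
```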
Human-like scene interpretation by a brain-inspired model
Multimodal framework and fusion of EEG, graph theory and sentiment analysis for the prediction and interpretation of consumer decision
The application of neuroimaging methods to marketing has recently gained considerable attention. In analyzing consumer behavior, the inclusion of neuroimaging tools and methods is improving our understanding of consumers' preferences. Human emotions play a significant role in decision making and critical thinking. Emotion classification using EEG data and machine learning techniques has been on the rise in the recent past. We evaluate different feature extraction and feature selection techniques and propose an optimal set of features and electrodes for emotion recognition. Affective neuroscience research can help in detecting emotions when a consumer responds to an advertisement. Successful emotional elicitation is a verification of the effectiveness of an advertisement. EEG provides a cost-effective alternative for measuring advertisement effectiveness while eliminating several drawbacks of existing market research tools that depend on self-reporting. We used graph-theoretical principles to differentiate brain connectivity graphs when a consumer likes a logo versus dislikes it. The fusion of EEG and sentiment analysis can be a real game changer, and this combination has the power and potential to provide innovative tools for market research.
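A minimal sketch of such an EEG emotion-classification pipeline, with simulated data standing in for recordings: band-power features per channel, univariate feature selection, and an SVM classifier. The shapes, frequency bands, and classifier choice are assumptions, not the authors' exact method.

```python
# Band-power features + feature selection + SVM on stand-in EEG epochs.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_chan, n_samp, fs = 120, 32, 512, 128
eeg = rng.standard_normal((n_trials, n_chan, n_samp))   # stand-in EEG epochs
y = rng.integers(0, 2, n_trials)                        # like vs. dislike labels

bands = [(4, 8), (8, 13), (13, 30), (30, 45)]           # theta, alpha, beta, gamma

def band_powers(epoch):
    f, pxx = welch(epoch, fs=fs, nperseg=256, axis=-1)
    return np.concatenate([pxx[:, (f >= lo) & (f < hi)].mean(axis=-1)
                           for lo, hi in bands])

X = np.stack([band_powers(e) for e in eeg])             # trials x (chan * band)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=40),       # keep most informative features
                    SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```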
Separable pupillary signatures of perception and action during perceptual multistability
The pupil provides a rich, non-invasive measure of the neural bases of perception and cognition, and has been of particular value in uncovering the role of arousal-linked neuromodulation, which alters cortical processing as well as pupil size. But pupil size is subject to a multitude of influences, which complicates unique interpretation. We measured pupils of observers experiencing perceptual multistability: an ever-changing subjective percept in the face of unchanging but inconclusive sensory input. In separate conditions the endogenously generated perceptual changes were either task-relevant or not, allowing a separation between perception-related and task-related pupil signals. Perceptual changes were marked by a complex pupil response that could be decomposed into two components: a dilation tied to task execution and plausibly indicative of an arousal-linked noradrenaline surge, and an overlapping constriction tied to the perceptual transient and plausibly a marker of altered visual cortical representation. Constriction amplitude, but not dilation amplitude, systematically depended on the time interval between perceptual changes, possibly providing an overt index of neural adaptation. These results show that the pupil provides a simultaneous reading on interacting but dissociable neural processes during perceptual multistability, and suggest that arousal-linked neuromodulation shapes action but not perception in these circumstances. This presentation covers work that was published in eLife.
The self-consistent nature of visual perception
Vision provides us with a holistic interpretation of the world that is, with very few exceptions, coherent and consistent across multiple levels of abstraction, from scene to objects to features. In this talk I will present results from past and ongoing work in my laboratory that investigates the role top-down signals play in establishing such coherent perceptual experience. Based on the results of several psychophysical experiments I will introduce a theory of “self-consistent inference” and show how it can account for human perceptual behavior. The talk will close with a discussion of how the theory can help us understand more cognitive processes.
Looking and listening while moving
In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least 3 reasons why this turns out to be a difficult problem for the brain to solve (and us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. By preventing eye and head movements, this problem has been neatly sidestepped in the literature, yet self-movement is the norm. Second, self-movement creates visual and auditory image motion. Correct interpretation therefore requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (all?) of these problems could be solved by considering the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model based on simple judgements of audiovisual motion within a circular array of speakers and LEDs that surround the participant while they execute self-controlled head movement.
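For reference, the classic cue integration computation that the talk's framework modifies (to handle biased, partially correlated cues) weights each cue by its inverse variance; a minimal numerical example with made-up values:

```python
# Standard reliability-weighted integration of a visual and an auditory
# azimuth estimate. Values are illustrative.
import numpy as np

s_vis, sigma_vis = 10.0, 2.0     # visual estimate (deg) and its noise sd
s_aud, sigma_aud = 16.0, 4.0     # auditory estimate and its noise sd

w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_aud**2)
s_hat = w_vis * s_vis + (1 - w_vis) * s_aud
sigma_hat = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_aud**2))

print(f"integrated estimate: {s_hat:.1f} deg "
      f"(sd {sigma_hat:.2f}, visual weight {w_vis:.2f})")
```

The integrated estimate falls between the two cues, closer to the more reliable one, and its variance is lower than either cue alone; biases and correlations between cues break these textbook properties, which is the talk's point of departure.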
The role of high- and low-level factors in smooth pursuit of predictable and random motions
Smooth pursuit eye movements are among our most intriguing motor behaviors. They are able to keep the line of sight on smoothly moving targets with little or no overt effort or deliberate planning, and they can respond quickly and accurately to changes in the trajectory of motion of targets. Nevertheless, despite these seeming automatic characteristics, pursuit is highly sensitive to high-level factors, such as the choices made about attention, or beliefs about the direction of upcoming motion. Investigators have struggled for decades with the problem of incorporating both high- and low-level processes into a single coherent model. This talk will present an overview of the current state of efforts to incorporate high- and low-level influences, as well as new observations that add to our understanding of both types of influences. These observations (in contrast to much of the literature) focus on the directional properties of pursuit. Studies will be presented that show: (1) the direction of smooth pursuit made to pursue fields of noisy random dots depends on the relative reliability of the sensory signal and the expected motion direction; (2) smooth pursuit shows predictive responses that depend on the interpretation of cues that signal an impending collision; and (3) smooth pursuit during a change in target direction displays kinematic properties consistent with the well-known two-thirds power law. Implications for incorporating high- and low-level factors into the same framework will be discussed.
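Point (3) refers to the two-thirds power law relating movement speed to path curvature; a quick numerical check on an ellipse traced at constant angular frequency, where the law holds exactly:

```python
# Two-thirds power law: angular speed scales as curvature^(2/3), equivalently
# tangential speed v scales as curvature^(-1/3). Verified on an ellipse.
import numpy as np

t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)          # elliptical trajectory

dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)

v = np.hypot(dx, dy)                             # tangential speed
kappa = np.abs(dx * ddy - dy * ddx) / v**3       # curvature

slope = np.polyfit(np.log(kappa), np.log(v), 1)[0]
print(f"fitted exponent: {slope:.3f} (two-thirds law predicts -1/3)")
```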
Analogy and ethics: opportunities at the intersection
Analogy offers a new interpretation of a common concern in ethics: whether decision making includes or excludes a consideration of moral issues. This is often discussed as the moral awareness of decision makers and considered a motivational concern. The possible new interpretation is that moral awareness is in part a matter of expertise. Some failures of moral awareness can then be understood as stemming from novicehood. Studies of analogical transfer are consistent with the possibility that moral awareness is in part a matter of expertise, that as a result motivation is less helpful than some prior theorizing would predict, and that many adults are not as expert in the domain of ethics as one might hope. The possibility of expert knowledge of ethical principles leads to new questions and opportunities.
Strong and weak principles of neural dimension reduction
Large-scale, single-neuron-resolution recordings are inherently high-dimensional, with as many dimensions as neurons. To make sense of them, for many the answer is: reduce the number of dimensions. In this talk I argue we can distinguish weak and strong principles of neural dimension reduction. The weak principle is that dimension reduction is a convenient tool for making sense of complex neural data. The strong principle is that dimension reduction moves us closer to how neural circuits actually operate and compute. Distinguishing these principles is crucial, for which one we subscribe to provides radically different interpretations of the same dimension reduction techniques applied to the same data. I outline experimental evidence for each principle, but illustrate how we could make either the weak or the strong principle appear to be true based on innocuous-looking analysis decisions. These insights suggest arguments over low- and high-dimensional neural activity need better constraints from both experiment and theory.
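As one concrete example of an innocuous-looking analysis decision (my illustration, not the talk's specific analyses): the same simulated low-rank signal looks low-dimensional after trial-averaging but high-dimensional in single trials.

```python
# Apparent dimensionality depends on preprocessing: low-rank latent signal
# plus per-neuron noise, summarized before vs. after trial-averaging.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_time, n_trials, rank = 100, 200, 50, 3

latents = rng.standard_normal((rank, n_time))
loading = rng.standard_normal((n_neurons, rank))
trials = loading @ latents + 2.0 * rng.standard_normal((n_trials, n_neurons, n_time))

def n_dims_90(X):
    """Number of principal components needed to explain 90% of variance."""
    X = X - X.mean(axis=1, keepdims=True)
    var = np.linalg.svd(X, compute_uv=False)**2
    return int(np.searchsorted(np.cumsum(var) / var.sum(), 0.9) + 1)

print("single trial:", n_dims_90(trials[0]))          # noise inflates dimensionality
print("trial average:", n_dims_90(trials.mean(0)))    # averaging reveals low rank
```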
Interpretation of SYNGAP1 Variants
Inclusive Data Science
A single person can be the source of billions of data points, whether these are generated from everyday internet use, healthcare records, wearable sensors or participation in experimental research. This vast amount of data can be used to make predictions about people and systems: what is the probability this person will develop diabetes in the next year? Will commit a crime? Will be a good employee? Is of a particular ethnicity? Predictions are simply represented by a number, produced by an algorithm. A single number in itself is not biased. How that number was generated, interpreted and subsequently used are all processes deeply susceptible to human bias and prejudices. This session will explore a philosophical perspective of data ethics and discuss practical steps to reducing statistical bias. There will be opportunity in the last section of the session for attendees to discuss and troubleshoot ethical questions from their own analyses in a ‘Data Clinic’.
Transforming task representations
Humans can adapt to a novel task on their first try. By contrast, artificial intelligence systems often require immense amounts of data to adapt. In this talk, I will discuss my recent work (https://www.pnas.org/content/117/52/32970) on creating deep learning systems that can adapt on their first try by exploiting relationships between tasks. Specifically, the approach is based on transforming a representation for a known task to produce a representation for the novel task, by inferring and then using a higher-order function that captures a relationship between the tasks. This approach can be interpreted as a type of analogical reasoning. I will show that task transformation can allow systems to adapt to novel tasks on their first try in domains ranging from card games, to mathematical objects, to image classification and reinforcement learning. I will discuss the analogical interpretation of this approach, an analogy between levels of abstraction within the model architecture that I refer to as homoiconicity, and what this work might suggest about using deep-learning models to infer analogies more generally.
From genetics to neurobiology through transcriptomic data analysis
Over the past years, genetic studies have uncovered hundreds of genetic variants associated with complex brain disorders. While this represents a major step forward in understanding the genetic etiology of brain disorders, the functional interpretation of these variants remains challenging. We aim to help with the functional characterization of variants through transcriptomic data analysis. For instance, we rely on brain transcriptome atlases, such as the Allen Brain Atlases, to infer functional relations between genes. One example is the identification of signaling mechanisms of steroid receptors. Further, by integrating brain transcriptome atlases with neuropathology and neuroimaging data, we identify key genes and pathways associated with brain disorders (e.g. Parkinson's disease). With technological advances, we can now profile gene expression in single cells at large scale. These developments have presented significant computational challenges. Our lab focuses on developing scalable methods to identify cells in single-cell data through interactive visualization, scalable clustering, classification, and interpretable trajectory modelling. We also work on methods to integrate single-cell data across studies and technologies.
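A toy sketch of the atlas-based inference step, with simulated expression values standing in for real atlas data: genes whose expression covaries across brain regions are flagged as candidate functional partners of a seed gene.

```python
# Co-expression across brain regions as a proxy for functional relatedness.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_genes = 200, 50
expr = rng.standard_normal((n_regions, n_genes))
expr[:, 1] = expr[:, 0] + 0.3 * rng.standard_normal(n_regions)  # gene 1 tracks gene 0

co_expr = np.corrcoef(expr, rowvar=False)        # gene x gene correlation matrix

seed = 0                                         # e.g. a steroid-receptor gene
partners = np.argsort(-np.abs(co_expr[seed]))[1:6]
print("top co-expression partners of seed gene:", partners)
```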
The structure of behavior entrained to long intervals
Interpretation of interval timing data generated from animal models is complicated by ostensible motivational effects which arise from the delay-to-reward imposed by interval timing tasks, as well as overlap between timed and non-timed responses. These factors become increasingly prevalent at longer intervals. To address these concerns, two adjustments to long interval timing tasks are proposed. First, subjects should be afforded with reinforced non-timing behaviors concurrent with timing. Second, subjects should initiate the onset of timed stimuli. Under these conditions, interference by extraneous behavior would be detected in the rate of concurrent non-timing behaviors, and changes in motivation would be detected in the rate at which timed stimuli are initiated. In a task with these characteristics, rats initiated a concurrent fixed-interval (FI) random-ratio (RR) schedule of reinforcement. This design facilitated response-initiated timing behavior, even at increasingly long delays. Pre-feeding manipulations revealed an effect on the number of initiated trials, but not on the timing peak function.
Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models
The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.
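For readers unfamiliar with the model class, the sketch below simulates the kind of lagged structure GIMME-type models estimate, with autoregressive (diagonal) and cross-lagged (off-diagonal) effects, and recovers it by least squares. This is a simplified stand-in for the actual GIMME estimation procedure, and all coefficients are illustrative.

```python
# Simulate and recover a lagged VAR with autoregressive and cross-lagged
# effects, the directed structure at issue in the talk.
import numpy as np

rng = np.random.default_rng(0)
T = 500                                        # "scan length" in samples
Phi = np.array([[0.6, 0.0, 0.0],               # strong autoregressive effects
                [0.3, 0.5, 0.0],               # node 0 -> node 1 (cross-lagged)
                [0.0, 0.2, 0.6]])              # node 1 -> node 2

eta = np.zeros((T, 3))
for t in range(1, T):
    eta[t] = Phi @ eta[t - 1] + rng.standard_normal(3)

# Least-squares recovery of lagged effects: regress eta_t on eta_{t-1}.
X, Y = eta[:-1], eta[1:]
Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print("true directed effect 0->1:", Phi[1, 0],
      " estimated:", Phi_hat[1, 0].round(2))
```

Shrinking `T` or weakening the diagonal of `Phi` degrades recovery, which mirrors the talk's finding that directionality estimates are most precise when autoregressive and cross-lagged relations are relatively strong.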
How our biases may influence our study of visual modalities: Two tales from the sea
It has long been appreciated (and celebrated) that certain species have sensory capabilities that humans do not share, for example polarization, ultraviolet, and infrared vision. What is less appreciated, however, is that our position as terrestrial human scientists can significantly affect our study of animal senses and signals, even within modalities that we do share. For example, our acute vision can lead us to over-interpret the relevance of fine patterns in animals with coarser vision, and our Cartesian heritage as scientists can lead us to divide sensory modalities into orthogonal parameters (e.g. hue and brightness for color vision), even though this division may not exist within the animal itself. This talk examines two cases from marine visual ecology where a reconsideration of our biases as sharp-eyed Cartesian land mammals can help address questions in visual ecology. The first case examines the enormous variation in visual acuity among animals with image-forming eyes, and focuses on how acknowledging the typically poorer resolving power of animals can help us interpret the function of color patterns in cleaner shrimp and their client fish. The second case examines how the typical human division of polarized light stimuli into angle and degree of polarization is problematic, and how a physiologically relevant interpretation is both closer to the truth and resolves a number of issues, particularly when considering the propagation of polarized light.
Neurosexism and the brain: how gender stereotypes can distort or even damage research
The ‘Hunt the Sex Difference’ agenda has informed brain research for decades, if not centuries. This talk aims to demonstrate how a fixed belief in differences between ‘male’ and ‘female’ brains can narrow and even distort the research process. This can include the questions that are asked, the methodology selected and the analytical pipeline. It can also powerfully inform the interpretation of results and the ‘spin’ used in the public communication of such research.
Free Will and the COINTOB Model of Decision-Making
The COINTOB (conditional intention and integration to bound) model provides a heuristic framework of processes in Libet-style experiments. The model is based on three assumptions. First, brain activation preceding conscious intentions in Libet-style experiments does not reflect an unconscious decision but rather the unfolding of a decision process. Second, the time of conscious decision (W) reflects the moment in time when the decision boundary is crossed. This interpretation of W is consistent with our apparent intuition that we decide in the moment we experience the conscious intention to act. Third, the decision process is configured by conscious intentions that participants form at the beginning of the experiment based on the experimental instruction. Brass and Mele discuss the model, conceptual background for it, and the model’s bearing on free will.
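A minimal sketch of the integration-to-bound component, under the model's second assumption that W is the time the decision boundary is crossed; the drift, noise, and bound values are illustrative, and the reflecting floor is a common modeling convenience rather than part of COINTOB.

```python
# Noisy accumulation to a bound; W modeled as the boundary-crossing time.
import numpy as np

rng = np.random.default_rng(0)
dt, drift, noise, bound = 0.001, 0.4, 1.0, 1.0   # illustrative parameters

def crossing_time():
    x, t = 0.0, 0.0
    while x < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 0.0)          # reflecting floor (a modeling choice)
        t += dt
    return t

W = [crossing_time() for _ in range(1000)]
print(f"mean W: {np.mean(W):.2f} s, sd: {np.std(W):.2f} s")
```

On this reading, slow pre-crossing accumulation is exactly the kind of brain activity that precedes the conscious intention without itself being an unconscious decision, consistent with the model's first assumption.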
Towards better interoceptive biomarkers in computational psychiatry
Empirical evidence and theoretical models both increasingly emphasize the importance of interoceptive processing in mental health. Indeed, many mood and psychiatric disorders involve disturbed feelings and/or beliefs about the visceral body. However, current methods to measure interoceptive ability are limited in a number of ways, restricting the utility and interpretation of interoceptive biomarkers in psychiatry. I will present some newly developed measures and models which aim to improve our understanding of disordered brain-body interaction in psychiatric illnesses.
Modelling affective biases in rodents: behavioural and computational approaches
My research focuses, broadly speaking, on how emotions impact decision making. Specifically, I am interested in affective biases, a phenomenon known to be important in depression. Using a rodent decision-making task combined with computational modelling, I have investigated how different antidepressant and pro-depressant manipulations that are known to alter mood in humans alter judgement bias, and provided insight into the decision processes that underlie these behaviours. I will also highlight how the combination of behaviour and modelling can provide a truly translational approach, enabling comparison and interpretation of the same cognitive processes between animal and human research.
Human cognitive biases and the role of dopamine
Cognitive bias is a "subjective reality" that is uniquely created in the brain and affects our various behaviors. It may lead to what is widely called irrationality in behavioral economics, such as inaccurate judgment and illogical interpretation, but it also has an adaptive aspect in terms of mental hygiene. When such cognitive bias is regarded as a product of information processing in the brain, the approach to clarify the mechanism in the brain will play a part in finding the direct relations between the brain and the mind. In my talk, I will introduce our studies investigating the neural and molecular bases of cognitive biases, especially focusing on the role of dopamine.
Making neural nets simple enough to succeed at universal relational generalization
Traditional brain-style (connectionist) approaches basically hit a wall when it comes to relational cognition. As an alternative to the well-known approaches of structured connectionism and deep learning, I present an engine for relational pattern recognition based on minimalist reinterpretations of first principles of connectionism. Results of computational experiments will be discussed on problems testing relational learning and universal generalization.
Human voluntary action: from thought to movement
The ability to decide and act autonomously is a distinctive feature of human cognition. From a motor neurophysiology viewpoint, these 'voluntary' actions can be distinguished by the lack of an obvious triggering sensory stimulus: the action is considered to be a product of thought, rather than a reflex result of a specific input. A reverse engineering approach shows that such actions are caused by neurons of the primary motor cortex, which in turn depend on medial frontal areas, and finally a combination of prefrontal cortical connections and subcortical drive from basal ganglia loops. One traditional marker of voluntary action is the EEG readiness potential (RP), recorded over the frontal cortex prior to voluntary actions. However, the interpretation of this signal remains controversial, and very few experimental studies have attempted to link the RP to the thought process that leads to voluntary action. In this talk, I will report new studies showing that learning an internal model of the optimum delay at which to act influences the amplitude of the RP. More generally, a scientific understanding of voluntariness and autonomy will require new neurocognitive paradigms connecting thought and action.
It’s not what you look at that matters, it’s what you see
People frequently interpret the same information differently, based on their prior beliefs and views. This may occur in everyday settings, as when two friends are watching the same movie, but also in more consequential circumstances, such as when people interpret the same news differently based on their political views. The role of subjective knowledge in altering how the brain processes narratives has been explored mainly in controlled settings. I will present two projects that examine the neural mechanisms underlying narrative interpretation “in the wild”: how responses differ between two groups of people who interpret the same narrative in two coherent, but opposing, ways. In the first project we manipulated participants' prior knowledge to make them interpret the narrative differently, and found that responses in high-order areas, including the default mode network, language areas and subsets of the mirror neuron system, tend to be similar among people who share the same interpretation, but different from people with an opposing interpretation. In contrast to the active manipulation of participants' interpretation in the first study, in the second (ongoing) project we examine these processes in a more ecological setting. Taking advantage of people's natural tendencies to interpret the world through their own (political) filters, we examine these mechanisms while measuring brain responses to political movie clips. These studies are intended to deepen our understanding of differences in subjective construal processes by mapping their underlying brain mechanisms.
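A sketch of the group-based inter-subject correlation logic such analyses typically use (my illustration, not the projects' exact pipeline): a subject's response time course correlates more with the average of others who share their interpretation than with the average of the opposing group.

```python
# Within-group vs. between-group inter-subject correlation on simulated
# time courses with a shared stimulus-driven component plus an
# interpretation-specific component.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_time = 10, 300
shared = rng.standard_normal(n_time)                 # stimulus-driven component
bias_a, bias_b = rng.standard_normal((2, n_time))    # interpretation-specific parts

group_a = shared + bias_a + rng.standard_normal((n_per_group, n_time))
group_b = shared + bias_b + rng.standard_normal((n_per_group, n_time))

def isc(subject, group):
    return np.corrcoef(subject, group.mean(axis=0))[0, 1]

within = np.mean([isc(s, np.delete(group_a, i, 0)) for i, s in enumerate(group_a)])
between = np.mean([isc(s, group_b) for s in group_a])
print(f"within-group ISC: {within:.2f}, between-group ISC: {between:.2f}")
```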
Model Selection in Sensory Data Interpretation
Bernstein Conference 2024
Supervised learning and interpretation of plasticity rules in spiking neural networks
COSYNE 2022
Semi-supervised quantification and interpretation of undirected human behavior
COSYNE 2023