What it’s like is all there is: The value of Consciousness
Over the past thirty years or so, cognitive neuroscience has made spectacular progress in understanding the biological mechanisms of consciousness. Consciousness science, as this field is now sometimes called, was not only nonexistent thirty years ago; its very name seemed like an oxymoron: how can there be a science of consciousness? And yet, despite this scepticism, we are now equipped with a rich set of sophisticated behavioural paradigms, an impressive array of techniques for observing the brain in action, and an ever-growing collection of theories and speculations about the putative biological mechanisms through which information processing becomes conscious. This is all well and good, even promising, but we also seem to have thrown the baby out with the bathwater, or at least to have forgotten it in the crib: consciousness is not just mechanisms, it is what it feels like. In other words, while we have thousands of informative studies of access-consciousness, we have little in the way of work on phenomenal consciousness. But that, what it feels like, is truly what “consciousness” is about. Understanding why it feels like something to be me and like nothing (panpsychists notwithstanding) for a stone to be a stone is what the field has always been after. However, while it is relatively easy to study access-consciousness through the contrastive approach applied to reports, it is much less clear how to study phenomenology, its structure, and its function. Here, I first give an overview of work on what consciousness does (the "how"). Next, I ask what difference feeling things makes and what function phenomenology might play. I argue that subjective experience has intrinsic value and plays a functional role in everything that we do.
Memory formation in hippocampal microcircuit
The centre of memory is the medial temporal lobe (MTL), and especially the hippocampus. In our research, a flexible, brain-inspired computational microcircuit of the CA1 region of the mammalian hippocampus was upgraded and used to examine how information retrieval is affected under different conditions. Six models (1-6) were created by modulating different excitatory and inhibitory pathways. The results showed that increasing the strength of the feedforward excitation was the most effective way to improve memory recall; in other words, it allows the system to access stored memories more accurately.
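To make the recall manipulation concrete, here is a minimal toy attractor-network sketch (not the authors' CA1 microcircuit): patterns are stored with a Hebbian rule, and a hypothetical parameter `g_ff` scales the strength of the feedforward drive from the cue during retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                       # neurons, stored patterns

# Store P random binary patterns with a Hebbian (outer-product) rule.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, g_ff, steps=10):
    """Iterate the network from a noisy cue; g_ff scales the
    feedforward (external) drive relative to recurrent input."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s + g_ff * cue)
        s[s == 0] = 1
    return s

target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=N // 5, replace=False)   # corrupt 20% of bits
cue[flip] *= -1

overlap = lambda a, b: float(a @ b) / N
print(f"cue overlap:      {overlap(cue, target):.2f}")
print(f"recalled overlap: {overlap(recall(cue, g_ff=0.3), target):.2f}")
```

In this toy setting, varying `g_ff` trades off how strongly the (noisy) cue clamps the network against the recurrent clean-up dynamics; the actual microcircuit model modulates many more excitatory and inhibitory pathways.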
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling discovered that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs group level categories. I will end presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
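A minimal sketch of the kind of feedback loop described above, assuming a simple value-learning agent (an illustration, not the authors' model): one early negative experience with a group drives avoidance, and avoidance in turn prevents the negative belief from ever being corrected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups with IDENTICAL true interaction quality.
true_mean = {"A": 0.0, "B": 0.0}
beliefs = {"A": 0.0, "B": 0.0}      # agent's running value estimates
counts = {"A": 0, "B": 0}
alpha = 0.3                          # learning rate

# A single early bad experience with group B.
beliefs["B"] = -1.0
counts["B"] = 1

for _ in range(200):
    # Greedy approach: interact only with the better-seeming group.
    g = max(beliefs, key=beliefs.get)
    reward = rng.normal(true_mean[g], 1.0)
    beliefs[g] += alpha * (reward - beliefs[g])
    counts[g] += 1

print(counts)    # group B is rarely sampled again
print(beliefs)   # its negative belief persists largely uncorrected
```

Because the agent stops gathering evidence about the avoided group, the biased belief is self-sustaining even though both groups are objectively identical.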
The quest for brain identification
In the 17th century, the physician Marcello Malpighi observed the existence of distinctive patterns of ridges and sweat glands on fingertips. This was a major breakthrough, and it originated a long and continuing quest for ways to uniquely identify individuals based on fingerprints, a technique still in massive use today. It is only in the past few years that technologies and methodologies have achieved high-quality measures of an individual’s brain to the extent that personality traits and behavior can be characterized. The concept of “fingerprints of the brain” is very novel and was boosted by a seminal publication by Finn et al. in 2015. They were among the first to show that an individual’s functional brain connectivity profile is both unique and reliable, similar to a fingerprint, and that it is possible to identify an individual among a large group of subjects solely on the basis of his or her connectivity profile. Yet the discovery of brain fingerprints opened up a plethora of new questions. In particular, what exactly is the information encoded in brain connectivity patterns that ultimately leads to correctly differentiating someone’s connectome from anybody else’s? In other words, what makes our brains unique? In this talk I am going to partially address these open questions while keeping a personal viewpoint on the subject. I will outline the main findings, discuss potential issues, and propose future directions in the quest for identifiability of human brain networks.
Connectome-based models of neurodegenerative disease
Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
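A minimal sketch of the core idea behind connectome-based models, assuming the simplest variant, diffusion of pathology along the structural network (dx/dt = -beta * L * x, with L the graph Laplacian of the connectome):

```python
import numpy as np

# Toy structural connectome: 4 regions in a chain, symmetric weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

beta, dt, steps = 0.5, 0.01, 500
x = np.array([1.0, 0.0, 0.0, 0.0])      # pathology seeded in region 0

# Forward-Euler integration of dx/dt = -beta * L @ x
for _ in range(steps):
    x = x - dt * beta * (L @ x)

print(np.round(x, 3))                   # pathology spreads along connections
print(round(x.sum(), 6))                # total burden is conserved
```

In real applications the adjacency matrix comes from tractography of human diffusion MRI, and richer variants add production, clearance, and aggregation terms; the diffusion core above is what lets the model test whether observed atrophy patterns follow the brain's communication pathways.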
Dyslexias in words and numbers
Prosody in the voice, face, and hands changes which words you hear
Speech conveys both segmental information (i.e., about vowels and consonants) and suprasegmental information, cued through pitch, intensity, and duration, also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody, in all its multisensory forms, is a potent factor in speech perception, determining which speech sounds we hear.
Learning through the eyes and ears of a child
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) with visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks; 2) with language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity; 3) with paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multimodal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.
A possible role of the posterior alpha as a railroad switcher between dorsal and ventral pathways
Suppose you are on your favorite touchscreen device, consciously and deliberately deciding which emails to read or delete. In other words, you are consciously and intentionally looking, tapping, and swiping. Now suppose that you are doing this while neuroscientists are recording your brain activity. Eventually, the neuroscientists are familiar enough with your brain activity and behavior that they run an experiment with subliminal cues which reveals that your looking, tapping, and swiping seem to be determined by a random switch in your brain. You are not aware of it, or of its impact on your decisions or movements. Would these predictions undermine your sense of free will? Some have argued that they should. Although this inference from unreflective and/or random intention mechanisms to free-will skepticism may seem intuitive at first, there are already objections to it. So, even if this thought experiment is plausible, it may not actually undermine our sense of free will.
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. 
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
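As a toy illustration of the geometry described above (purely schematic, not the paper's receptive field models): stimuli live on a hypersphere, each model neuron responds within a convex spherical cap, and the resulting binary codewords inherit the stimulus geometry, so nearby stimuli receive codewords that are close in Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_neurons = 10, 200

def unit(v):
    return v / np.linalg.norm(v)

# Each "neuron" responds within a convex spherical cap: it is active
# iff the cosine to its preferred direction exceeds a threshold.
centers = np.array([unit(rng.normal(size=dim)) for _ in range(n_neurons)])
thresh = 0.2

def codeword(stim):
    return (centers @ stim > thresh).astype(int)

# Nearby vs distant stimuli on the hypersphere.
s1 = unit(rng.normal(size=dim))
s2 = unit(s1 + 0.2 * rng.normal(size=dim))     # small perturbation of s1
s3 = unit(rng.normal(size=dim))                # unrelated stimulus

ham = lambda a, b: int(np.sum(a != b))
print("near :", ham(codeword(s1), codeword(s2)))
print("far  :", ham(codeword(s1), codeword(s3)))
```

The point of the sketch is the last two lines: the combinatorial code carries the stimulus geometry intrinsically, without the experimenter ever computing receptive fields against external variables.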
Epigenome regulation in neocortex expansion and generation of neuronal subtypes
Evolutionarily, the expansion of the human neocortex accounts for many of the unique cognitive abilities of humans. This expansion appears to reflect the increased proliferative potential of basal progenitors (BPs) in mammalian evolution. Furthermore, cortical progenitors generate both glutamatergic excitatory neurons (ENs) and GABAergic inhibitory interneurons (INs) in the human cortex, whereas they produce exclusively ENs in rodents. The increased proliferative capacity and neuronal subtype generation of cortical progenitors in mammalian evolution may have evolved through epigenetic alterations. However, whether or how the epigenome in cortical progenitors differs between humans and other species is unknown. Here, we report that histone H3 acetylation is a key epigenetic regulator of BP amplification, neuronal subtype generation, and cortical expansion. Through epigenetic profiling of sorted BPs, we show that H3K9 acetylation is low in murine BPs and high in human BPs. Elevated H3K9ac preferentially increases BP proliferation, increasing the size and folding of the normally smooth mouse neocortex. Furthermore, we found that elevated H3 acetylation activates the expression of IN genes in the developing mouse cortex and promotes the proliferation of IN progenitor-like cells in the cortex of Pax6 mutant mouse models. Mechanistically, H3K9ac drives BP amplification and the proliferation of these IN progenitor-like cells by increasing expression of the evolutionarily regulated gene TRNP1. Our findings demonstrate a previously unknown mechanism that controls neocortex expansion and the generation of neuronal subtypes.
Keywords: cortical development, neurogenesis, basal progenitors, cortical size, gyrification, excitatory neuron, inhibitory interneuron, epigenetic profiling, epigenetic regulation, H3 acetylation, H3K9ac, TRNP1, PAX6
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues, social and pragmatic cues, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and gain insights about them. However, just having a list of these cues is not enough: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Visualising time in the human brain
We all have a sense of time. Yet it is a particularly intangible sensation. So how is our “sense” of time represented in the brain? Functional neuroimaging studies have consistently identified a network of regions, including Supplementary Motor Area and basal ganglia, that are activated when participants make judgements about the duration of currently unfolding events. In parallel, left parietal cortex and cerebellum are activated when participants predict when future events are likely to occur. These structures are activated by temporal processing even when task goals are purely perceptual. So why should the perception of time be represented in regions of the brain that have more traditionally been implicated in motor function? One possibility is that we learn about time through action. In other words, action could provide the functional scaffolding for learning about time in childhood, explaining why it has come to be represented in motor circuits of the adult brain.
Analogical Reasoning with Neuro-Symbolic AI
Knowledge discovery with computers requires a huge amount of search, and analogical reasoning can make such discovery far more efficient. We therefore proposed analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI combines Symbolic AI with artificial neural networks, and is both easy for humans to interpret and robust against data ambiguity and errors. We implemented analogical reasoning systems with Neuro-Symbolic AI models that use word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted previously unknown rules from knowledge bases described in Prolog. To our knowledge, the proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
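A minimal sketch of the soft, embedding-based matching that makes such systems robust: a symbolic rule like barks(dog) can fire analogically for a similar word. The embeddings and threshold below are toy values for illustration only, not the system's actual model.

```python
import numpy as np

# Toy embeddings (assumed, for illustration). In the systems described
# above these would come from a trained word-embedding model.
emb = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "wolf":  np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.0, 0.1, 0.9]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A crisp rule in the knowledge base: barks(dog).
# Soft unification: the rule can fire for "wolf" because
# sim(dog, wolf) is high, but not for "car".
def soft_match(word, head="dog", threshold=0.9):
    return cos(emb[word], emb[head]) >= threshold

print(soft_match("wolf"))   # True: analogical match
print(soft_match("car"))    # False: no match
```

Replacing exact symbol unification with this graded similarity test is what lets a logic-based system generalize rules to unseen but related terms while remaining interpretable.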
What is Cognitive Neuropsychology Good For? An Unauthorized Biography
Abstract: There is no doubt that the study of brain-damaged individuals has contributed greatly to our understanding of the mind/brain. Within this broad approach, cognitive neuropsychology accentuates the cognitive dimension: it investigates the structure and organization of perceptual, motor, cognitive, and language systems (prerequisites for understanding the functional organization of the brain) through the analysis of their dysfunction following brain damage. Significant insights have come specifically from this paradigm. But progress has been slow, enthusiasm for this approach has waned somewhat in recent years, and existing findings are used less and less to constrain new theories. What explains the current diminished status of cognitive neuropsychology? One reason may be a failure to calibrate expectations about the effective contribution of different subfields of the study of the mind/brain, as these are determined by their natural peculiarities: the types of available observations and their complexity, the opportunity of access to such observations, the possibility of controlled experimentation, and the like. Here, I also explore the merits and limitations of cognitive neuropsychology, with particular focus on the role of intellectual, pragmatic, and societal factors that determine scientific practice within the broader domains of cognitive science/neuroscience. I conclude on an optimistic note about the continuing unique importance of cognitive neuropsychology: although limited to the study of experiments of nature, it offers a privileged window into significant aspects of the mind/brain that are not easily accessible through other approaches. Biography: Alfonso Caramazza's research has focussed extensively on how words and their meanings are represented in the brain.
His early pioneering studies helped to reformulate our thinking about Broca's aphasia (not limited to production) and formalised the logic of patient-based neuropsychology. More recently he has been instrumental in reconsidering popular claims about embodied cognition.
Do we reason differently about affectively charged analogies? Insights from EEG research
Affectively charged analogies are commonly used in literature and art, but also in politics and argumentation. There are reasons to think we may process these analogies differently. Notably, analogical reasoning is a complex process that draws on cognitive resources, which are limited. In the presence of affectively charged content, some of these resources might be directed towards affective processing and away from analogical reasoning. To investigate this idea, I examined the effects of affective charge on differences in brain activity evoked by sound versus unsound analogies. The presentation will detail the methods and results of two such experiments: one in which participants saw analogies formed from neutral and negative words, and one in which the analogies were created by combining conditioned symbols. I will also briefly discuss future research aiming to investigate the effects of analogical reasoning on brain activity related to affective processing.
Human memory: mathematical models and experiments
I will present my recent work on mathematical modeling of human memory. I will argue that memory recall of random lists of items is governed by a universal algorithm, resulting in an analytical relation between the number of items in memory and the number of items that can be successfully recalled. The retention of items in memory, on the other hand, is not universal and differs across item types; in particular, retention curves for words and sketches differ even when the sketches are made to carry only information about the object being drawn. I will discuss putative reasons for these observations and introduce a phenomenological model predicting retention curves.
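A minimal sketch of a deterministic, similarity-based recall walk in the spirit of this line of work (the details of the published model differ): recall jumps from the current item to its most similar neighbour, avoiding an immediate bounce-back, and terminates once the trajectory cycles. The number of distinct items recalled then grows sublinearly with list length M.

```python
import numpy as np

def n_recalled(M, seed=3):
    """Number of distinct items retrieved by a deterministic 'recall walk'
    over a random symmetric similarity matrix: jump to the most similar
    item, or to the second most similar when that would bounce back."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(M, M))
    S = (S + S.T) / 2                     # symmetric similarities
    np.fill_diagonal(S, -np.inf)          # no self-transitions
    recalled, seen, prev, cur = {0}, set(), -1, 0
    while (prev, cur) not in seen:        # stop once the walk cycles
        seen.add((prev, cur))
        order = np.argsort(S[cur])[::-1]  # most similar first
        nxt = int(order[0]) if order[0] != prev else int(order[1])
        prev, cur = cur, nxt
        recalled.add(cur)
    return len(recalled)

for M in (16, 64, 256):
    print(M, n_recalled(M))   # recalled count grows sublinearly with M
```

Because the transition depends only on the pair (previous, current), the walk must eventually revisit a pair and cycle, which is what caps recall well below the list length and produces an analytical items-recalled vs. items-in-memory relation in the full model.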
Inferring informational structures in neural recordings of drosophila with epsilon-machines
Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in neuroscience. In part, this is due to the difficulty of finding appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity; i.e., using the information contained in a stream of recorded signals, they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time; the behaviour of the future depends on the patterns in the neural activity of the past. However, these past-future relationships remain either hidden to, or not taken into account by, current measures of consciousness. These past-future correlations are likely to contain more information and thus can reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the "epsilon-machines" framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity will behave in the future, and this is summarised in a single number called "statistical complexity". If a lot of past neural activity is required to predict future behaviour, can we then say that the brain was more "awake" at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain; however, this has never been measured. To test this, we used neural recordings measured from the brains of fruit flies and inferred their epsilon-machines.
We found that the nature of the past and future correlations of neural activity in the brain drastically changes depending on whether the fly was awake or anaesthetised. Not only does our study find that wakeful and anaesthetised fly brains are distinguished by how statistically complex they are, but the amount of correlation in wakeful fly brains was much more sensitive to whether the neural recordings were read forwards vs. backwards in time, compared to anaesthetised brains. In other words, wakeful fly brains were both more complex and more time-asymmetric than anaesthetised ones.
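To make the approach concrete, here is a crude, illustrative estimate of statistical complexity from a binary symbol sequence (real epsilon-machine reconstruction is considerably more careful about merging histories into causal states): histories with the same predictive distribution are grouped into one state, and the complexity is the entropy of the state distribution.

```python
from collections import Counter, defaultdict
from math import log2
import numpy as np

def statistical_complexity(seq, k=3, tol=0.1):
    """Crude epsilon-machine-style estimate: merge length-k histories whose
    next-symbol predictive distributions agree within tol, then return the
    Shannon entropy of the resulting state distribution."""
    hist_next = defaultdict(Counter)
    for i in range(len(seq) - k):
        hist_next[tuple(seq[i:i + k])][seq[i + k]] += 1

    states = []  # each entry: [predictive P(next == 1), total history count]
    for counts in hist_next.values():
        n = sum(counts.values())
        p1 = counts[1] / n
        for s in states:
            if abs(s[0] - p1) < tol:   # same predictive state
                s[1] += n
                break
        else:
            states.append([p1, n])

    total = sum(n for _, n in states)
    return -sum((n / total) * log2(n / total) for _, n in states)

rng = np.random.default_rng(4)
iid = rng.integers(0, 2, size=5000).tolist()   # structureless noise
periodic = ([0, 1, 1] * 2000)[:5000]           # strongly structured signal
print(round(statistical_complexity(iid), 2))       # low: few predictive states
print(round(statistical_complexity(periodic), 2))  # higher: multiple states
```

An i.i.d. sequence needs essentially one predictive state (nothing in the past helps), while a structured sequence needs several; reading the sequence reversed and comparing the two complexities gives the time-asymmetry measure discussed above.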
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn, and exploit these regularities automatically, in an unsupervised manner, i.e., without any direct instruction or explicit reward. For example, we effortlessly estimate the average height of people in a room, or the boundaries between words in a language. These regularities and this prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making. Despite the ubiquity of passive learning from structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is used to infer regularities in temporally extended tasks. In this talk, I will focus specifically on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences impact working memory and decision-making behaviours.
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal apriorism, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems, achieving novel predictive precision and, in some cases, significantly outperforming state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural (feature visualization) as well as functional (feature verbalization) analyses, running `virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite no category-level supervision (the models are optimized from scratch solely to predict brain responses), the units in the optimized networks act as detectors for semantic concepts like `faces' or `words', thereby providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism?
To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting `deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived as obscure models with limited explanatory power, rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
Cognition is Rhythm
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (>35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
Children's relational noun generalization strategies
A common result is that comparison settings (i.e., several stimuli introduced simultaneously) favor conceptualization and generalization. However, little is still known about the solving strategies children use to compare and generalize novel words. Understanding the temporal dynamics of children’s solving strategies may help assess which processes underlie generalization. We tested children in noun and relational-noun generalization tasks and collected eye-tracking data. To analyze and interpret the data, we followed predictions made by existing models of analogical reasoning and generalization. The data reveal clear patterns of exploration in which participants compare learning items before searching for a solution. Analyses of the beginning of trials show that early comparisons favor generalization and that errors may be caused by a lack of early comparison. Children then pursue their search in different ways according to the task. In this presentation I will describe the generalization strategies revealed by eye tracking, compare the strategies from the two tasks, and confront them with existing models.
Gap Junction Coupling between Photoreceptors
Simply put, the goal of my research is to describe the neuronal circuitry of the retina. The organization of the mammalian retina is certainly complex, but it is not chaotic. Although there are many cell types, most adhere to a relatively constant morphology and are distributed in non-random mosaics. Furthermore, each cell type ramifies at a characteristic depth in the retina and makes a stereotyped set of synaptic connections. In other words, these neurons form a series of local circuits across the retina. The next step is to identify the simplest and commonest of these repeating neural circuits: they are the building blocks of retinal function. If we think of it in this way, the retina is a fabulous model for the rest of the CNS. We are interested in identifying specific circuits and cell types that support the different functions of the retina. For example, there appear to be specific pathways for rod- and cone-mediated vision. Rods are used under low light conditions, and rod circuitry is specialized for high sensitivity when photons are scarce (starlight, when you are out camping). The hallmark of the rod-mediated system is monochromatic vision. In contrast, the cone circuits are specialized for high acuity and color vision under relatively bright or daylight conditions. Individual neurons may be filled with fluorescent dyes under visual control. This is achieved by impaling the cell with a glass microelectrode using a 3D micromanipulator. We are also interested in the diffusion of dye through coupled neuronal networks in the retina. The dye-filled cells are also combined with antibody labeling to reveal neuronal connections and circuits. This triple-labeled material may be viewed and reconstructed in three dimensions by multi-channel confocal microscopy. We have our own confocal microscope facility in the department, and timeslots are available to students in my lab.
Statistical Summary Representations in Identity Learning: Exemplar-Independent Incidental Recognition
The literature suggests that ensemble coding, the ability to represent the gist of sets, may be an underlying mechanism for becoming familiar with newly encountered faces. This phenomenon was investigated by introducing a new training paradigm that involves incidental learning of target identities interspersed among distractors. The effectiveness of this training paradigm was explored in Study 1, which revealed that observers who learned the unfamiliar faces incidentally performed just as well as observers who were instructed to learn the faces, and that the intervening distractors did not disrupt familiarization. Using the same training paradigm, Study 2 investigated ensemble coding as an underlying mechanism for face familiarization by measuring familiarity with the targets at different time points, using average images created from either seen or unseen encounters with the target. The results revealed that observers whose familiarity was tested using seen averages outperformed observers who were tested using unseen averages; however, this discrepancy diminished over time. In other words, successful recognition of the target faces became less reliant on previously encountered exemplars over time, suggesting an exemplar-independent representation that is likely achieved through ensemble coding. Taken together, these results provide direct evidence for ensemble coding as a viable underlying mechanism for face familiarization and show that faces interspersed among distractors can be learned incidentally.
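The averaging logic behind ensemble coding can be sketched with toy vectors. This is a minimal illustration, assuming a hypothetical vector-space representation of faces; the numbers bear no relation to the study's stimuli or analyses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: each encounter with an identity is the identity's latent
# "true" face vector plus encounter-specific noise. Averaging the seen
# exemplars yields an ensemble representation that generalizes to
# encounters that were never seen.
dim, n_seen = 50, 8
target = rng.normal(size=dim)                         # latent identity
seen = target + 0.8 * rng.normal(size=(n_seen, dim))  # seen encounters
avg = seen.mean(axis=0)                               # ensemble (average) representation

unseen = target + 0.8 * rng.normal(size=dim)          # novel encounter
distractor = rng.normal(size=dim)                     # different identity

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The average matches an *unseen* exemplar of the identity better than a
# distractor, i.e. recognition need not rely on any single stored exemplar.
print(cos(avg, unseen) > cos(avg, distractor))
```

Averaging cancels encounter-specific noise (its standard deviation shrinks with the square root of the number of encounters), which is one way to cash out the abstract's "exemplar-independent representation".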
Investigating visual recognition and the temporal lobes using electrophysiology and fast periodic visual stimulation
The ventral visual pathway extends from the occipital to the anterior temporal regions and is specialized in giving meaning to objects and people that are perceived through vision. Numerous functional magnetic resonance imaging studies have focused on the cerebral basis of visual recognition. However, this technique is susceptible to magnetic artefacts in ventral anterior temporal regions, which has led to an underestimation of the role of these regions within the ventral visual stream, especially with respect to face recognition and semantic representations. Moreover, there is an increasing need for implicit methods of assessing these functions, as explicit tasks lack specificity. In this talk, I will present three studies using fast periodic visual stimulation (FPVS) in combination with scalp and/or intracerebral EEG to overcome these limitations and provide a high signal-to-noise ratio (SNR) in temporal regions. I will show that, beyond face recognition, FPVS can be extended to investigate semantic representations using a face-name association paradigm and a semantic categorisation paradigm with written words. These results shed new light on the role of temporal regions and demonstrate the high potential of the FPVS approach as a powerful electrophysiological tool to assess various cognitive functions in neurotypical and clinical populations.
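The frequency-tagging logic that gives FPVS its high SNR can be sketched on a synthetic signal. This is an illustrative sketch only: the sampling rate, stimulation frequency, and SNR definition (peak amplitude over the mean of neighbouring bins) are assumptions, not the parameters of the studies described above:

```python
import numpy as np

# Toy illustration of frequency tagging: a periodic response at the tagged
# frequency appears as a narrow spectral peak that can be quantified as an
# SNR against neighbouring frequency bins, even in heavy broadband noise.
fs, dur = 250.0, 40.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f_tag = 1.2                                # tagged (periodic) response frequency
rng = np.random.default_rng(1)
signal = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(size=t.size)

amp = np.abs(np.fft.rfft(signal)) / t.size # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

k = int(np.argmin(np.abs(freqs - f_tag)))  # bin at the tagged frequency
neigh = np.r_[amp[k - 12:k - 2], amp[k + 3:k + 13]]  # surrounding bins
snr = amp[k] / neigh.mean()
print(f"SNR at {f_tag} Hz: {snr:.1f}")     # well above 1 for a real response
```

Because the response is locked to a known frequency, all the noise power spread across the other bins is excluded from the measure, which is what makes the approach implicit and high-SNR.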
Co-tuned, balanced excitation and inhibition in olfactory memory networks
Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible, spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity, and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the "paradoxical effect" signature (Tsodyks et al., 1997): inhibition of inhibitory "memory" neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion, and pattern separation.
In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
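The "paradoxical effect" signature used above to identify inhibition-stabilized subnetworks can be illustrated with a minimal two-population rate model. This is a sketch with illustrative parameters, not the spiking Dp model described in the abstract:

```python
# Threshold-linear E-I rate model in the inhibition-stabilized regime
# (W_EE > 1, so excitation alone would run away and recurrent inhibition
# is required for stability; parameters are illustrative only).
W_EE, W_EI, W_IE, W_II = 2.0, 2.5, 2.0, 1.0

def steady_state(g_E, g_I, dt=0.01, steps=20000):
    """Integrate dr/dt = -r + [W r + g]_+ to its fixed point (Euler)."""
    E = I = 0.0
    for _ in range(steps):
        E += dt * (-E + max(0.0, W_EE * E - W_EI * I + g_E))
        I += dt * (-I + max(0.0, W_IE * E - W_II * I + g_I))
    return E, I

E0, I0 = steady_state(g_E=1.0, g_I=0.0)    # baseline
E1, I1 = steady_state(g_E=1.0, g_I=-0.3)   # inhibit the I population

# Paradoxical effect: extra inhibition to the inhibitory population
# *raises* its steady-state rate, because the excitatory population it
# stabilizes is disinhibited and drives it harder.
print(f"I rate: {I0:.3f} -> {I1:.3f}")     # I1 > I0
```

Solving the linear fixed-point equations for these parameters gives I = (2 g_E - g_I) / 3, so the inhibitory rate indeed rises when g_I is made negative, which is the signature referenced to Tsodyks et al. (1997).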
Theory-driven probabilistic modeling of language use: a case study on quantifiers, logic and typicality
Theoretical linguistics postulates abstract structures that successfully explain key aspects of language. However, the precise relation between abstract theoretical ideas and empirical data from language use is not always apparent. Here, we propose to empirically test abstract semantic theories through the lens of probabilistic pragmatic modelling. We consider the historically important case of quantity words (e.g., `some', `all'). Data from a large-scale production study seem to suggest that quantity words are understood via prototypes. But based on statistical and empirical model comparison, we show that a probabilistic pragmatic model that embeds a strict truth-conditional notion of meaning explains the data just as well as a model that encodes prototypes into the meaning of quantity words.
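A minimal sketch of the kind of probabilistic pragmatic model at issue, assuming a standard Rational Speech Act (RSA) setup in which quantity words keep strict truth-conditional meanings and pragmatic enrichment arises from recursive reasoning (toy state space and uniform prior, not the production data of the study):

```python
import numpy as np

# Three-object toy domain: states = how many objects have the property.
states = [0, 1, 2, 3]
utterances = ["none", "some", "all"]

def meaning(u, s):                         # classical truth conditions
    return {"none": s == 0, "some": s >= 1, "all": s == 3}[u]

# Literal listener L0: uniform prior restricted to states where u is true.
L0 = np.array([[meaning(u, s) for s in states] for u in utterances], float)
L0 /= L0.sum(axis=1, keepdims=True)

# Pragmatic speaker S1: chooses utterances in proportion to their
# informativity for the literal listener (rationality parameter alpha = 1).
S1 = L0 / L0.sum(axis=0, keepdims=True)

# Pragmatic listener L1: Bayesian inversion of the speaker.
L1 = S1 / S1.sum(axis=1, keepdims=True)

i, j = utterances.index("some"), states.index(3)
# Scalar implicature falls out: "some" shifts belief away from the
# all-state without any prototype encoded in the lexical meaning.
print(f"P(all-state | 'some'): literal {L0[i, j]:.2f}, pragmatic {L1[i, j]:.2f}")
```

Running this gives a literal probability of 1/3 for the all-state given "some" but a pragmatic probability of 1/9, showing how a strict truth-conditional semantics can still produce the graded, typicality-like judgments that prototype accounts are invoked to explain.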
Synaesthesia as a Model System for Understanding Variation in the Human Mind and Brain
During this talk, I will seek to reposition synaesthesia as a model system for understanding variation in the construction of the human mind and brain. People with synaesthesia inhabit a remarkable mental world in which numbers can be coloured, words can have tastes, and music is a visual spectacle. Synaesthesia has now been documented for over two hundred years, but key questions remain unanswered about why it exists and what such conditions might mean for theories of the human mind. I will argue that we need to rethink synaesthesia not just as representing exceptional experiences, but as one outcome of an unusual neurodevelopmental cascade from genes to brain to cognition. Rather than synaesthesia being a kind of 'dangling qualia' (atypical experiences attached to a typical mind/brain), it should be thought of as unusual experiences that accompany an unusual mind/brain. Specifically, differences in the brains of synaesthetes support a distinctive way of thinking (enhanced memory, imagery, etc.) and may also predispose towards particular clinical vulnerabilities. It is this neurodiverse phenotype that is an important object of study in its own right and may explain any adaptive value of having synaesthesia.
Blurring the boundaries between neuroscience and organismal physiology
Work in my laboratory is based on two assumptions: that we do not yet know how all physiological functions are regulated, and that mouse genetics, by allowing the identification of novel inter-organ communications, is the most efficient way to identify novel regulation of physiological functions. We test these two contentions through the study of bone, the organ my lab has studied since its inception. For precise cell-biological and clinical reasons that will be presented during the seminar, we hypothesized that bone should be a regulator of energy metabolism and reproduction, and we identified a bone-derived hormone, termed osteocalcin, that is responsible for these regulatory events. The study of this hormone revealed that, in addition to its predicted functions, it also regulates brain size and hippocampus development, prevents anxiety and depression, and favors spatial learning and memory by signaling through a specific receptor we characterized. As will be presented, we elucidated some of the molecular events accounting for the influence of osteocalcin on the brain and showed that maternal osteocalcin is the pool of this hormone that affects brain development. Subsequently, looking at all the physiological functions regulated by osteocalcin (memory, the ability to exercise, glucose metabolism, the regulation of testosterone biosynthesis), we realized that they are all needed, or regulated, in situations of danger. In other words, this suggested that osteocalcin is a hormone needed to sense and overcome acute danger. Consistent with this hypothesis, we next demonstrated that bone, via osteocalcin, is needed to mount an acute stress response, through molecular and cellular mechanisms that will be presented during the seminar.
Overall, an evolutionary appraisal of bone biology, this body of work, and experiments ongoing in the lab concur to suggest 1) that the appearance of bone during evolution changed how physiological functions as diverse as memory, the acute stress response, exercise, and glucose metabolism are regulated, and 2) that bone, with osteocalcin as its molecular vector, is an organ needed to sense and respond to danger.
A Rare Visuospatial Disorder
Cases with visuospatial abnormalities provide opportunities for understanding the underlying cognitive mechanisms. Three cases of visual mirror-reversal have been reported: AH (McCloskey, 2009), TM (McCloskey, Valtonen, & Sherman, 2006) and PR (Pflugshaupt et al., 2007). This research reports a fourth case, BS -- with focal occipital cortical dysgenesis -- who displays highly unusual visuospatial abnormalities. They initially produced mirror-reversal errors similar to those of AH, who -- like the patient in question -- showed a selective developmental deficit. Extensive examination of BS revealed phenomena such as: mirror-reversal errors (sometimes affecting only parts of the visual fields) in both horizontal and vertical planes; subjective representation of visual objects and words in distinct left and right visual fields; subjective duplication of objects of visual attention (not due to diplopia); uncertainty regarding the canonical upright orientation of everyday objects; mirror reversals during saccadic eye movements on oculomotor tasks; and failure to integrate visual with other sensory inputs (e.g., they feel themself moving backwards when visual information shows they are moving forward). Fewer errors are produced when certain visual variables are manipulated. These and other findings have led the researchers to conclude that BS draws upon a subjective representation of visual space that is structured phenomenally much as it is anatomically in early visual cortex (i.e., rotated through 180 degrees, split into left and right fields, etc.). Despite this, BS functions remarkably well in everyday life, apparently due to extensive compensatory mechanisms deployed at higher (executive) processing levels beyond the visual modality.
The active modulation of sound and vibration perception
The currently dominant view of perception is that information travels from the environment to the sensory system, and then to the nervous system, which processes it to generate a percept and behaviour. Ongoing behaviour is thought to occur largely through simple iterations of this process. However, this linear view, in which information flows only in one direction and the properties of the environment and the sensory system remain static and unaffected by behaviour, is slowly fading. Many of us are beginning to appreciate that perception is largely active, i.e. that information flows back and forth between the three systems, modulating their respective properties. In other words, in the real world, the loop between environment and sensorimotor system is pretty much always closed. I study this loop; in particular, I study how the reverse arm of the loop affects sound and vibration perception. I will present two examples of motor modulation of perception at two very different temporal and spatial scales. First, in crickets, I will present data on how high-speed molecular motor activity enhances hearing via the well-studied phenomenon of active amplification. Second, in spiders, I will present data on how body posture, a slow macroscopic feature that can barely be called ‘active’, can nonetheless modulate vibration perception. I hope these results will motivate a conversation about whether ‘active’ perception is an optional feature observed in some sensory systems, or something that is ultimately necessitated by both evolution and physics.
Working Memory 2.0
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (> 35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
Semantic Embodiment: Decoding Action Words through Topographic Neuronal Representation with Brain-Constrained Network
Bernstein Conference 2024