Knowledge
Decoding stress vulnerability
Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can lead to adverse consequences for physical and mental health. Social stress, specifically, is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying, and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to strong stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. In this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.
From Spiking Predictive Coding to Learning Abstract Object Representation
In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
Structural & Functional Neuroplasticity in Children with Hemiplegia
About 30% of children with cerebral palsy have congenital hemiplegia, resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual coordination. Congenital hemiplegia has a profound effect on each child's life and is thus of great importance to public health. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present my team's recent research on neuroplasticity in children with congenital hemiplegia using a multimodal neuroimaging approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data on improvements in upper-extremity motor and sensory function resulting from rehabilitation with a robotic system that involves active participation of the child in a video-game setup. Our research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to parse a complex and recursive environment with ease and proficiency. This challenging feat depends on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out on the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data supporting these hypotheses and present a principled way to explore the multidimensionality of object processing.
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance that explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Gut/Body interactions in health and disease
The adult intestine is a major barrier epithelium and coordinator of multi-organ functions. Stem cells constantly repair the intestinal epithelium by adjusting their proliferation and differentiation to tissue-intrinsic as well as micro- and macro-environmental signals. How these signals integrate to control intestinal and whole-body homeostasis is largely unknown. Addressing this gap in knowledge is central to an improved understanding of intestinal pathophysiology and its systemic consequences. Combining Drosophila and mammalian model systems, my laboratory has discovered fundamental mechanisms driving intestinal regeneration and tumourigenesis and outlined complex inter-organ signaling regulating health and disease. During my talk, I will discuss inter-related areas of research from my lab, including: (1) interactions between the intestine and its microenvironment that influence intestinal regeneration and tumourigenesis, and (2) long-range signals from the intestine that impact whole-body physiology in health and disease.
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).
Title: SwiFT: Swin 4D fMRI Transformer
Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.
Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha's lab at Seoul National University.
Paper link: https://arxiv.org/abs/2307.05916
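The window-attention idea at the core of SwiFT can be illustrated with a toy sketch (not the authors' code; the shapes and window sizes below are made-up values): a 4D fMRI tensor is tiled into local 4D windows, and self-attention is then computed within each window rather than over the full volume, which is what keeps memory and computation manageable.

```python
# Illustrative sketch: partitioning a 4D fMRI tensor of shape
# (X, Y, Z, T) into local 4D windows, the unit over which
# SwiFT-style window attention operates. Toy sizes, not the
# dimensions used in the paper.

def partition_4d(shape, window):
    """Return the list of window origins tiling a 4D volume.

    shape  -- (X, Y, Z, T) dimensions of the fMRI tensor
    window -- (wx, wy, wz, wt) window size along each axis
    Assumes each dimension is divisible by its window size.
    """
    X, Y, Z, T = shape
    wx, wy, wz, wt = window
    origins = []
    for x in range(0, X, wx):
        for y in range(0, Y, wy):
            for z in range(0, Z, wz):
                for t in range(0, T, wt):
                    origins.append((x, y, z, t))
    return origins

# A toy 16x16x16x8 volume with 4x4x4x2 windows yields
# (16/4)^3 * (8/2) = 256 windows; attention cost scales with
# window count times (window volume)^2 instead of (full volume)^2.
windows = partition_4d((16, 16, 16, 8), (4, 4, 4, 2))
print(len(windows))  # 256
```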
Bernstein Student Workshop Series
The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.
Walk the talk: concrete actions to promote diversity in neuroscience in Latin America
Building upon the webinar "What are the main barriers to succeed in brain sciences in Latin America?" (February 2021) and the paper "Addressing the opportunity gap in the Latin American neuroscience community" (Silva, A., Iyer, K., Cirulli, F. et al. Nat Neurosci August 2022), this ALBA-IBRO Webinar is the next chapter in our journey towards fostering inclusivity and diversity in neuroscience in Latin America. The webinar is designed to go beyond theoretical discussions and provide tangible solutions. We will showcase 3-4 best practice case studies, shining a spotlight on real-life actions and campaigns implemented at the institutional level, be it within government bodies, universities, or other organisations. Our goal is to empower neuroscientists across Latin America by equipping them with practical knowledge they can apply in their own institutions and countries.
Learning through the eyes and ears of a child
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks; 2) based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity; 3) based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child's first-person experience.
The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures
What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
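The key property the abstract mentions, that gating lets otherwise linear pathways compute nonlinear functions, can be shown with a hypothetical toy sketch (not the paper's formulation): gates switch linear pathways on and off depending on the input, so the overall input-output map is piecewise linear and hence nonlinear.

```python
# Toy sketch of a gated linear network: each pathway applies a
# linear map to a scalar input, and binary gates select which
# pathways are active. The composite function is piecewise linear.

def gated_linear(x, pathways):
    """pathways: list of (gate_fn, weight) pairs for a scalar input x.
    Only pathways whose gate fires contribute their linear output."""
    return sum(w * x for gate_fn, w in pathways if gate_fn(x))

# Two pathways gated on the sign of the input implement |x|,
# a nonlinear function built entirely from linear pieces.
abs_net = [(lambda x: x >= 0, 1.0), (lambda x: x < 0, -1.0)]
print(gated_linear(3.0, abs_net))   # 3.0
print(gated_linear(-2.0, abs_net))  # 2.0
```

Because each gating configuration yields a purely linear network, learning within a configuration can be analyzed exactly, which is the lever behind the reduction described in the talk.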
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit this structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior: analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach
The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1), and rapidly recovered (2) according to the requirements of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window on the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).
Integration of 3D human stem cell models derived from post-mortem tissue and statistical genomics to guide schizophrenia therapeutic development
Schizophrenia is a neuropsychiatric disorder characterized by positive symptoms (such as hallucinations and delusions), negative symptoms (such as avolition and withdrawal) and cognitive dysfunction. Schizophrenia is highly heritable, and genetic studies are playing a pivotal role in identifying potential biomarkers and causal disease mechanisms, with the hope of informing new treatments. Genome-wide association studies (GWAS) have identified nearly 270 loci with a high statistical association with schizophrenia risk; however, each locus confers only a small increase in risk, making it difficult to translate these findings into an understanding of disease biology that can lead to treatments. Induced pluripotent stem cell (iPSC) models are a tractable system in which to translate genetic findings and interrogate mechanisms of pathogenesis. Mounting research with patient-derived iPSCs has implicated several altered neurodevelopmental pathways in schizophrenia (SCZ), such as neural progenitor cell (NPC) proliferation and imbalanced differentiation of excitatory and inhibitory cortical neurons. However, it is unclear what exactly these iPSC models recapitulate, how potential perturbations of early brain development translate into illness in adults, and how iPSC models that represent fetal stages can be utilized to further drug-development efforts to treat adult illness. I will present the largest transcriptome analysis of post-mortem caudate nucleus in schizophrenia, in which we discovered that decreased presynaptic DRD2 autoregulation is the causal dopamine risk factor for schizophrenia (Benjamin et al., Nature Neuroscience 2022, https://doi.org/10.1038/s41593-022-01182-7). We developed stem cell models from a subset of the postmortem cohort to better understand the molecular underpinnings of human psychiatric disorders (Sawada et al., Stem Cell Research 2020). We established a method for the differentiation of iPS cells into ventral forebrain organoids and performed single-cell RNA-seq and cellular phenotyping.
To our knowledge, this is the first study to evaluate iPSC models of SCZ from the same individuals as postmortem tissue. Our study establishes that striatal neurons in patients with SCZ carry abnormalities that originated during early brain development. Differentiation of inhibitory neurons is accelerated whereas excitatory neuronal development is delayed, implicating an excitation-inhibition (E-I) imbalance during early brain development in SCZ. We found a significant overlap between genes upregulated in the inhibitory neurons of SCZ organoids and genes upregulated in postmortem caudate tissue from patients with SCZ compared with control individuals, including the donors of our iPS cell cohort. Altogether, we demonstrate that ventral forebrain organoids derived from postmortem tissue of individuals with schizophrenia recapitulate perturbed striatal gene expression dynamics of the donors' brains (Sawada et al., bioRxiv 2022, https://doi.org/10.1101/2022.05.26.493589).
Central place foraging: how insects anchor spatial information
Many insect species maintain a nest around which their foraging behaviour is centered, and can use path integration to maintain an accurate estimate of their distance and direction (a vector) to their nest. Some species, such as bees and ants, can also store the vector information for multiple salient locations in the world, such as food sources, in a common coordinate system. They can also use remembered views of the terrain around salient locations or along travelled routes to guide return. Recent modelling of these abilities shows convergence on a small set of algorithms and assumptions that appear sufficient to account for a wide range of behavioural data, and which can be mapped to specific insect brain circuits. Notably, this does not include any significant topological knowledge: the insect does not need to recover the information (implicit in their vector memory) about the relationships between salient places; nor to maintain any connectedness or ordering information between view memories; nor to form any associations between views and vectors. However, there remains some experimental evidence not fully explained by these algorithms that may point towards the existence of a more complex or integrated mental map in insects.
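The path-integration component described above can be sketched in a few lines (an illustrative toy model with made-up values, not a model from the talk): the insect's home vector is simply the running sum of its step displacements, and its negation gives the direction and distance back to the nest.

```python
import math

# Toy path-integration sketch: accumulate each travelled segment's
# displacement into a running vector. Negating the sum yields the
# homing vector back to the nest.

def integrate_path(steps):
    """steps: list of (heading_radians, distance) travelled segments.
    Returns the (dx, dy) vector from the nest to the agent."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)  # eastward component
        y += dist * math.sin(heading)  # northward component
    return x, y

# Walk 3 units east, then 4 units north; the nest now lies
# 5 units away along the negated accumulated vector.
x, y = integrate_path([(0.0, 3.0), (math.pi / 2, 4.0)])
home_dist = math.hypot(x, y)
print(round(home_dist, 2))  # 5.0
```

Note that this accumulator carries no topological knowledge at all, which is exactly the point made in the abstract: a single stored vector per salient place suffices for much of the observed behaviour.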
Cognitive supports for analogical reasoning in rational number understanding
In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children's experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism that allows children to learn novel, abstract concepts from only limited input. However, retrieving the proper analogy requires cognitive supports. In this talk, I propose and examine number lines as a mathematical schema of the number system that facilitates both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention involving spatial alignment between integers and fractions on number lines produced broad and durable gains in understanding of fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitudes and can be placed on number lines – facilitates explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas. These studies shed light on helpful practices for STEM education curricula and instruction.
Bridging clinical and cognitive neuroscience to investigate semantics, above and beyond language
We will explore how neuropsychology can be leveraged to directly test cognitive neuroscience theories using the case of frontotemporal dementias affecting the language network. Specifically, we will focus on pathological, neuroimaging, and cognitive data from primary progressive aphasia. We will see how they can help us investigate the reading network, semantic knowledge organisation, and grammatical categories processing. Time permitting, the end of the talk will cover the temporal dynamics of semantic dimensions recovery and the role played by the task.
Meta-learning functional plasticity rules in neural networks
Synaptic plasticity is known to be a key player in the brain's life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data and/or prior knowledge about the network's computation. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
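The inner/outer-loop structure of such an approach can be conveyed with a toy example (illustrative only, not the talk's method; the rule parameterization and grid search below are deliberate simplifications of the gradient-based or simulation-based machinery an actual study would use). An inner loop applies a parameterized local rule to produce a weight trajectory; an outer loop searches the rule parameters so that the trajectory matches data generated by a known Hebbian rule.

```python
# Toy meta-learning sketch: inner loop applies the parameterized
# local rule dw = eta * (a*pre*post + b*post); outer loop searches
# (a, b) to match a trajectory generated by a Hebbian rule (a=1, b=0).

def run_rule(a, b, data, w=0.0):
    """Apply the plasticity rule over (pre, post) activity pairs
    and return the resulting weight trajectory."""
    traj = []
    for pre, post in data:
        w += 0.1 * (a * pre * post + b * post)  # local update
        traj.append(w)
    return traj

data = [(1.0, 0.5), (0.2, 1.0), (0.8, 0.3)]
target = run_rule(1.0, 0.0, data)  # "recorded" Hebbian trajectory

# Outer loop: grid search over candidate rule parameters,
# minimizing squared error between trajectories.
best = min(
    ((a / 4, b / 4) for a in range(-8, 9) for b in range(-8, 9)),
    key=lambda ab: sum((x - y) ** 2
                       for x, y in zip(run_rule(*ab, data), target)),
)
print(best)  # (1.0, 0.0) -- the generating rule is recovered
```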
Do large language models solve verbal analogies like children do?
Analogical reasoning – learning about new things by relating them to previous knowledge – lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress as they move from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children’s and large language models’ (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) responses to verbal analogies extracted from an online adaptive learning environment, where >14,000 7-12-year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch-language verbal analogies.
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. 
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
NEW TREATMENTS FOR PAIN: Unmet needs and how to meet them
“Of pain you could wish only one thing: that it should stop. Nothing in the world was so bad as physical pain. In the face of pain there are no heroes.” – George Orwell, ‘1984’
Neuroscience has revealed the secrets of the brain and nervous system to an extent that was beyond the realm of imagination just 10-20 years ago, let alone in 1949 when Orwell wrote his prophetic novel. Understanding pain, however, presents a unique challenge to academia, industry and medicine, being both a measurable physiological process and deeply personal and subjective. Given the millions of people who suffer from pain every day, wishing only “that it should stop”, the need to find more effective treatments cannot be overstated.
‘New treatments for pain’ will bring together approximately 120 people from the commercial, academic, and not-for-profit sectors to share current knowledge, identify future directions, and enable collaboration, providing delegates with meaningful and practical ways to accelerate their own work on developing treatments for pain.
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
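To give a flavor of what the generated code computes (an illustrative pure-Python sketch with toy parameters, not actual Brian- or Brian2CUDA-generated code): each neuron's state is advanced independently at every time step, which is precisely the structure that maps well onto thousands of parallel GPU threads.

```python
# Sketch of the per-neuron state update a simulator like Brian
# generates and Brian2CUDA parallelizes: forward-Euler integration
# of leaky integrate-and-fire membrane potentials. Toy parameters.

def step(v, dt=1e-4, tau=1e-2, v_rest=-0.07, v_th=-0.05,
         v_reset=-0.07, I=0.025):
    """Advance every neuron's membrane potential by one time step.
    Each list element is updated independently, which is what makes
    the update embarrassingly parallel on a GPU."""
    spikes = []
    for i, vi in enumerate(v):
        vi += dt * (v_rest - vi + I) / tau   # leaky integration
        if vi >= v_th:                        # threshold crossing
            spikes.append(i)
            vi = v_reset                      # reset after spike
        v[i] = vi
    return v, spikes

v = [-0.07] * 3          # three identical neurons at rest
for _ in range(200):
    v, spikes = step(v)
    if spikes:
        break
print(spikes)  # [0, 1, 2] -- identical neurons cross threshold together
```

In an actual Brian script, the model is written once in Brian's high-level equation syntax, and (per the Brian2CUDA documentation) the GPU backend is selected by importing `brian2cuda` and setting the `cuda_standalone` device, with no changes to the model definition itself.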
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures, as well as their individual building blocks (e.g., events and attributes), can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
Glial and Neuronal Biology of the Aging Brain Symposium, Alana Down Syndrome Center and Aging Brain Initiative at Picower, MIT
The Aging Brain Initiative (ABI) is an interdisciplinary effort by MIT focusing on understanding neurodegeneration and discovery efforts to find hallmarks of aging, both in health and disease. The Alana Down Syndrome Center (ADSC) aims to deepen knowledge about Down syndrome and to improve health, autonomy and inclusion of people with this genetic condition. The ABI and the ADSC have joined forces for this year's symposium to highlight how aging-related changes to the brain overlap with neurological aspects of Down syndrome. Our hope is to encourage greater collaboration between the brain aging and Down syndrome research communities.
Is Theory of Mind Analogical? Evidence from the Analogical Theory of Mind cognitive model
Theory of mind, which consists of reasoning about the knowledge, belief, desire, and similar mental states of others, is a key component of social reasoning and social interaction. While it has been studied by cognitive scientists for decades, none of the prevailing theories of the processes that underlie theory of mind reasoning and development explain the breadth of experimental findings. I propose that this is because theory of mind is, like much of human reasoning, inherently analogical. In this talk, I will discuss several theory of mind findings from the psychology literature, the challenges they pose for our understanding of theory of mind, and bring in evidence from the Analogical Theory of Mind (AToM) cognitive model that demonstrates how these findings fit into an analogical understanding of theory of mind reasoning.
Chandelier cells shine a light on the emergence of GABAergic circuits in the cortex
GABAergic interneurons are chiefly responsible for controlling the activity of local circuits in the cortex. Chandelier cells (ChCs) are a type of GABAergic interneuron that control the output of hundreds of neighbouring pyramidal cells through axo-axonic synapses which target the axon initial segment (AIS). Despite their importance in modulating circuit activity, the development and function of axo-axonic synapses remain poorly understood. We have investigated the emergence and plasticity of axo-axonic synapses in layer 2/3 of the somatosensory cortex (S1) and found that ChCs follow what appear to be homeostatic rules when forming synapses with pyramidal neurons. We are currently implementing in vivo techniques to image the process of axo-axonic synapse formation during development and uncover the dynamics of synaptogenesis and pruning at the AIS. In addition, we are using an all-optical approach to both activate and measure the activity of chandelier cells and their postsynaptic partners in the primary visual cortex (V1) and somatosensory cortex (S1) in mice, also during development. We aim to provide a structural and functional description of the emergence and plasticity of a GABAergic synapse type in the cortex.
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, and also primate retinal ganglion responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and is a basis for modelling the vision of humans and other animals.
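The band nonlinearity described above (a threshold set by the contrast sensitivity function, with a fixed dynamic range above it) can be sketched as follows. The log-linear rise and the dynamic-range multiple of 32 are illustrative assumptions, not the model's fitted values:

```python
import math

def band_response(contrast, threshold, dynamic_range=32.0):
    """Response of one spatial-frequency band: silent below its
    CSF-derived threshold, rising over a fixed dynamic range
    (a set multiple of the threshold), saturating above it."""
    ceiling = threshold * dynamic_range
    if contrast <= threshold:
        return 0.0
    if contrast >= ceiling:
        return 1.0
    # log-linear rise between threshold and saturation
    return math.log(contrast / threshold) / math.log(dynamic_range)

# A higher-frequency band has a higher threshold, so the same physical
# contrast can saturate one band while barely driving another:
print(band_response(0.08, threshold=0.002))  # low-frequency band: saturated
print(band_response(0.08, threshold=0.05))   # high-frequency band: weak response
```

Because each band clips independently, the relative activity across bands (after the equal-power reweighting for natural images) rather than absolute contrast determines appearance, which is what lets a single mechanism produce context-dependent brightness and colour shifts.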
Exploration-Based Approach for Computationally Supported Design-by-Analogy
Engineering designers practice design-by-analogy (DbA) during concept generation to retrieve knowledge from external sources or memory as inspiration to solve design problems. DbA is a tool for innovation that involves retrieving analogies from a source domain and transferring the knowledge to a target domain. While DbA produces innovative results, designers often come up with analogies by themselves or through serendipitous, random encounters. Computational support systems for searching analogies have been developed to facilitate DbA in systematic design practice. However, many systems have focused on a query-based approach, in which a designer inputs a keyword or a query function and is returned a set of algorithmically determined stimuli. In this presentation, a new analogical retrieval process that leverages a visual interaction technique is introduced. It enables designers to explore a space of analogies, rather than be constrained by what’s retrieved by a query-based algorithm. With an exploration-based DbA tool, designers have the potential to uncover more useful and unexpected inspiration for innovative design solutions.
A Game Theoretical Framework for Quantifying Causes in Neural Networks
Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node. A causal role is then assigned to the first node if this intervention led to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework. Namely, a node might causally influence another node, but how much and through which mechanistic interactions? Therefore, establishing a causal relationship, however reliable, does not provide the proper causal understanding of the system, because there often exists a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). Then, I present our work in which we employed MSA on an Echo State Network (ESN), quantified how much its nodes were influencing each other, and compared these measures with the underlying synaptic strength. We found that: 1. Even though the network itself was sparse, every node could causally influence other nodes. In this case, a mere elucidation of causal relationships did not provide any useful information. 2. Additionally, the full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via other intermediate nodes. Our results show that just elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions. Moreover, quantifying causal interactions requires a systematic and extensive manipulation framework.
The framework put forward here benefits from employing neural network models, and in turn, provides explainability for them.
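The Shapley-value logic at the heart of MSA can be sketched in a few lines. The toy "performance" function and two-node network below are invented for illustration; practical MSA applications estimate the same quantity by sampling perturbation orderings rather than enumerating them all:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings (feasible only for small sets)."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # marginal contribution of p given who is already intact
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(orderings) for p, total in phi.items()}

def perf(nodes):
    """Toy network performance over a set of intact nodes: node A
    contributes 3, node B contributes 2, and together they add a
    synergy of 1 (an indirect, via-intermediate effect)."""
    v = 0.0
    if "A" in nodes:
        v += 3.0
    if "B" in nodes:
        v += 2.0
    if {"A", "B"} <= nodes:
        v += 1.0
    return v

print(shapley_values(["A", "B"], perf))  # -> {'A': 3.5, 'B': 2.5}
```

Note that the synergy term is split evenly between the two nodes; this graded decomposition of causal influence is exactly what a single lesion-and-observe experiment cannot provide.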
Where do problem spaces come from? On metaphors and representational change
The challenges of problem solving do not exclusively lie in how to perform heuristic search, but begin with how we understand a given task: how we cognitively represent the task domain and its components can determine how quickly someone is able to progress towards a solution, whether advanced strategies can be discovered, or even whether a solution is found at all. While this challenge of constructing and changing representations was acknowledged early on in problem solving research, it has for the most part been sidestepped by focussing on simple, well-defined problems whose representation is almost fully determined by the task instructions. Thus, the established theory of problem solving as heuristic search in problem spaces has little to say on this. In this talk, I will present a study designed to explore this issue, in which finding and refining an adequate problem representation is the main challenge. This exploratory case study investigated how pairs of participants acquaint themselves with a complex spatial transformation task in the domain of iterated mental paper folding over the course of several days. Participants have to understand the geometry of edges that emerges when repeatedly mentally folding a sheet of paper in alternating directions, without the use of external aids. Faced with the difficulty of handling increasingly complex folds in light of limited cognitive capacity, participants are forced to look for ways in which to represent folds more efficiently. In a qualitative analysis of video recordings of the participants' behaviour, the development of their conceptualisation of the task domain was traced over the course of the study, focussing especially on their use of gesture and the spontaneous occurrence and use of metaphors in the construction of new representations.
Based on these observations, I will conclude the talk with several theoretical speculations regarding the roles of metaphor and cognitive capacity in representational change.
On the contributions of retinal direction selectivity to cortical motion processing in mice
Cells preferentially responding to visual motion in a particular direction are said to be direction-selective, and these were first identified in the primary visual cortex. Since then, direction-selective responses have been observed in the retina of several species, including mice, indicating motion analysis begins at the earliest stage of the visual hierarchy. Yet little is known about how retinal direction selectivity contributes to motion processing in the visual cortex. In this talk, I will present our experimental efforts to narrow this gap in our knowledge. To this end, we used genetic approaches to disrupt direction selectivity in the retina and mapped neuronal responses to visual motion in the visual cortex of mice using intrinsic signal optical imaging and two-photon calcium imaging. In essence, our work demonstrates that direction selectivity computed at the level of the retina causally serves to establish specialized motion responses in distinct areas of the mouse visual cortex. This finding thus compels us to revisit our notions of how the brain builds complex visual representations and underscores the importance of the processing performed in the periphery of sensory systems.
The evolution of computation in the brain: Insights from studying the retina
The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of lights as the natural input, to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make-up in extant species. Animals as diverse as eagles and lampreys share the same retinal make-up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retina neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?
Molecular Logic of Synapse Organization and Plasticity
Connections between nerve cells called synapses are the fundamental units of communication and information processing in the brain. The accurate wiring of neurons through synapses into neural networks or circuits is essential for brain organization. Neuronal networks are sculpted and refined throughout life by constant adjustment of the strength of synaptic communication by neuronal activity, a process known as synaptic plasticity. Deficits in the development or plasticity of synapses underlie various neuropsychiatric disorders, including autism, schizophrenia and intellectual disability. The Siddiqui lab research program comprises three major themes. One, to assess how biochemical switches control the activity of synapse organizing proteins, how these switches act through their binding partners and how these processes are regulated to correct impaired synaptic function in disease. Two, to investigate how synapse organizers regulate the specificity of neuronal circuit development and how defined circuits contribute to cognition and behaviour. Three, to address how synapses are formed in the developing brain and maintained in the mature brain and how microcircuits formed by synapses are refined to fine-tune information processing in the brain. Together, these studies have generated fundamental new knowledge about neuronal circuit development and plasticity and enabled us to identify targets for therapeutic intervention.
The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)
Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. 
The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.
Synthetic and natural images unlock the power of recurrency in primary visual cortex
During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images, only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images leads to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses, already at the earliest stages of cortical processing.
The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules
Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light is essential to their survival. Our present knowledge of this phenomenon has been limited due to the difficulty in bringing up live deep-sea animals to the surface, and the lack of proper techniques needed to study this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. This project aims to address three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting light? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will bring monumental contributions to the fields of deep-sea and evolutionary biology, and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach K-college aged students through the development and dissemination of educational tools, a series of molecular and organismal-based workshops, museum exhibits, public seminars, and biodiversity initiatives.
Mapping the Dynamics of the Linear and 3D Genome of Single Cells in the Developing Brain
Three intimately related dimensions of the mammalian genome—linear DNA sequence, gene transcription, and 3D genome architecture—are crucial for the development of nervous systems. Changes in the linear genome (e.g., de novo mutations), transcriptome, and 3D genome structure lead to debilitating neurodevelopmental disorders, such as autism and schizophrenia. However, current technologies and data are severely limited: (1) 3D genome structures of single brain cells have not been solved; (2) little is known about the dynamics of single-cell transcriptome and 3D genome after birth; (3) true de novo mutations are extremely difficult to distinguish from false positives (DNA damage and/or amplification errors). Here, I filled in this longstanding technological and knowledge gap. I recently developed a high-resolution method—diploid chromatin conformation capture (Dip-C)—which resolved the first 3D structure of the human genome, tackling a longstanding problem dating back to the 1880s. Using Dip-C, I obtained the first 3D genome structure of a single brain cell, and created the first transcriptome and 3D genome atlas of the mouse brain during postnatal development. I found that in adults, 3D genome “structure types” delineate all major cell types, with high correlation between chromatin A/B compartments and gene expression. During development, both transcriptome and 3D genome are extensively transformed in the first month of life. In neurons, 3D genome is rewired across scales, correlated with gene expression modules, and independent of sensory experience. Finally, I examined allele-specific structure of imprinted genes, revealing local and chromosome-wide differences. More recently, I expanded my 3D genome atlas to the human and mouse cerebellum—the most consistently affected brain region in autism. I uncovered unique 3D genome rewiring throughout life, providing a structural basis for the cerebellum’s unique mode of development and aging. 
In addition, to accurately measure de novo mutations in a single cell, I developed a new method—multiplex end-tagging amplification of complementary strands (META-CS), which eliminates nearly all false positives by virtue of DNA complementarity. Using META-CS, I determined the true mutation spectrum of single human brain cells, free from chemical artifacts. Together, my findings uncovered an unknown dimension of neurodevelopment, and open up opportunities for new treatments for autism and other developmental disorders.
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
Analogical Reasoning with Neuro-Symbolic AI
Knowledge discovery with computers requires a huge amount of search, and analogical reasoning is effective for making that discovery efficient. We therefore propose analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI is a combination of Symbolic AI and artificial neural networks, with features that make it easy for humans to interpret and robust against data ambiguity and errors. We have implemented analogical reasoning systems using Neuro-Symbolic AI models with word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted previously unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
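The role of word embeddings in analogical retrieval can be sketched with the classic vector-offset heuristic. This is a standard illustration of embedding similarity, not the authors' actual Neuro-Symbolic model, and the 2-D embedding values below are invented:

```python
# Toy 2-D "embeddings"; real systems use high-dimensional learned vectors.
# First coordinate loosely encodes royalty, second encodes gender.
emb = {
    "king":  [0.9, 0.8],
    "queen": [0.9, 0.2],
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.2],
    "boy":   [0.5, 0.9],
    "girl":  [0.5, 0.1],
}

def analogy(a, b, c):
    """Solve a : b :: c : ? by vector offset (b - a + c),
    returning the nearest word not already in the query."""
    target = [emb[b][i] - emb[a][i] + emb[c][i] for i in range(2)]

    def dist(w):
        return sum((emb[w][i] - target[i]) ** 2 for i in range(2))

    return min((w for w in emb if w not in (a, b, c)), key=dist)

print(analogy("man", "woman", "king"))  # -> queen
```

In the logic-based setting described above, the same similarity signal lets the system map a predicate or constant in a known rule onto a similar one in a new domain, so candidate rules can be proposed without exhaustively searching the knowledge base.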
Towards a More Authentic Vision of the (multi)Coding Potential of RNA
Tens of thousands of open reading frames (ORFs) are hidden within transcripts. They have eluded annotation because they are either small or located in unsuspected positions. These are named alternative ORFs (altORFs) or small ORFs, and they have recently been highlighted by innovative proteogenomic approaches, such as our OpenProt resource, revealing their existence and their implications in biological functions. Because altORFs are absent from annotations, pathogenic mutations within them are being ignored. I will discuss our latest progress on the re-analysis of large-scale proteomics datasets to improve our knowledge of proteomic diversity, and the functional characterization of a second protein coded by the FUS gene. Finally, I will explain the need to map the coding potential of the transcriptome using artificial intelligence rather than with conventional annotations that do not capture the full translational activity of ribosomes.
The Limits of Causal Reasoning in Human and Machine Learning
A key purpose of causal reasoning by individuals and by collectives is to enhance action, to give humans yet more control over their environment. As a result, causal reasoning serves as the infrastructure of both thought and discourse. Humans represent causal systems accurately in some ways, but also show some systematic biases (we tend to neglect causal pathways other than the one we are thinking about). Even when accurate, people’s understanding of causal systems tends to be superficial; we depend on our communities for most of our causal knowledge and reasoning. Nevertheless, we are better causal reasoners than machines. Modern machine learners do not come close to matching human abilities.
Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?
While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Here, learning through direct interaction with humans provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss my recent research on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.
Consciousness and implicit learning
Can we learn without conscious awareness? Abundant evidence from implicit learning research indicates that people can learn the statistical structure of stimuli seemingly without any awareness of the underlying rules. However, it remains unclear what types of knowledge can be acquired in implicit learning, what the relationship is between conscious and unconscious knowledge, and what the neural substrates are for the acquisition of conscious and unconscious knowledge. In this talk, I will discuss these ongoing questions.
Scaffolding up from Social Interactions: A proposal of how social interactions might shape learning across development
Social learning and analogical reasoning both provide exponential opportunities for learning. These skills have largely been studied independently, but my future research asks how combining skills across previously independent domains could add up to more than the sum of their parts. Analogical reasoning allows individuals to transfer learning between contexts and opens up infinite opportunities for innovation and knowledge creation. Its origins and development, so far, have largely been studied in purely cognitive domains. Constraining analogical development to non-social domains may mistakenly lead researchers to overlook its early roots and limit ideas about its potential scope. Building a bridge between social learning and analogy could facilitate identification of the origins of analogical reasoning and broaden its far-reaching potential. In this talk, I propose that the early emergence of social learning, its saliency, and its meaningful context for young children provides a springboard for learning. In addition to providing a strong foundation for early analogical reasoning, the social domain provides an avenue for scaling up analogies in order to learn to learn from others via increasingly complex and broad routes.
Astrocyte and oxytocin interaction regulates amygdala neuronal network activity and related behaviors
Oxytocin orchestrates social and emotional behaviors through modulation of neural circuits in brain structures such as the central amygdala (CeA). In this structure, the release of oxytocin modulates inhibitory circuits and subsequently suppresses fear responses and decreases anxiety levels. Using astrocyte-specific gain and loss of function approaches and pharmacology, we demonstrate that oxytocin signaling in the central amygdala relies on a subpopulation of astrocytes that represent a prerequisite for proper function of CeA circuits and adequate behavioral responses, both in rats and mice. Our work identifies astrocytes as crucial cellular intermediaries of oxytocinergic modulation in emotional behaviors related to anxiety or positive reinforcement. To our knowledge, this is the first demonstration of a direct role of astrocytes in oxytocin signaling and challenges the long-held dogma that oxytocin signaling occurs exclusively via direct action on neurons in the central nervous system.
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e., without any direct instruction or explicit reward. For example, we effortlessly estimate the average tallness of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilized to impact working memory and decision making behaviours.
Computational Principles of Event Memory
Our ability to understand ongoing events depends critically on general knowledge about how different kinds of situations work (schemas), and also on recollection of specific instances of these situations that we have previously experienced (episodic memory). The consensus around this general view masks deep questions about how these two memory systems interact to support event understanding: How do we build our library of schemas, and how exactly do we use episodic memory in the service of event understanding? Given rich, continuous inputs, when do we store and retrieve episodic memory “snapshots”, and how are they organized so as to ensure that we can retrieve the right snapshots at the right time? I will develop predictions about how these processes work using memory-augmented neural networks (i.e., neural networks that learn how to use episodic memory in the service of task performance), and I will present results from relevant fMRI and behavioral studies.
GuPPy, a Python toolbox for the analysis of fiber photometry data
Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and compatibility with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a well-commented Jupyter notebook designed to operate across platforms. GuPPy presents the user with a set of graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
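To illustrate the kind of computation such photometry tools perform, here is a minimal sketch of a standard ΔF/F normalization, in which an isosbestic control channel is fitted to the signal channel to estimate the artifactual baseline (this is a generic illustration with hypothetical function names, not GuPPy's actual API):

```python
import numpy as np

def delta_f_over_f(signal, control):
    """Compute dF/F by least-squares fitting the isosbestic control
    channel to the signal channel and using the fit as baseline F0."""
    slope, intercept = np.polyfit(control, signal, 1)
    fitted = slope * control + intercept
    return (signal - fitted) / fitted

# Synthetic example: a shared photobleaching decay plus one transient
t = np.linspace(0, 10, 1000)
control = 1.0 + 0.5 * np.exp(-t / 3)              # bleaching artifact only
signal = control + 0.2 * np.exp(-(t - 5.0) ** 2)  # artifact + real transient
dff = delta_f_over_f(signal, control)
```

Because the control channel carries the same artifacts as the signal channel but no neural transients, dividing out the fitted baseline leaves the transient as a peak in ΔF/F while the slow decay is removed.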
Novel word generalization in comparison designs: How do young children align stimuli when they learn object nouns and relational nouns?
It is well established that the opportunity to compare learning stimuli in a novel word learning/extension task elicits a larger number of conceptually relevant generalizations than standard no-comparison conditions. I will present results suggesting that the effectiveness of comparison depends on factors such as semantic distance, number of training items, dimension distinctiveness and interactions with age. I will address these issues in the case of familiar and unfamiliar object nouns and relational nouns. The alignment strategies followed by children during learning and at test (i.e., when learning items are compared and how children reach a solution) will be described with eye-tracking data. We will also assess the extent to which children’s performance in these tasks is associated with executive functions (inhibition and flexibility) and world knowledge. Finally, we will consider these issues in children with cognitive deficits (intellectual disability, developmental language disorder (DLD)).
Second National Training Course on Sleep Medicine
Many patients presenting to neurology either have primary sleep disorders or suffer from sleep comorbidity. Knowledge on the diagnosis, differential diagnostic considerations, and management of these disorders is therefore mandatory for the general neurologist. This comprehensive course may serve to fulfill part of the preparation requirements for trainees seeking to complete the Royal College Examinations in Neurology. This training course is for R4 and R5 residents in Canadian neurology training programs as well as neurologists.
The influence of menstrual cycle on the indices of cortical excitability
Menstruation is a normal physiological process in women, occurring as a result of changes in two ovarian hormones, estrogen and progesterone. As a result of these fluctuations, women experience various bodily symptoms: their immune system changes (Sekigawa et al., 2004), as do their cardiovascular and digestive systems (Millikan, 2006) and skin (Hall and Phillips, 2005). These hormone fluctuations also produce major changes in behavior, causing anxiety, sadness, heightened irritability and anger (Severino and Moline, 1995), a pattern usually classified as premenstrual syndrome (PMS). In some cases these symptoms severely impair women’s lives and professional help is required; the official diagnosis according to DSM-5 (2013) is premenstrual dysphoric disorder (PMDD). Despite its ubiquity, the origins of PMS and PMDD are poorly understood. Some efforts to understand the underlying brain state during the menstrual cycle have used TMS (Smith et al., 1999; 2002; 2003; Inghilleri et al., 2004; Hausmann et al., 2006), but all of these experiments suffer from major shortcomings: no control groups and small numbers of subjects. Our plan is to address all of these shortcomings and make this the largest experiment of its kind (to our knowledge), which will, we hope, provide some much-needed answers.
Neural mechanisms of altered states of consciousness under psychedelics
Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status for treating various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses of the brain’s functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows one to infer the directed connectivity among brain regions from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms, under the control of serotonin receptors, across the brain hierarchy influence sensory and associative brain connectivity. Understanding these neural mechanisms of the subjective and therapeutic effects of psychedelics is critical for rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is an NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute for Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.
Visual Decisions in Natural Action
Natural behavior reveals the way that gaze serves the needs of the current task, and the complex cognitive control mechanisms that are involved. It has become increasingly clear that even the simplest actions involve complex decision processes that depend on an interaction of visual information, knowledge of the current environment, and the intrinsic costs and benefits of action choices. I will explore these ideas in the context of walking in natural terrain, where we are able to recover the 3D structure of the visual environment. We show that subjects choose flexible paths that depend on the flatness of the terrain over the next few steps. Subjects trade off flatness with straightness of their paths towards the goal, indicating a nuanced trade-off between stability and energetic costs on both the time scale of the next step and longer-range constraints.
Seeing things clearly: Image understanding through hard-attention and reasoning with structured knowledge
In this talk, Jonathan aims to frame the current challenges of explainability and understanding in ML-driven approaches to image processing, and their potential solution through explicit inference techniques.
StereoSpike: Depth Learning with a Spiking Neural Network
Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, which we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground truth, which was used to train StereoSpike in a supervised manner, using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction, the depth of each pixel, from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, this is the first time that such a large-scale regression problem has been solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be implemented efficiently on neuromorphic chips, opening the door to low-power, real-time embedded systems.
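As background on the training method mentioned above, here is a generic sketch (not StereoSpike's actual code) of a leaky integrate-and-fire neuron together with a surrogate derivative for its spike function, the trick that makes gradient descent possible despite the non-differentiable threshold:

```python
import numpy as np

def lif_forward(inputs, tau=0.9, v_th=1.0):
    """Leaky integrate-and-fire neuron: leak the membrane potential,
    add input current, emit a spike at threshold, reset by subtraction."""
    v, spikes, potentials = 0.0, [], []
    for x in inputs:
        v = tau * v + x                # leaky integration
        s = 1.0 if v >= v_th else 0.0  # Heaviside spike (non-differentiable)
        v -= s * v_th                  # soft reset after a spike
        spikes.append(s)
        potentials.append(v)
    return np.array(spikes), np.array(potentials)

def surrogate_grad(v, v_th=1.0, scale=5.0):
    """Lorentzian (ATan-style) surrogate derivative of the spike function:
    a smooth bump around threshold replaces the Heaviside's zero gradient."""
    return scale / (1.0 + (scale * (v - v_th)) ** 2)

# Constant drive of 0.4 per step produces periodic spiking
spikes, potentials = lif_forward(np.full(10, 0.4))
```

In the backward pass, frameworks keep the hard threshold in the forward computation but substitute `surrogate_grad` for its derivative, so error signals can flow through the spiking nonlinearity.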
Spike-based embeddings for multi-relational graph data
A rich data representation that finds wide application in industry and research is the so-called knowledge graph: a graph-based structure in which entities are represented as nodes and relations between them as edges. Complex systems like molecules, social networks and industrial factory systems can be described in the common language of knowledge graphs, allowing graph embedding algorithms to make context-aware predictions in these information-rich environments.
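As a minimal illustration of these ideas (a generic sketch, not the speaker's spike-based method), a knowledge graph can be stored as a set of (head, relation, tail) triples, and a classic embedding model such as TransE scores a triple by how close head + relation lands to tail in the embedding space:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy knowledge graph as (head, relation, tail) triples
triples = [("aspirin", "treats", "headache"),
           ("aspirin", "is_a", "drug"),
           ("headache", "is_a", "symptom")]

entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})

dim = 8
E = {e: rng.normal(size=dim) for e in entities}   # entity embeddings
R = {r: rng.normal(size=dim) for r in relations}  # relation embeddings

def transe_score(h, r, t):
    """TransE scoring: plausible triples have small ||h + r - t||,
    so higher (less negative) scores mean more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

scores = {tr: transe_score(*tr) for tr in triples}
```

Training would adjust the embeddings so that observed triples score higher than corrupted ones; the resulting vectors can then rank candidate links that are absent from the graph.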
Learning an environment model in real-time with core knowledge and closed-loop behaviours
Bernstein Conference 2024
Associative memory of structured knowledge
COSYNE 2022
Evolution of neural activity in circuits bridging sensory and abstract knowledge
COSYNE 2022
Revealing latent knowledge in cortical networks during goal-directed learning
COSYNE 2022
Coordinated geometric representations of learned knowledge in hippocampus and frontal cortex
COSYNE 2023
Neural mechanisms of relational learning and fast knowledge reassembly
COSYNE 2025
Rapid emergence of latent knowledge in the sensory cortex drives learning
COSYNE 2025
Cascading memory search as a bridge between episodic memories and semantic knowledge
FENS Forum 2024
Incorporating new with old knowledge – curricular learning in anterior cingulate cortex
FENS Forum 2024
Measuring integration of novel and pre-existing knowledge in hippocampus and neocortex
FENS Forum 2024
Navigating through the entorhinal cortex: Combining single-cell electrophysiology and RNA sequencing to advance our knowledge on the neuronal architecture
FENS Forum 2024
The short and long of motor practice sessions: Equal performance gains but different “how to” knowledge
FENS Forum 2024