
Knowledge

Topic spotlight: knowledge (World Wide)

Discover seminars, jobs, and research tagged with knowledge across World Wide.
73 curated items · 60 seminars · 13 ePosters · updated within the last 3 months

73 results
Seminar · Neuroscience

Structural & Functional Neuroplasticity in Children with Hemiplegia

Christos Papadelis
University of Texas at Arlington
Feb 20, 2025

About 30% of children with cerebral palsy have congenital hemiplegia, resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual coordination. Congenital hemiplegia has a profound effect on each child's life and is therefore of great importance to public health. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present recent evidence from my team on neuroplasticity in children with congenital hemiplegia, obtained with a multimodal approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data on functional improvements in upper-extremity motor and sensory function following rehabilitation with a robotic system that involves active participation of the child in a video-game setup. This research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.

Seminar · Neuroscience

Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge

Jorge Almeida
University of Coimbra
Jan 27, 2025

Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to parse a complex and recursive environment with ease and proficiency. This challenging feat depends on an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that its organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will argue that this knowledge is laid out topographically on the cortical surface according to these object-related dimensions, which code for different types of representational content – what I call contentopic mapping. I will show a combination of fMRI and behavioral data supporting these hypotheses and present a principled way to explore the multidimensionality of object processing.

Seminar · Neuroscience · Recording

Characterizing the causal role of large-scale network interactions in supporting complex cognition

Michal Ramot
Weizmann Inst. of Science
May 6, 2024

Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, critical gaps remain in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance that explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.

Seminar · Neuroscience

Gut/Body interactions in health and disease

Julia Cordero
University of Glasgow
Nov 20, 2023

The adult intestine is a major barrier epithelium and coordinator of multi-organ functions. Stem cells constantly repair the intestinal epithelium by adjusting their proliferation and differentiation to tissue-intrinsic as well as micro- and macro-environmental signals. How these signals integrate to control intestinal and whole-body homeostasis is largely unknown. Addressing this gap in knowledge is central to an improved understanding of intestinal pathophysiology and its systemic consequences. Combining Drosophila and mammalian model systems, my laboratory has discovered fundamental mechanisms driving intestinal regeneration and tumourigenesis and outlined complex inter-organ signaling regulating health and disease. During my talk, I will discuss inter-related areas of research from my lab, including: (1) interactions between the intestine and its microenvironment that influence intestinal regeneration and tumourigenesis, and (2) long-range signals from the intestine that impact whole-body physiology in health and disease.

Seminar · Neuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 20, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential for facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
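The core operation the abstract describes — self-attention computed within non-overlapping windows of a 4D fMRI volume — can be sketched in a simplified, single-head NumPy form. The shapes, window size, and random weights below are illustrative only; the actual SwiFT model uses learned multi-head attention, patch embedding, and shifted windows:

```python
import numpy as np

def window_attention_4d(x, win=(2, 4, 4, 4)):
    """Single-head self-attention within non-overlapping 4D windows.

    x: array of shape (T, H, W, D, C) -- time, three spatial axes, channels.
    Returns an array of the same shape.
    """
    T, H, W, D, C = x.shape
    wt, wh, ww, wd = win
    assert T % wt == 0 and H % wh == 0 and W % ww == 0 and D % wd == 0
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))

    # Partition the volume into windows: (num_windows, tokens_per_window, C).
    xw = x.reshape(T // wt, wt, H // wh, wh, W // ww, ww, D // wd, wd, C)
    xw = xw.transpose(0, 2, 4, 6, 1, 3, 5, 7, 8)
    n = xw.shape[:4]
    tokens = xw.reshape(-1, wt * wh * ww * wd, C)

    # Scaled dot-product attention, applied independently per window.
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    att = q @ k.transpose(0, 2, 1) / np.sqrt(C)
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att /= att.sum(axis=-1, keepdims=True)
    out = att @ v

    # Undo the window partition.
    out = out.reshape(*n, wt, wh, ww, wd, C)
    out = out.transpose(0, 4, 1, 5, 2, 6, 3, 7, 8)
    return out.reshape(T, H, W, D, C)

x = np.random.default_rng(1).standard_normal((4, 8, 8, 8, 16))
y = window_attention_4d(x)
```

Restricting attention to local windows is what keeps the cost tractable: attention is quadratic in token count, so attending within windows of 128 tokens is far cheaper than attending across all 2,048 positions of the volume at once.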

Seminar · Neuroscience

Learning through the eyes and ears of a child

Brenden Lake
NYU
Apr 20, 2023

Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks; 2) based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity; 3) based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.

Seminar · Neuroscience · Recording

Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines

Shiwali Mohan
Palo Alto Research Center
Mar 30, 2023

Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit this structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior – analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.

Seminar · Neuroscience

Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach

Valentina Borghesani
University of Geneva, Switzerland & NCCR Evolving Language
Mar 15, 2023

The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1), and rapidly recovered (2) according to the requirements of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window on the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).

Seminar · Neuroscience

Integration of 3D human stem cell models derived from post-mortem tissue and statistical genomics to guide schizophrenia therapeutic development

Jennifer Erwin, Ph.D
Lieber Institute for Brain Development; Department of Neurology and Neuroscience; Johns Hopkins University School of Medicine
Mar 14, 2023

Schizophrenia is a neuropsychiatric disorder characterized by positive symptoms (such as hallucinations and delusions), negative symptoms (such as avolition and withdrawal) and cognitive dysfunction. Schizophrenia is highly heritable, and genetic studies are playing a pivotal role in identifying potential biomarkers and causal disease mechanisms with the hope of informing new treatments. Genome-wide association studies (GWAS) have identified nearly 270 loci with a high statistical association with schizophrenia risk; however, each locus confers only a small increase in risk, making it difficult to translate these findings into an understanding of disease biology that can lead to treatments. Induced pluripotent stem cell (iPSC) models are a tractable system in which to translate genetic findings and interrogate mechanisms of pathogenesis. Mounting research with patient-derived iPSCs has proposed several neurodevelopmental pathways altered in SCZ, such as neural progenitor cell (NPC) proliferation and imbalanced differentiation of excitatory and inhibitory cortical neurons. However, it is unclear what exactly these iPSC models recapitulate, how potential perturbations of early brain development translate into illness in adults, and how iPSC models that represent fetal stages can be utilized to further drug-development efforts to treat adult illness. I will present the largest transcriptome analysis of post-mortem caudate nucleus in schizophrenia, in which we discovered that decreased presynaptic DRD2 autoregulation is the causal dopamine risk factor for schizophrenia (Benjamin et al., Nature Neuroscience 2022, https://doi.org/10.1038/s41593-022-01182-7). We developed stem cell models from a subset of the postmortem cohort to better understand the molecular underpinnings of human psychiatric disorders (Sawada et al., Stem Cell Research 2020). We established a method for the differentiation of iPS cells into ventral forebrain organoids and performed single-cell RNAseq and cellular phenotyping.
To our knowledge, this is the first study to evaluate iPSC models of SCZ derived from the same individuals as postmortem tissue. Our study establishes that striatal neurons in patients with SCZ carry abnormalities that originated during early brain development. Differentiation of inhibitory neurons is accelerated whereas excitatory neuronal development is delayed, implicating an excitation-inhibition (E-I) imbalance during early brain development in SCZ. We found a significant overlap between genes upregulated in the inhibitory neurons of SCZ organoids and genes upregulated in postmortem caudate tissue from patients with SCZ compared with control individuals, including the donors of our iPS cell cohort. Altogether, we demonstrate that ventral forebrain organoids derived from postmortem tissue of individuals with schizophrenia recapitulate the perturbed striatal gene expression dynamics of the donors’ brains (Sawada et al., bioRxiv 2022, https://doi.org/10.1101/2022.05.26.493589).

Seminar · Neuroscience · Recording

Cognitive supports for analogical reasoning in rational number understanding

Shuyuan Yu
Carleton University
Mar 2, 2023

In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism that lets children learn novel, abstract concepts from only limited input. However, retrieving the proper analogy requires cognitive support. In this talk, I propose and examine number lines as a mathematical schema of the number system that facilitates both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention of spatial alignment between integers and fractions on number lines produced broad and durable gains in understanding of fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitates explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas, and they shed light on helpful practices for STEM curricula and instruction.

Seminar · Neuroscience · Recording

Do large language models solve verbal analogies like children do?

Claire Stevenson
University of Amsterdam
Nov 16, 2022

Analogical reasoning – learning about new things by relating them to previous knowledge – lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress moving from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children’s responses and those of six large language models (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) to verbal analogies extracted from an online adaptive learning environment, where >14,000 7-12-year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch-language verbal analogies.
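For the embedding-based baselines (Word2Vec, Fasttext), verbal analogies are typically solved by the vector-offset method: answer "a is to b as c is to ?" with the word closest to b − a + c. The sketch below illustrates the idea with tiny hand-invented vectors (the four feature dimensions and values are made up for illustration, not taken from any trained model):

```python
import numpy as np

# Toy embedding space; dimensions loosely encode
# [is-animal, is-dwelling, horse-related, chicken-related].
vectors = {
    "horse":   np.array([1.0, 0.0, 1.0, 0.0]),
    "stable":  np.array([0.0, 1.0, 1.0, 0.0]),
    "chicken": np.array([1.0, 0.0, 0.0, 1.0]),
    "coop":    np.array([0.0, 1.0, 0.0, 1.0]),
    "egg":     np.array([0.8, 0.0, 0.0, 1.0]),  # associatively close to "chicken"
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by the vector-offset method."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

answer = solve_analogy("horse", "stable", "chicken")
print(answer)  # -> "coop", the relational response
```

In this toy space the offset isolates the animal-to-dwelling relation, so "coop" beats the associatively related "egg"; the study's point is that real language models often fail exactly this associative-versus-relational discrimination.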

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory-driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli, as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and take a form – spike-time coincidences – amenable to biophysical mechanisms of neural information propagation.

Seminar · Neuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks who lack the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, typically being slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
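The "numerical integration of neuronal states" that such backends parallelize is embarrassingly parallel: every neuron runs the same update independently. A minimal leaky integrate-and-fire loop in plain NumPy (illustrative only; not Brian's model syntax or its generated code, and all parameter values are arbitrary) shows the pattern a GPU backend maps onto one thread per neuron:

```python
import numpy as np

def simulate_lif(n=1000, steps=500, dt=1e-4, tau=0.01,
                 v_rest=-0.07, v_reset=-0.07, v_thresh=-0.05, i_ext=0.025):
    """Forward-Euler integration of n independent LIF neurons.

    Every line of the update acts element-wise over all n neurons --
    exactly the per-neuron parallelism a GPU backend exploits.
    """
    rng = np.random.default_rng(0)
    v = np.full(n, v_rest)
    spike_counts = np.zeros(n, dtype=int)
    for _ in range(steps):
        drive = i_ext * (1 + 0.1 * rng.standard_normal(n))  # noisy input drive
        v += dt / tau * (v_rest - v + drive)                # leaky integration
        fired = v >= v_thresh                               # threshold crossing
        spike_counts += fired
        v[fired] = v_reset                                  # reset after a spike
    return spike_counts

counts = simulate_lif()
```

The hard part a backend like Brian2CUDA solves is not this state update but the synaptic event propagation, where spikes from one neuron must be delivered to arbitrary sets of targets with heterogeneous delays without serializing the GPU.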

Seminar · Neuroscience · Recording

Associative memory of structured knowledge

Julia Steinberg
Princeton University
Oct 25, 2022

A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
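A minimal version of this pipeline — bind attribute–event pairs with a VSA scheme (here, elementwise multiplication of random ±1 vectors), binarize the superposition, store it as a fixed point with a Hebbian outer-product rule, and retrieve it from a partial cue — can be sketched as follows. The dimension, the number of patterns, and the specific binding and plasticity rules are illustrative choices, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048                                    # neural pattern dimension

def rand_vec():
    return rng.choice([-1.0, 1.0], size=D)

# One "structure": three attribute-event bindings, superposed and binarized.
attrs  = {a: rand_vec() for a in ("order", "location", "agent")}
events = {e: rand_vec() for e in ("breakfast", "meeting", "alice")}
structure = np.sign(attrs["order"] * events["breakfast"]
                    + attrs["location"] * events["meeting"]
                    + attrs["agent"] * events["alice"])

# Store the structure plus distractor patterns with a Hebbian rule.
patterns = np.array([structure] + [rand_vec() for _ in range(4)])
W = patterns.T @ patterns / D
np.fill_diagonal(W, 0.0)

# Recall from a partial cue: 15% of entries flipped, a few updates.
cue = structure.copy()
flip = rng.choice(D, size=int(0.15 * D), replace=False)
cue[flip] *= -1
for _ in range(3):
    cue = np.sign(W @ cue)
overlap = (cue @ structure) / D             # 1.0 means perfect recall

# Unbinding: multiplying the recalled pattern by an attribute vector
# yields a noisy copy of the bound event (self-inverse binding).
decoded = cue * attrs["order"]
sims = {e: (decoded @ v) / D for e, v in events.items()}
```

Note the two levels of retrieval the abstract distinguishes: the attractor dynamics recover the whole structure from a partial cue, and unbinding then recovers an individual building block (here, `sims["breakfast"]` is large while the other similarities hover near zero).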

Seminar · Neuroscience · Recording

Is Theory of Mind Analogical? Evidence from the Analogical Theory of Mind cognitive model

Irina Rabkina
Occidental College
Sep 29, 2022

Theory of mind, which consists of reasoning about the knowledge, beliefs, desires, and similar mental states of others, is a key component of social reasoning and social interaction. While it has been studied by cognitive scientists for decades, none of the prevailing theories of the processes that underlie theory of mind reasoning and development explains the breadth of experimental findings. I propose that this is because theory of mind is, like much of human reasoning, inherently analogical. In this talk, I will discuss several theory of mind findings from the psychology literature and the challenges they pose for our understanding of theory of mind, and bring in evidence from the Analogical Theory of Mind (AToM) cognitive model that demonstrates how these findings fit into an analogical understanding of theory of mind reasoning.

Seminar · Neuroscience

Chandelier cells shine a light on the emergence of GABAergic circuits in the cortex

Juan Burrone
King’s College London
Sep 27, 2022

GABAergic interneurons are chiefly responsible for controlling the activity of local circuits in the cortex. Chandelier cells (ChCs) are a type of GABAergic interneuron that control the output of hundreds of neighbouring pyramidal cells through axo-axonic synapses targeting the axon initial segment (AIS). Despite their importance in modulating circuit activity, our knowledge of the development and function of axo-axonic synapses remains limited. We have investigated the emergence and plasticity of axo-axonic synapses in layer 2/3 of the somatosensory cortex (S1) and found that ChCs follow what appear to be homeostatic rules when forming synapses with pyramidal neurons. We are currently implementing in vivo techniques to image the process of axo-axonic synapse formation during development and uncover the dynamics of synaptogenesis and pruning at the AIS. In addition, we are using an all-optical approach to both activate and measure the activity of chandelier cells and their postsynaptic partners in the primary visual cortex (V1) and somatosensory cortex (S1) in mice, also during development. We aim to provide a structural and functional description of the emergence and plasticity of a GABAergic synapse type in the cortex.

Seminar · Neuroscience · Recording

A model of colour appearance based on efficient coding of natural images

Jolyon Troscianko
University of Exeter
Jul 17, 2022

An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetric or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and provides a basis for modelling the vision of humans and other animals.
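The band structure the model describes — octave-spaced bandpass channels, a per-band lower threshold, and saturation at a fixed multiple of that threshold — can be caricatured in one dimension. This is a luminance-only toy (the real model is spatio-chromatic and noisy), and the threshold values and saturation multiple below are invented for illustration:

```python
import numpy as np

def band_responses(signal, n_bands=4, sat_multiple=8.0):
    """Octave-spaced difference-of-Gaussians bands with threshold and saturation."""
    def gauss_blur(x, sigma):
        radius = int(4 * sigma)
        t = np.arange(-radius, radius + 1)
        k = np.exp(-t**2 / (2 * sigma**2))
        k /= k.sum()
        return np.convolve(x, k, mode="same")

    responses = []
    for b in range(n_bands):
        sigma = 2.0 ** b                        # one-octave scale spacing
        band = gauss_blur(signal, sigma) - gauss_blur(signal, 2 * sigma)
        threshold = 0.01 * (b + 1)              # stand-in for a per-band CSF limit
        ceiling = sat_multiple * threshold      # fixed dynamic range above threshold
        r = np.clip(np.abs(band), threshold, ceiling) * np.sign(band)
        r[np.abs(band) < threshold] = 0.0       # sub-threshold contrast is lost
        responses.append(r)
    return np.stack(responses)

signal = 0.5 * np.sin(np.linspace(0, 8 * np.pi, 512))
R = band_responses(signal)
```

The two nonlinearities are what give such models explanatory power for appearance phenomena: contrast below the band's threshold is discarded, and contrast above the ceiling is compressed, so the encoded image is not a faithful copy of the physical one.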

Seminar · Neuroscience · Recording

Exploration-Based Approach for Computationally Supported Design-by-Analogy

Hyeonik Song
Texas A&M University
Jul 7, 2022

Engineering designers practice design-by-analogy (DbA) during concept generation to retrieve knowledge from external sources or memory as inspiration to solve design problems. DbA is a tool for innovation that involves retrieving analogies from a source domain and transferring the knowledge to a target domain. While DbA produces innovative results, designers often come up with analogies by themselves or through serendipitous, random encounters. Computational support systems for searching analogies have been developed to facilitate DbA in systematic design practice. However, many systems have focused on a query-based approach, in which a designer inputs a keyword or a query function and is returned a set of algorithmically determined stimuli. In this presentation, a new analogical retrieval process that leverages a visual interaction technique is introduced. It enables designers to explore a space of analogies, rather than be constrained by what is retrieved by a query-based algorithm. With an exploration-based DbA tool, designers have the potential to uncover more useful and unexpected inspiration for innovative design solutions.

Seminar · Neuroscience · Recording

A Game Theoretical Framework for Quantifying​ Causes in Neural Networks

Kayson Fakhar​
ICNS Hamburg
Jul 5, 2022

Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node. A causal role is then assigned to the first node if this intervention led to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework. Namely, a node might causally influence another node, but how much and through which mechanistic interactions? Therefore, establishing a causal relationship, however reliable, does not provide the proper causal understanding of the system, because there often exists a wide range of causal influences that require to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). Then, I present our work in which we employed MSA on an Echo State Network (ESN), quantified how much its nodes were influencing each other, and compared these measures with the underlying synaptic strength. We found that: 1. Even though the network itself was sparse, every node could causally influence other nodes. In this case, a mere elucidation of causal relationships did not provide any useful information. 2. Additionally, the full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via other intermediate nodes. Our results show that just elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions. Moreover, quantifying causal interactions requires a systematic and extensive manipulation framework. 
The framework put forward here benefits from employing neural network models, and in turn, provides explainability for them.
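The lesioning logic behind MSA can be illustrated with a toy sketch (my own illustration, not the speaker's code): each node's Shapley value is its average marginal contribution to network performance over randomly ordered restorations of lesioned nodes. The `performance` function below is a hypothetical stand-in for an ESN's task score.

```python
import random

def shapley_contributions(nodes, performance, n_perm=200, seed=0):
    """Estimate each node's Shapley value: its average marginal
    contribution to performance over random restoration orders."""
    rng = random.Random(seed)
    values = {n: 0.0 for n in nodes}
    for _ in range(n_perm):
        order = nodes[:]
        rng.shuffle(order)
        intact = set()
        prev = performance(intact)
        for n in order:
            intact.add(n)
            curr = performance(intact)
            values[n] += curr - prev
            prev = curr
    return {n: v / n_perm for n, v in values.items()}

# Hypothetical "network": A contributes directly, while B and C are redundant
# routes that only matter when A is intact -- a pattern that single-node
# lesions cannot disentangle but multi-perturbation analysis can.
def performance(intact):
    score = 0.0
    if "A" in intact:
        score += 1.0
        if "B" in intact or "C" in intact:
            score += 0.5
    return score

print(shapley_contributions(["A", "B", "C"], performance))
```

Because the marginal contributions telescope, the estimated values always sum to the performance of the fully intact network, which is the efficiency property of Shapley values.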

SeminarNeuroscienceRecording

Where do problem spaces come from? On metaphors and representational change

Benjamin Angerer
Osnabrück University
Jun 15, 2022

The challenges of problem solving do not exclusively lie in how to perform heuristic search; they begin with how we understand a given task: how we cognitively represent the task domain and its components can determine how quickly someone is able to progress towards a solution, whether advanced strategies can be discovered, or even whether a solution is found at all. While this challenge of constructing and changing representations was acknowledged early on in problem solving research, for the most part it has been sidestepped by focussing on simple, well-defined problems whose representation is almost fully determined by the task instructions. Thus, the established theory of problem solving as heuristic search in problem spaces has little to say on this. In this talk, I will present a study designed to explore this issue, whose main challenge for participants lies in finding and refining an adequate problem representation. In this exploratory case study, it was investigated how pairs of participants acquainted themselves with a complex spatial transformation task in the domain of iterated mental paper folding over the course of several days. Participants have to understand the geometry of edges that arises when a sheet of paper is repeatedly folded mentally in alternating directions, without the use of external aids. Faced with the difficulty of handling increasingly complex folds in light of limited cognitive capacity, participants are forced to look for ways to represent folds more efficiently. In a qualitative analysis of video recordings of the participants' behaviour, the development of their conceptualisation of the task domain was traced over the course of the study, focussing especially on their use of gesture and the spontaneous occurrence and use of metaphors in the construction of new representations.
Based on these observations, I will conclude the talk with several theoretical speculations regarding the roles of metaphor and cognitive capacity in representational change.

SeminarNeuroscience

On the contributions of retinal direction selectivity to cortical motion processing in mice

Rune Nguyen Rasmussen
University of Copenhagen
Jun 9, 2022

Cells preferentially responding to visual motion in a particular direction are said to be direction-selective, and these were first identified in the primary visual cortex. Since then, direction-selective responses have been observed in the retina of several species, including mice, indicating that motion analysis begins at the earliest stage of the visual hierarchy. Yet little is known about how retinal direction selectivity contributes to motion processing in the visual cortex. In this talk, I will present our experimental efforts to narrow this gap in our knowledge. To this end, we used genetic approaches to disrupt direction selectivity in the retina and mapped neuronal responses to visual motion in the visual cortex of mice using intrinsic signal optical imaging and two-photon calcium imaging. In essence, our work demonstrates that direction selectivity computed at the level of the retina causally serves to establish specialized motion responses in distinct areas of the mouse visual cortex. This finding thus compels us to revisit our notions of how the brain builds complex visual representations and underscores the importance of the processing performed in the periphery of sensory systems.

SeminarNeuroscience

The evolution of computation in the brain: Insights from studying the retina

Tom Baden
University of Sussex (UK)
Jun 1, 2022

The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of lights as the natural input, to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make up in extant species. Animals as diverse as eagles and lampreys share the same retinal make up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retina neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct to acknowledge the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?

SeminarNeuroscience

Molecular Logic of Synapse Organization and Plasticity

Tabrez Siddiqui
University of Manitoba
May 30, 2022

Connections between nerve cells called synapses are the fundamental units of communication and information processing in the brain. The accurate wiring of neurons through synapses into neural networks or circuits is essential for brain organization. Neuronal networks are sculpted and refined throughout life by constant adjustment of the strength of synaptic communication by neuronal activity, a process known as synaptic plasticity. Deficits in the development or plasticity of synapses underlie various neuropsychiatric disorders, including autism, schizophrenia and intellectual disability. The Siddiqui lab research program comprises three major themes. One, to assess how biochemical switches control the activity of synapse organizing proteins, how these switches act through their binding partners and how these processes are regulated to correct impaired synaptic function in disease. Two, to investigate how synapse organizers regulate the specificity of neuronal circuit development and how defined circuits contribute to cognition and behaviour. Three, to address how synapses are formed in the developing brain and maintained in the mature brain and how microcircuits formed by synapses are refined to fine-tune information processing in the brain. Together, these studies have generated fundamental new knowledge about neuronal circuit development and plasticity and enabled us to identify targets for therapeutic intervention.

SeminarNeuroscienceRecording

The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)

Elizabeth Jefferies
Department of Psychology, University of York, UK
May 24, 2022

Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. 
The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.

SeminarNeuroscience

Synthetic and natural images unlock the power of recurrency in primary visual cortex

Andreea Lazar
Ernst Strüngmann Institute (ESI) for Neuroscience
May 19, 2022

During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images, only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images leads to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses, already at the earliest stages of cortical processing.

SeminarNeuroscienceRecording

The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules

Megan Porter
University of Hawaii
May 1, 2022

Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light is essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface and the lack of proper techniques needed to study this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. This project aims to address three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting it? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will make monumental contributions to the fields of deep-sea and evolutionary biology, and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach K-to-college-aged students through the development and dissemination of educational tools, a series of molecular and organismal-based workshops, museum exhibits, public seminars, and biodiversity initiatives.

SeminarNeuroscience

Mapping the Dynamics of the Linear and 3D Genome of Single Cells in the Developing Brain

Longzhi Tan
Stanford
Mar 29, 2022

Three intimately related dimensions of the mammalian genome—linear DNA sequence, gene transcription, and 3D genome architecture—are crucial for the development of nervous systems. Changes in the linear genome (e.g., de novo mutations), transcriptome, and 3D genome structure lead to debilitating neurodevelopmental disorders, such as autism and schizophrenia. However, current technologies and data are severely limited: (1) 3D genome structures of single brain cells have not been solved; (2) little is known about the dynamics of single-cell transcriptome and 3D genome after birth; (3) true de novo mutations are extremely difficult to distinguish from false positives (DNA damage and/or amplification errors). Here, I filled in this longstanding technological and knowledge gap. I recently developed a high-resolution method—diploid chromatin conformation capture (Dip-C)—which resolved the first 3D structure of the human genome, tackling a longstanding problem dating back to the 1880s. Using Dip-C, I obtained the first 3D genome structure of a single brain cell, and created the first transcriptome and 3D genome atlas of the mouse brain during postnatal development. I found that in adults, 3D genome “structure types” delineate all major cell types, with high correlation between chromatin A/B compartments and gene expression. During development, both transcriptome and 3D genome are extensively transformed in the first month of life. In neurons, 3D genome is rewired across scales, correlated with gene expression modules, and independent of sensory experience. Finally, I examined allele-specific structure of imprinted genes, revealing local and chromosome-wide differences. More recently, I expanded my 3D genome atlas to the human and mouse cerebellum—the most consistently affected brain region in autism. I uncovered unique 3D genome rewiring throughout life, providing a structural basis for the cerebellum’s unique mode of development and aging. 
In addition, to accurately measure de novo mutations in a single cell, I developed a new method—multiplex end-tagging amplification of complementary strands (META-CS), which eliminates nearly all false positives by virtue of DNA complementarity. Using META-CS, I determined the true mutation spectrum of single human brain cells, free from chemical artifacts. Together, my findings uncovered an unknown dimension of neurodevelopment, and open up opportunities for new treatments for autism and other developmental disorders.

SeminarNeuroscienceRecording

Analogical Reasoning with Neuro-Symbolic AI

Hiroshi Honda
Keio University
Feb 23, 2022

Knowledge discovery with computers requires a huge amount of search, and analogical reasoning is effective for making that search efficient. We therefore proposed analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI combines Symbolic AI with artificial neural networks; it is easy for humans to interpret and robust against data ambiguity and errors. We implemented analogical reasoning systems with Neuro-Symbolic AI models using word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
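As a rough sketch of how word embeddings can support analogical transfer of logical facts (a simplified illustration with hand-set toy vectors, not the authors' system): a known fact about one predicate argument is extended to any word whose embedding is sufficiently similar.

```python
import math

# Toy word vectors (hypothetical, hand-set; real systems use trained embeddings).
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.0],
    "car": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# A known Prolog-style fact: mammal(cat).
known_facts = {("mammal", "cat")}

def infer_by_analogy(predicate, word, threshold=0.9):
    """Accept predicate(word) if word is embedding-similar to a known argument."""
    for (p, arg) in known_facts:
        if p == predicate and cosine(embeddings[arg], embeddings[word]) >= threshold:
            return True
    return False

print(infer_by_analogy("mammal", "dog"))  # similar to "cat", so accepted
print(infer_by_analogy("mammal", "car"))  # dissimilar, so rejected
```

The similarity threshold plays the role of the "robustness against ambiguity" the abstract mentions: near-synonyms license the analogy, unrelated words do not.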

SeminarNeuroscience

Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?

Dorothea Koert
Technical University of Darmstadt
Dec 14, 2021

While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Learning through direct interaction with humans thus provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss recent research I did on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.

SeminarNeuroscience

Scaffolding up from Social Interactions: A proposal of how social interactions might shape learning across development

Sarah Gerson
Cardiff University
Dec 8, 2021

Social learning and analogical reasoning both provide exponential opportunities for learning. These skills have largely been studied independently, but my future research asks how combining skills across previously independent domains could add up to more than the sum of their parts. Analogical reasoning allows individuals to transfer learning between contexts and opens up infinite opportunities for innovation and knowledge creation. Its origins and development, so far, have largely been studied in purely cognitive domains. Constraining analogical development to non-social domains may mistakenly lead researchers to overlook its early roots and limit ideas about its potential scope. Building a bridge between social learning and analogy could facilitate identification of the origins of analogical reasoning and broaden its far-reaching potential. In this talk, I propose that the early emergence of social learning, its saliency, and its meaningful context for young children provides a springboard for learning. In addition to providing a strong foundation for early analogical reasoning, the social domain provides an avenue for scaling up analogies in order to learn to learn from others via increasingly complex and broad routes.

SeminarNeuroscienceRecording

NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks

Athena Akrami
University College London
Dec 1, 2021

The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e. without any direct instruction or explicit reward. For example, we effortlessly estimate the average tallness of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilized to impact working memory and decision making behaviours.
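The effect of a learned sensory prior on working memory can be pictured with a toy contraction-bias model (an illustrative assumption of mine, not the lab's model): each remembered stimulus value is pulled toward the running mean of the stimuli experienced so far.

```python
# Toy contraction-bias model (illustrative assumption, not the speaker's model):
# each remembered value is a weighted mix of the true stimulus and the running
# mean of all stimuli experienced so far (the learned sensory prior).
def remembered(stimuli, weight=0.2):
    traces, total = [], 0.0
    for i, s in enumerate(stimuli, start=1):
        total += s
        prior_mean = total / i  # prior built passively from stimulus history
        traces.append((1 - weight) * s + weight * prior_mean)
    return traces

stimuli = [10, 90, 50, 20, 80]
print(remembered(stimuli))  # extreme values are pulled toward the mean
```

Even this minimal mixing rule reproduces the qualitative signature reported in such tasks: high stimuli are remembered as lower, and low stimuli as higher, than they actually were.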

SeminarNeuroscience

Computational Principles of Event Memory

Ken Norman
Princeton University
Dec 1, 2021

Our ability to understand ongoing events depends critically on general knowledge about how different kinds of situations work (schemas), and also on recollection of specific instances of these situations that we have previously experienced (episodic memory). The consensus around this general view masks deep questions about how these two memory systems interact to support event understanding: How do we build our library of schemas? and how exactly do we use episodic memory in the service of event understanding? Given rich, continuous inputs, when do we store and retrieve episodic memory “snapshots”, and how are they organized so as to ensure that we can retrieve the right snapshots at the right time? I will develop predictions about how these processes work using memory augmented neural networks (i.e., neural networks that learn how to use episodic memory in the service of task performance), and I will present results from relevant fMRI and behavioral studies.
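The episodic store of a memory-augmented network can be caricatured as a key-value memory queried by similarity (a minimal sketch under assumed conventions, not the speaker's actual model): "snapshots" are stored under key vectors, and a query retrieves the best-matching snapshot.

```python
import math

# Minimal key-value episodic memory (illustrative sketch): situation
# "snapshots" are stored under key vectors and retrieved by cosine similarity.
class EpisodicMemory:
    def __init__(self):
        self.keys, self.values = [], []

    def store(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def retrieve(self, query):
        def cos(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            nu = math.sqrt(sum(a * a for a in u))
            nv = math.sqrt(sum(b * b for b in v))
            return dot / (nu * nv)
        best = max(range(len(self.keys)), key=lambda i: cos(self.keys[i], query))
        return self.values[best]

mem = EpisodicMemory()
mem.store([1.0, 0.0], "restaurant episode")  # hypothetical situation codes
mem.store([0.0, 1.0], "airport episode")
print(mem.retrieve([0.9, 0.2]))
```

In a trained memory-augmented network, the keys and queries would themselves be learned so that "the right snapshots" surface at the right time, which is exactly the organisational question the talk raises.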

SeminarNeuroscience

Second National Training Course on Sleep Medicine

Birgit Frauscher, MD, Brian Murray, MD, Ron Postuma, MD
Nov 17, 2021

Many patients presenting to neurology either have primary sleep disorders or suffer from sleep comorbidity. Knowledge on the diagnosis, differential diagnostic considerations, and management of these disorders is therefore mandatory for the general neurologist. This comprehensive course may serve to fulfill part of the preparation requirements for trainees seeking to complete the Royal College Examinations in Neurology. This training course is for R4 and R5 residents in Canadian neurology training programs as well as neurologists.

SeminarNeuroscience

The influence of menstrual cycle on the indices of cortical excitability

Vladimir Djurdjevic
HSE University
Nov 17, 2021

Menstruation is a normal physiological process in women, occurring as a result of changes in two ovarian hormones, estrogen and progesterone. As a result of these fluctuations, women experience different symptoms in their bodies: their immune system changes (Sekigawa et al., 2004), there are changes in their cardiovascular and digestive systems (Millikan, 2006), as well as in their skin (Hall and Phillips, 2005). These hormone fluctuations also produce major changes in behaviour, causing anxiety, sadness, heightened irritability and anger (Severino and Moline, 1995), a pattern usually classified as premenstrual syndrome (PMS). In some cases these symptoms severely impair women’s lives and professional help is required; the official diagnosis according to the DSM-5 (2013) is premenstrual dysphoric disorder (PMDD). Despite its ubiquitous presence, the origins of PMS and PMDD are poorly understood. Some efforts to understand the underlying brain state during the menstrual cycle have been made using TMS (Smith et al., 1999; 2002; 2003; Inghilleri et al., 2004; Hausmann et al., 2006), but all of these experiments suffer from major shortcomings: no control groups and small numbers of subjects. Our plan is to address these shortcomings and make this the largest experiment of its kind (to our knowledge), which will, hopefully, provide some much-needed answers.

SeminarNeuroscience

Neural mechanisms of altered states of consciousness under psychedelics

Adeel Razi and Devon Stoliker
Monash Biomedical Imaging
Nov 10, 2021

Interest in psychedelic compounds is growing due to their remarkable potential for understanding altered neural states and their breakthrough status to treat various psychiatric disorders. However, there are major knowledge gaps regarding how psychedelics affect the brain. The Computational Neuroscience Laboratory at the Turner Institute for Brain and Mental Health, Monash University, uses multimodal neuroimaging to test hypotheses of the brain’s functional reorganisation under psychedelics, informed by accounts of hierarchical predictive processing, using dynamic causal modelling (DCM). DCM is a generative modelling technique that allows the directed connectivity among brain regions to be inferred from functional brain imaging measurements. In this webinar, Associate Professor Adeel Razi and PhD candidate Devon Stoliker will showcase a series of previous and new findings on how changes to synaptic mechanisms, under the control of serotonin receptors, across the brain hierarchy influence sensory and associative brain connectivity. Understanding these neural mechanisms of subjective and therapeutic effects of psychedelics is critical for rational development of novel treatments and for the design and success of future clinical trials. Associate Professor Adeel Razi is a NHMRC Investigator Fellow and CIFAR Azrieli Global Scholar at the Turner Institute of Brain and Mental Health, Monash University. He performs cross-disciplinary research combining engineering, physics, and machine learning. Devon Stoliker is a PhD candidate at the Turner Institute for Brain and Mental Health, Monash University. His interest in consciousness and psychiatry has led him to investigate the neural mechanisms of classic psychedelic effects in the brain.

SeminarNeuroscienceRecording

Visual Decisions in Natural Action

Mary Hayhoe
University of Texas, Austin
Nov 8, 2021

Natural behavior reveals the way that gaze serves the needs of the current task, and the complex cognitive control mechanisms that are involved. It has become increasingly clear that even the simplest actions involve complex decision processes that depend on an interaction of visual information, knowledge of the current environment, and the intrinsic costs and benefits of action choices. I will explore these ideas in the context of walking in natural terrain, where we are able to recover the 3D structure of the visual environment. We show that subjects choose flexible paths that depend on the flatness of the terrain over the next few steps. Subjects trade off flatness with straightness of their paths towards the goal, indicating a nuanced trade-off between stability and energetic costs on both the time scale of the next step and longer-range constraints.

SeminarArtificial Intelligence

Seeing things clearly: Image understanding through hard-attention and reasoning with structured knowledge

Jonathan Gerrand
University of the Witwatersrand
Nov 3, 2021

In this talk, Jonathan aims to frame the current challenges of explainability and understanding in ML-driven approaches to image processing, and their potential solution through explicit inference techniques.

SeminarNeuroscienceRecording

Spike-based embeddings for multi-relational graph data

Dominik Dold
European Space Research and Technology Centre
Nov 1, 2021

A rich data representation that finds wide application in industry and research is the so-called knowledge graph - a graph-based structure where entities are depicted as nodes and relations between them as edges. Complex systems like molecules, social networks and industrial factory systems can be described using the common language of knowledge graphs, allowing the use of graph embedding algorithms to make context-aware predictions in these information-packed environments.
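A conventional (non-spiking) graph-embedding scorer illustrates the idea behind such algorithms; the sketch below uses a TransE-style translational score with hand-set toy vectors, an assumption for illustration rather than the talk's spike-based method. A triple (head, relation, tail) is plausible when head + relation lands near tail in embedding space.

```python
# TransE-style scoring sketch (toy, hand-set 2-D vectors; trained systems learn
# these embeddings from the knowledge graph itself).
embeddings = {
    "paris":   [1.0, 0.0],
    "france":  [1.0, 1.0],
    "berlin":  [2.0, 0.0],
    "germany": [2.0, 1.0],
}
relations = {"capital_of": [0.0, 1.0]}

def score(head, relation, tail):
    """Negative squared distance between head + relation and tail
    (higher means the triple is more plausible)."""
    h, r, t = embeddings[head], relations[relation], embeddings[tail]
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t))

print(score("paris", "capital_of", "france"))   # 0.0: translation fits exactly
print(score("paris", "capital_of", "germany"))  # negative: worse fit
```

Ranking candidate tails by this score is what makes such embeddings usable for context-aware link prediction in a knowledge graph.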

ePoster

Learning an environment model in real-time with core knowledge and closed-loop behaviours

Giulia Lafratta, Bernd Porr, Christopher Chandler, Alice Miller

Bernstein Conference 2024

ePoster

Associative memory of structured knowledge

COSYNE 2022

ePoster

Evolution of neural activity in circuits bridging sensory and abstract knowledge

COSYNE 2022

ePoster

Revealing latent knowledge in cortical networks during goal-directed learning

COSYNE 2022

ePoster

Coordinated geometric representations of learned knowledge in hippocampus and frontal cortex

Manuel Schottdorf, Joshua B. Julian, Jesse C. Kaminsky, Carlos Brody, David W. Tank*

COSYNE 2023

ePoster

Neural mechanisms of relational learning and fast knowledge reassembly

Thomas Miconi, Kenneth Kay

COSYNE 2025

ePoster

Rapid emergence of latent knowledge in the sensory cortex drives learning

Celine Drieu, Ziyi Zhu, Joy Wang, Kishore V. Kuchibhotla, Kylie Fuller, Aaron Wang, Sarah Elnozahy

COSYNE 2025

ePoster

Cascading memory search as a bridge between episodic memories and semantic knowledge

Achiel Fenneman, Claus Lamm

FENS Forum 2024

ePoster

Incorporating new with old knowledge – curricular learning in anterior cingulate cortex

Elisabeth Abs, Roman Boehringer, Benjamin F. Grewe

FENS Forum 2024

ePoster

Measuring integration of novel and pre-existing knowledge in hippocampus and neocortex

Angela Zordan, Jeroen Bos, Bruce McNaughton, Francesco Battaglia

FENS Forum 2024

ePoster

Navigating through the entorhinal cortex: Combining single-cell electrophysiology and RNA sequencing to advance our knowledge on the neuronal architecture

Eliška Waloschková, Attila Ozsvar, Wen-Hsien Hou, Konstantin Khodosevich, Martin Hemberg, Jan Gorodkin, Stefan Seemann, Vanessa Hall

FENS Forum 2024

ePoster

The short and long of motor practice sessions: Equal performance gains but different “how to” knowledge

Gil Leizerowitz, Ran Gabai, Meir Plotnik, Ofer Keren, Avi Karni

FENS Forum 2024