Nicolas Langer
The successful candidate will work on the Synapsis Foundation-funded research project “Real-life activity tracking as pre-screening tool for early stages of Alzheimer disease”. The project aims to investigate whether real-life activity measures derived from wearable technology (e.g., GPS and accelerometer data) are sensitive enough to identify early stages of Alzheimer’s disease. Further, we aim to provide evidence that these real-life activity measures are associated with current AD biomarkers (i.e., elevated amyloid levels and brain atrophy). The student will be expected to disseminate study results in peer-reviewed journals and to supervise Master’s students. The candidate will work in the team of Prof. Nicolas Langer, who is also part of the Neuroscience Center Zurich (ZNZ) (https://www.neuroscience.uzh.ch/en.html), which offers a renowned international PhD programme in Neuroscience. The candidate will work closely with the Institute for Regenerative Medicine (https://www.irem.uzh.ch/en.html), the Geographic Information Systems unit (https://www.geo.uzh.ch/en/units/gis.html), the University of Zurich’s University Research Priority Programme “Dynamics of Healthy Aging” (https://www.dynage.uzh.ch/en.html), and the Department of Computer Science at ETH Zurich (https://www.systems.ethz.ch/). For further information, please visit: https://tinyurl.com/yvyjkvv9
Competing Rhythms: Understanding and Modulating Auditory Neural Entrainment
Low intensity rTMS: age dependent effects, and mechanisms underlying neural plasticity
Neuroplasticity is essential for the establishment and strengthening of neural circuits. Repetitive transcranial magnetic stimulation (rTMS) is commonly used to modulate cortical excitability and shows promise in the treatment of some neurological disorders. Low-intensity rTMS (LI-rTMS), which does not directly elicit action potentials in the stimulated neurons, has also shown some therapeutic effects, and it is important to determine the biological mechanisms underlying the effects of these low-intensity magnetic fields, such as would occur in the regions surrounding the central high-intensity focus of rTMS. Our team has used a focal low-intensity (10 mT) magnetic stimulation approach to address some of these questions and to identify cellular mechanisms. I will present several studies from our laboratory addressing (1) the effects of LI-rTMS on neuronal activity and excitability; and (2) neuronal morphology and post-lesion repair. Together, our results indicate that the effects of LI-rTMS depend upon the stimulation pattern, the age of the animal, and the presence of cellular magnetoreceptors.
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
Unmotivated bias
In this talk, I will explore how social affective biases arise, even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs. group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
Decomposing motivation into value and salience
Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.
Llama 3.1 Paper: The Llama Family of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B-parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe that this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
The multi-phase plasticity supporting the winner effect
Aggression is an innate behavior across animal species. It is essential for competing for food, defending territory, securing mates, and protecting families and oneself. Since initiating an attack requires no explicit learning, the neural circuit underlying aggression is believed to be genetically and developmentally hardwired. Despite being innate, aggression is highly plastic. It is influenced by a wide variety of experiences, particularly winning and losing previous encounters. Numerous studies have shown that winning leads to an increased tendency to fight while losing leads to flight in future encounters. In the talk, I will present our recent findings regarding the neural mechanisms underlying the behavioral changes caused by winning.
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximize discrimination accuracy: the witness’s ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and it has been shown to maximize discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
Closed-loop deep brain stimulation as a neuroprosthetic of dopaminergic circuits – Current evidence and future opportunities; Spatial filtering to enhance signal processing in invasive neurophysiology
On Thursday, February 15th, we will host Victoria Peterson and Julian Neumann. Victoria will tell us about “Spatial filtering to enhance signal processing in invasive neurophysiology”. Besides his scientific presentation on “Closed-loop deep brain stimulation as a neuroprosthetic of dopaminergic circuits – Current evidence and future opportunities”, Julian will give us a glimpse of the person behind the science. The talks will be followed by a shared discussion. Note: the talks will exceptionally be held at 10 AM ET / 4 PM CET. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Use of brain imaging data to improve prescriptions of psychotropic drugs - Examples of ketamine in depression and antipsychotics in schizophrenia
The use of molecular imaging, particularly PET and SPECT, has significantly transformed the treatment of schizophrenia with antipsychotic drugs since the late 1980s. It has offered insights into the links between drug target engagement, clinical effects, and side effects. A therapeutic window for receptor occupancy is established for antipsychotics, yet opinions diverge on the importance of blood levels, with many downplaying their significance. As a result, the role of therapeutic drug monitoring (TDM) as a personalized therapy tool is often underrated. Since molecular imaging of antipsychotics has focused almost entirely on D2-like dopamine receptors and their potential to control positive symptoms, negative symptoms and cognitive deficits are hardly, if at all, investigated. Alternative methods have been introduced, e.g., investigating the correlation between receptor occupancies approximated from blood levels and cognitive measures. Within the domain of antidepressants, and specifically regarding ketamine's efficacy in depression treatment, the association between plasma concentrations and target engagement remains poorly understood. The measurement of AMPA receptors in the human brain has added a new level of comprehension regarding ketamine's antidepressant effects. To ensure precise prescription of psychotropic drugs, it is vital to have a nuanced understanding of how molecular and clinical effects interact. Clinician scientists are tasked with integrating these indispensable pharmacological insights into practice, thereby ensuring a rational and effective approach to the treatment of mental health disorders and signaling a new era of personalized drug therapy.
Algonauts 2023 winning paper journal club (fMRI encoding models)
Algonauts 2023 was a challenge to create the best model for predicting fMRI brain activity in response to a seen image. The Huze team won the competition and released a preprint detailing their approach. This journal club meeting will involve an open discussion of the paper, with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper, also from Huze, that we can discuss: https://arxiv.org/pdf/2307.14021.pdf
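For readers new to fMRI encoding models, the usual recipe is: summarize each image with features from a pretrained vision network, then fit a regularized linear map from features to voxel responses. A minimal sketch on synthetic data (the dimensions, ridge penalty, and linear ground truth are illustrative assumptions, not the Huze team's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 images, each summarized by a 50-dim feature
# vector (in practice these come from a pretrained vision network),
# and 10 voxels whose responses are a noisy linear readout of features.
n_images, n_features, n_voxels = 200, 50, 10
X = rng.standard_normal((n_images, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_images, n_voxels))

# Closed-form ridge regression: W = (X^T X + lam * I)^-1 X^T Y
lam = 1.0  # regularization strength; would be tuned per voxel/subject
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

Y_hat = X @ W
# Per-voxel prediction accuracy (Pearson r on the training set here;
# a real evaluation would use held-out images).
r = [np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
print(min(r))
```

Challenge submissions differ mainly in which features are used and how the regularization is tuned, not in this basic linear-readout structure.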
Immunosuppression for Parkinson's disease - a new therapeutic strategy?
Caroline Williams-Gray is a Principal Research Associate in the Department of Clinical Neurosciences, University of Cambridge, and an honorary consultant neurologist specializing in Parkinson’s disease and movement disorders. She leads a translational research group investigating the clinical and biological heterogeneity of PD, with the ultimate goal of developing more targeted therapies for different Parkinson’s subtypes. Her recent work has focused on the theory that the immune system plays a significant role in mediating the heterogeneity of PD and its progression. Her lab is investigating this using blood- and CSF-based immune markers, PET neuroimaging, and neuropathology in stratified PD cohorts; and she is leading the first randomized controlled trial repurposing a peripheral immunosuppressive drug (azathioprine) to slow the progression of PD.
Euclidean coordinates are the wrong prior for primate vision
The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea and the drop-off of resolution in the periphery, the central 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the inverted face effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
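The key property, that scaling and rotation become shifts in log-polar coordinates, can be verified in a few lines. A numerical illustration only, not the authors' model:

```python
import numpy as np

def log_polar(x, y):
    """Map Cartesian retinal coordinates to (log radius, angle)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return np.log(r), theta

# A point on the retina.
lr, th = log_polar(3.0, 4.0)

# Scaling the image by s shifts log-radius by log(s), leaving angle fixed.
s = 2.0
lr_s, th_s = log_polar(s * 3.0, s * 4.0)
assert np.isclose(lr_s - lr, np.log(s)) and np.isclose(th_s, th)

# Rotating by phi shifts the angle coordinate, leaving log-radius fixed.
phi = 0.5
xr = 3.0 * np.cos(phi) - 4.0 * np.sin(phi)
yr = 3.0 * np.sin(phi) + 4.0 * np.cos(phi)
lr_r, th_r = log_polar(xr, yr)
assert np.isclose(lr_r, lr) and np.isclose(th_r - th, phi)
```

Because a shift-invariant convolutional network is indifferent to where a pattern appears, these two shifts are exactly the invariances the abstract describes; translation, by contrast, warps the log-polar image, which is why multiple fixations are needed.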
Estimating repetitive spatiotemporal patterns from resting-state brain activity data
Repetitive spatiotemporal patterns in resting-state brain activities have been widely observed in various species and regions, such as rat and cat visual cortices. Since they resemble the preceding brain activities during tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how the information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities that reflected a process to integrate the information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a big dataset. - Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65. - Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.
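As a toy illustration of what estimating repetitive spatiotemporal patterns means operationally, one can slide a channels-by-time template along a recording and score the match at each lag. The actual method in the cited papers is more sophisticated; the sizes, noise level, and template-matching scheme below are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t, win = 5, 1000, 20  # channels, timepoints, pattern length

# Embed the same 5-channel, 20-sample pattern at known times in noise.
pattern = rng.standard_normal((n_ch, win))
data = 0.2 * rng.standard_normal((n_ch, n_t))
true_onsets = [100, 400, 700]
for t0 in true_onsets:
    data[:, t0:t0 + win] += pattern

# Score each lag by correlating the data window with the template.
scores = np.array([
    np.corrcoef(pattern.ravel(), data[:, t:t + win].ravel())[0, 1]
    for t in range(n_t - win)
])
detected = sorted(np.argsort(scores)[-3:])  # lags with the best match
print(detected)
```

In real MEG/EEG data the pattern itself is unknown, so it must be estimated jointly with its occurrence times, which is the harder problem the talk addresses.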
Beyond Volition
Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions, and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich scientific understanding of volition. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. 
Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives a key role in mediating between understanding and planning on the one hand, and instrumental action on the other hand. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition can relate to individual autonomy and responsibility.
Development of an open-source femtosecond fiber laser system for multiphoton microscopy
This talk will present a low-cost protocol for fabricating an easily constructed femtosecond (fs) fiber laser system suitable for routine multiphoton microscopy (1060–1080 nm, 1 W average power, 70 fs pulse duration, 30–70 MHz repetition rate). Concepts well known in the laser physics community that are essential to proper laser operation, but generally obscure to biophysicists and biomedical engineers, will be clarified. The parts list (~US$13K), the equipment list (~US$40K+), and the intellectual investment needed to build the laser will be described. A goal of the presentation will be to engage with the audience to discuss trade-offs associated with a custom-built fs fiber laser versus purchasing a commercial system. I will also touch on my research group’s plans to further develop this custom laser system for multiplexed cancer imaging as well as recent developments in the field that promise even higher performance fs fiber lasers for approximately the same cost and ease of construction.
Cognition in the Wild
What do nonhuman primates know about each other and their social environment, how do they allocate their attention, and what are the functional consequences of social decisions in natural settings? Addressing these questions is crucial to home in on the co-evolution of cognition, social behaviour and communication, and ultimately the evolution of intelligence in the primate order. I will present results from field experimental and observational studies on free-ranging baboons, which tap into the cognitive abilities of these animals. Baboons are particularly valuable in this context as different species reveal substantial variation in social organization and degree of despotism. Field experiments revealed considerable variation in the allocation of social attention: while the competitive chacma baboons were highly sensitive to deviations from the social order, the highly tolerant Guinea baboons revealed a confirmation bias. This bias may be a result of the high gregariousness of the species, which puts a premium on ignoring social noise. Variation in despotism clearly impacted the use of signals to regulate social interactions. For instance, male-male interactions in chacma baboons mostly comprised dominance displays, while Guinea baboon males evolved elaborate greeting rituals that serve to confirm group membership and test social bonds. Strikingly, the structure of signal repertoires does not differ substantially between different baboon species. In conclusion, the motivational disposition to engage in affiliation or aggressiveness appears to be more malleable during evolution than structural elements of the behavioral repertoire; this insight is crucial for understanding the dynamics of social evolution.
Integrative Neuromodulation: from biomarker identification to optimizing neuromodulation
Why do we make decisions impulsively, blinded in an emotionally rash moment? Or stay caught in the same repetitive suboptimal loop, avoiding fears or rushing headlong towards illusory rewards? These cognitive constructs underlying self-control and compulsive behaviours, and their modulation by emotion or incentives, are relevant dimensionally across healthy individuals and hijacked across disorders of addiction, compulsivity and mood. My lab focuses on identifying theory-driven modifiable biomarkers of these cognitive constructs, with the ultimate goal of optimizing and developing novel means of neuromodulation. Here I will provide a few examples of my group’s recent work to illustrate this approach. I will describe a series of recent studies on intracranial physiology and acute stimulation focusing on risk taking and emotional processing. This talk highlights the subthalamic nucleus, a common target for deep brain stimulation in Parkinson’s disease and obsessive-compulsive disorder. I will further describe recent translational work in non-invasive neuromodulation. Together, these examples illustrate the lab’s approach of identifying modifiable biomarkers and optimizing neuromodulation.
Multisensory processing of anticipatory and consummatory food cues
Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
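The "dendritic tree as a multi-layer network" idea can be made concrete: each branch pools its own disjoint block of synapses through a nonlinearity, and the soma combines the branch outputs. A minimal sketch, where the sizes and the sigmoid nonlinearity are illustrative assumptions rather than the speakers' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_branches, syn_per_branch = 8, 16  # 128 synapses total

# The tree constraint: each dendritic branch sees only its own block of
# synapses, with no weight sharing or cross-branch connections.
w_branch = rng.standard_normal((n_branches, syn_per_branch))
w_soma = rng.standard_normal(n_branches)

def dendritic_neuron(x):
    """Two-layer 'single neuron': branch nonlinearities, then somatic sum."""
    x = x.reshape(n_branches, syn_per_branch)
    branch_out = sigmoid((w_branch * x).sum(axis=1))  # per-branch activation
    return sigmoid(w_soma @ branch_out)               # somatic output

x = rng.standard_normal(n_branches * syn_per_branch)
y = dendritic_neuron(x)
assert 0.0 < y < 1.0
```

A point-neuron model would instead compute `sigmoid(w @ x)` in one step; the question the abstract raises is how much computational strength the intermediate branch layer retains once biological constraints like this block structure are imposed.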
Gut food cravings? How gut signals control appetite and metabolism
Gut-derived signals regulate metabolism, appetite, and behaviors important for mental health. We have performed a large-scale multidimensional screen to identify gut hormones and nutrient-sensing mechanisms in the intestine that regulate metabolism and behavior in the fruit fly Drosophila. We identified several gut hormones that affect fecundity, stress responses, metabolism, feeding, and sleep behaviors, many of which seem to act sex-specifically. We show that in response to nutrient intake, the enteroendocrine cells (EECs) of the adult Drosophila midgut release hormones that act via inter-organ relays to coordinate metabolism and feeding decisions. These findings suggest that crosstalk between the gut and other tissues regulates food choice according to metabolic needs, providing insight into how the intestine processes nutritional inputs and into the gut-derived signals that relay information regulating nutrient-specific hungers to maintain metabolic homeostasis.
Training Dynamic Spiking Neural Network via Forward Propagation Through Time
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons into SNNs that are trained with Forward Propagation Through Time (FPTT) and demonstrate that LTC-SNNs trained this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, like the Add Task and the DVS-gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state-of-the-art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs end-to-end at network sizes and complexities that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable for the first time the training of large-scale, complex spiking neural network architectures online and on long temporal sequences.
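The gist of a liquid (input-dependent) time constant can be sketched as a leaky integrator whose decay is gated by the current input, loosely analogous to an LSTM forget gate. This is an illustration of the gating idea only, not the paper's exact LTC-SN model or FPTT training; all weights and sizes below are arbitrary demo choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_steps = 4, 100
w_in = np.ones(n_in)                # fixed excitatory weights for the demo
w_tau = rng.standard_normal(n_in)   # weights gating the time constant
threshold = 1.0

v, spikes = 0.0, 0
for _ in range(n_steps):
    x = rng.standard_normal(n_in) + 1.0   # noisy positive drive
    decay = sigmoid(w_tau @ x)            # input-dependent leak, in (0, 1)
    v = decay * v + (1.0 - decay) * (w_in @ x)  # gated membrane update
    if v > threshold:                     # spike and reset
        spikes += 1
        v = 0.0
print(spikes)
```

In a fixed-time-constant LIF neuron, `decay` would be a constant; making it a learned function of the input is what gives the neuron LSTM-like control over how quickly it forgets, which the abstract credits for the improved long-sequence performance.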
Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings
Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction to observe changes in performance across three levels of relational complexity: perceptual, analogy, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks, and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher-performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.
Chemistry of the adaptive mind: lessons from dopamine
The human brain faces a variety of computational dilemmas, including the flexibility/stability, speed/accuracy, and labor/leisure tradeoffs. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation, and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how the effects of the most commonly used psychostimulant, methylphenidate, on learning, Pavlovian control, and effortful instrumental control depend, respectively, on fluctuations in current environmental volatility, on individual differences in working memory capacity, and on opportunity cost.
PET imaging in brain diseases
Talk 1. PET based biomarkers of treatment efficacy in temporal lobe epilepsy A critical aspect of drug development involves identifying robust biomarkers of treatment response for use as surrogate endpoints in clinical trials. However, these biomarkers also have the capacity to inform mechanisms of disease pathogenesis and therapeutic efficacy. In this webinar, Dr Bianca Jupp will report on a series of studies using the GABAA PET ligand, [18F]-Flumazenil, to establish biomarkers of treatment response to a novel therapeutic for temporal lobe epilepsy, identifying affinity at this receptor as a key predictor of treatment outcome. Dr Bianca Jupp is a Research Fellow in the Department of Neuroscience, Monash University and Lead PET/CT Scientist at the Alfred Research Alliance–Monash Biomedical Imaging facility. Her research focuses on neuroimaging and its capacity to inform the neurobiology underlying neurological and neuropsychiatric disorders. Talk 2. The development of a PET radiotracer for reparative microglia Imaging of neuroinflammation is currently hindered by the technical limitations associated with TSPO imaging. In this webinar, Dr Lucy Vivash will discuss the development of PET radiotracers that specifically image reparative microglia through targeting the receptor kinase MerTK. This includes medicinal chemistry design and testing, radiochemistry, and in vitro and in vivo testing of lead tracers. Dr Lucy Vivash is a Research Fellow in the Department of Neuroscience, Monash University. Her research focuses on the preclinical development and clinical translation of novel PET radiotracers for the imaging of neurodegenerative diseases.
The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)
Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. 
The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.
Growing a world-class precision medicine industry
Monash Biomedical Imaging is part of the new $71.2 million Australian Precision Medicine Enterprise (APME) facility, which will deliver large-scale development and manufacturing of precision medicines and theranostic radiopharmaceuticals for industry and research. A key feature of the APME project is a high-energy cyclotron with multiple production clean rooms, which will be located on the Monash Biomedical Imaging (MBI) site in Clayton. This strategic co-location will facilitate radiochemistry, PET and SPECT research and clinical use of theranostic (therapeutic and diagnostic) radioisotopes produced on-site. In this webinar, MBI’s Professor Gary Egan and Dr Maggie Aulsebrook will explain how the APME will secure Australia’s supply of critical radiopharmaceuticals, build a globally competitive Australian manufacturing hub, and train scientists and engineers for the Australian workforce. They will cover the APME’s state-of-the-art 30 MeV and 18-24 MeV cyclotrons and radiochemistry facilities, as well as the services that will be accessible to students, scientists, clinical researchers, and pharmaceutical companies in Australia and around the world. The APME is a collaboration between Monash University, Global Medical Solutions Australia, and Telix Pharmaceuticals. Professor Gary Egan is Director of Monash Biomedical Imaging, Director of the ARC Centre of Excellence for Integrative Brain Function and a Distinguished Professor at the Turner Institute for Brain and Mental Health, Monash University. He is also lead investigator of the Victorian Biomedical Imaging Capability, and Deputy Director of the Australian National Imaging Facility. Dr Maggie Aulsebrook obtained her PhD in Chemistry at Monash University and specialises in the development and clinical translation of radiopharmaceuticals. She has led the development of several investigational radiopharmaceuticals for first-in-human application. Maggie leads the Radiochemistry Platform at Monash Biomedical Imaging.
Synthetic and natural images unlock the power of recurrency in primary visual cortex
During perception, the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably, this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repeated presentations of a large randomized set of images lead to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses already at the earliest stages of cortical processing.
Meta-learning synaptic plasticity and memory addressing for continual familiarity detection
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task where a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously, and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning, and demonstrates an effective application of machine learning for neuroscience discovery.
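As a toy illustration of the anti-Hebbian mechanism the abstract describes (this is not the authors' meta-learned network; the single-readout architecture, learning rate, and update rule below are my own simplifying assumptions), weakening weights along each presented pattern makes repeated images evoke smaller responses, mirroring repetition suppression:

```python
import numpy as np

rng = np.random.default_rng(0)

def familiarity_signal(w, x):
    # Readout of a detector neuron; stored (familiar) images drive it
    # less strongly, i.e. repetition suppression.
    return float(w @ x)

def anti_hebbian_update(w, x, lr=0.1):
    # Anti-Hebbian rule: weaken weights in proportion to the product of
    # input and output, so the next presentation evokes a smaller response.
    y = w @ x
    return w - lr * y * x

d = 50
w = rng.normal(0.0, 1.0, d)
novel = rng.normal(size=d)
novel /= np.linalg.norm(novel)

r_first = familiarity_signal(w, novel)
for _ in range(5):  # repeated exposures to the same image
    w = anti_hebbian_update(w, novel)
r_repeat = familiarity_signal(w, novel)

# The response to the now-familiar image is suppressed.
print(abs(r_repeat) < abs(r_first))
```

Because the update shrinks only the weight component along the presented pattern, responses to unrelated (novel) images are largely unaffected, which is what makes the suppressed response usable as a familiarity signal.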
Neural Representations of Social Homeostasis
How does our brain rapidly determine if something is good or bad? How do we know our place within a social group? How do we know how to behave appropriately in dynamic environments with ever-changing conditions? The Tye Lab is interested in understanding how neural circuits important for driving positive and negative motivational valence (seeking pleasure or avoiding punishment) are anatomically, genetically and functionally arranged. We study the neural mechanisms that underlie a wide range of behaviors ranging from learned to innate, including social, feeding, reward-seeking and anxiety-related behaviors. We have also become interested in “social homeostasis” -- how our brains establish a preferred set-point for social contact, and how this maintains stability within a social group. How are these circuits interconnected with one another, and how are competing mechanisms orchestrated on a neural population level? We employ optogenetic, electrophysiological, electrochemical, pharmacological and imaging approaches to probe these circuits during behavior.
Brain and Mind: Who is the Puppet and who the Puppeteer?
If the mind controls the brain, then there is free will and its corollaries, dignity and responsibility. You are king in your skull-sized kingdom and the architect of your destiny. If, on the other hand, the brain controls the mind, an incendiary conclusion follows: There can be no free will, no praise, no punishment and no purgatory. In this webinar, Professor George Paxinos will discuss his highly respected work on the construction of human and experimental animal brain atlases. He has discovered 94 brain regions, 64 homologies and published 58 books. His first book, The Rat Brain in Stereotaxic Coordinates, is the most cited publication in neuroscience and, for three decades, the third most cited book in science. Professor Paxinos will also present his recently published novel, A River Divided, which was 21 years in the making. Neuroscience principles were used in the formation of charters, such as those related to the mind, soul, free will and consciousness. Environmental issues are at the heart of the novel, including the question of whether the brain is the right ‘size’ for survival. Professor Paxinos studied at Berkeley, McGill and Yale and is now Scientia Professor of Medical Sciences at Neuroscience Research Australia and The University of New South Wales in Sydney.
The Brain Conference (the Guarantors of Brain)
Join the Brain Conference on 24-25 February 2022 for the opportunity to hear from neurology’s leading scientists and clinicians. The two-day virtual programme features clinical teaching talks and research presentations from expert speakers including neuroscientist Professor Gina Poe, and the winner of the 2021 Brain Prize, neurologist Professor Peter Goadsby. Tickets for The Brain Conference 2022 cost just £30, but register with promotional code BRAINCONEM20 for a discounted rate of £25.
Towards model-based control of active matter: active nematics and oscillator networks
The richness of active matter's spatiotemporal patterns continues to capture our imagination. Shaping these emergent dynamics into pre-determined forms of our choosing is a grand challenge in the field. To complicate matters, multiple dynamical attractors can coexist in such systems, leading to initial condition-dependent dynamics. Consequently, non-trivial spatiotemporal inputs are generally needed to access these states. Optimal control theory provides a general framework for identifying such inputs and represents a promising computational tool for guiding experiments and interacting with various systems in soft active matter and biology. As an exemplar, I first consider an extensile active nematic fluid confined to a disk. In the absence of control, the system produces two topological defects that perpetually circulate. Optimal control identifies a time-varying active stress field that restructures the director field, flipping the system to its other attractor that rotates in the opposite direction. As a second, analogous case, I examine a small network of coupled Belousov-Zhabotinsky chemical oscillators that possesses two dominant attractors, two wave states of opposing chirality. Optimal control similarly achieves the task of attractor switching. I conclude with a few forward-looking remarks on how the same model-based control approach might come to bear on problems in biology.
From bench to clinic – Translating fundamental neuroscience into real-life healthcare practices, and developing nationally recognised life science companies
Dr. Ryan C.N. D’Arcy is a Canadian neuroscientist, researcher, innovator and entrepreneur. Dr. D'Arcy co-founded HealthTech Connex Inc. and serves as President and Chief Scientific Officer. HealthTech Connex translates neuroscience advances into health technology breakthroughs. D'Arcy is most known for coining the term "brain vital signs" and for leading the research and development of the brain vital signs framework. Dr. D’Arcy also holds a BC Leadership Chair in Medical Technology, is a full Professor at Simon Fraser University, and a member of the DM Centre for Brain Health at the University of British Columbia. He has published more than 260 academic works, attracted more than $85 million CAD in competitive research and innovation funding, and been recognized through numerous awards and distinctions. Please join us for an exciting virtual talk with Dr. D'Arcy, who will speak on some of the current research he is involved in, how he is translating this research into real-life applications, and the development of HealthTech Connex Inc.
JAK/STAT regulation of the transcriptomic response during epileptogenesis
Temporal lobe epilepsy (TLE) is a progressive disorder mediated by pathological changes in molecular cascades and neural circuit remodeling in the hippocampus, resulting in increased susceptibility to spontaneous seizures and cognitive dysfunction. Targeting these cascades could prevent or reverse symptom progression and has the potential to provide viable disease-modifying treatments that could reduce the portion of TLE patients (>30%) not responsive to current medical therapies. Changes in GABA(A) receptor subunit expression have been implicated in the pathogenesis of TLE, and the Janus Kinase/Signal Transducer and Activator of Transcription (JAK/STAT) pathway has been shown to be a key regulator of these changes. The JAK/STAT pathway is known to be involved in inflammation and immunity, and to be critical for neuronal functions such as synaptic plasticity and synaptogenesis. Our laboratories have shown that a STAT3 inhibitor, WP1066, could greatly reduce the number of spontaneous recurrent seizures (SRS) in an animal model of pilocarpine-induced status epilepticus (SE). This suggests promise for JAK/STAT inhibitors as disease-modifying therapies; however, the potential adverse effects of systemic or global CNS pathway inhibition limit their use. Development of more targeted therapeutics will require a detailed understanding of JAK/STAT-induced epileptogenic responses in different cell types. To this end, we have developed a new transgenic line in which dimer-dependent STAT3 signaling is functionally knocked out (fKO) by tamoxifen-induced Cre expression specifically in forebrain excitatory neurons (eNs) via the Calcium/Calmodulin Dependent Protein Kinase II alpha (CamK2a) promoter.
Most recently, we have demonstrated that STAT3 KO in excitatory neurons (eNSTAT3fKO) markedly reduces the progression of epilepsy (SRS frequency) in the intrahippocampal kainate (IHKA) TLE model and protects mice from kainic acid (KA)-induced memory deficits as assessed by Contextual Fear Conditioning. Using data from bulk hippocampal tissue RNA-sequencing, we further discovered a transcriptomic signature for the IHKA model that contains a substantial number of genes, particularly in synaptic plasticity and inflammatory gene networks, that are down-regulated after KA-induced SE in wild-type but not eNSTAT3fKO mice. Finally, we will review data from other models of brain injury that lead to epilepsy, such as TBI, in which activation of the JAK/STAT pathway may contribute to epilepsy development.
Wiring Minimization of Deep Neural Networks Reveals Conditions in which Multiple Visuotopic Areas Emerge
The visual system is characterized by multiple mirrored visuotopic maps, with each repetition corresponding to a different visual area. In this work, we explore whether such visuotopic organization can emerge as a result of minimizing the total wire length between neurons connected in a deep hierarchical network. Our results show that networks with purely feedforward connectivity typically result in a single visuotopic map, and in certain cases no visuotopic map emerges. However, when we modify the network by introducing lateral connections, with sufficient lateral connectivity among neurons within layers, multiple visuotopic maps emerge, where some connectivity motifs yield mirrored alternations of visuotopic maps, a signature of biological visual system areas. These results demonstrate that different connectivity profiles have different emergent organizations under the minimum total wire length hypothesis, and highlight that characterizing the large-scale spatial organization of tuning properties in a biological system might also provide insights into the underlying connectivity.
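For readers unfamiliar with the objective being minimized, the total wire length of a candidate neuron placement can be computed as below (a generic sketch; the study's actual network sizes, connectivity motifs, and optimization procedure are not specified here):

```python
import numpy as np

def total_wire_length(positions, adjacency):
    # Total wiring cost: summed Euclidean length of all connections.
    # positions: (n, 2) array of neuron coordinates.
    # adjacency: (n, n) symmetric 0/1 connectivity matrix.
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return 0.5 * np.sum(adjacency * d)  # halve so each edge counts once

# Toy example: three neurons on a line with chain connectivity 0-1-2.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(total_wire_length(pos, A))  # -> 2.0 (two unit-length edges)
```

Minimizing this quantity over the positions, with connectivity held fixed, is what yields (or fails to yield) the visuotopic layouts discussed in the abstract.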
Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?
While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Learning through direct interaction with humans therefore provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss my recent research on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.
NMC4 Short Talk: Rank similarity filters for computationally-efficient machine learning on high dimensional data
Real-world datasets commonly contain nonlinearly separable classes, requiring nonlinear classifiers. However, these classifiers are less computationally efficient than their linear counterparts. This inefficiency wastes energy, resources, and time. Inspired by the efficiency of the brain, we created a novel type of computationally efficient artificial neural network (ANN) called Rank Similarity Filters. They can be used to both transform and classify nonlinearly separable datasets with many datapoints and dimensions. The weights of the filters are set using the rank orders of features in a datapoint, or optionally the 'confusion'-adjusted ranks between features (determined from their distributions in the dataset). The activation strength of a filter determines its similarity to other points in the dataset, a measure based on cosine similarity. The activation of many Rank Similarity Filters transforms samples into a new nonlinear space suitable for linear classification (the Rank Similarity Transform, RST). We additionally used this method to create the nonlinear Rank Similarity Classifier (RSC), a fast and accurate multiclass classifier, and the nonlinear Rank Similarity Probabilistic Classifier (RSPC), an extension to the multilabel case. We evaluated the classifiers on multiple datasets; RSC is competitive with existing classifiers but with superior computational efficiency. Code for RST, RSC and RSPC is open source and was written in Python using the popular scikit-learn framework to make it easily accessible (https://github.com/KatharineShapcott/rank-similarity). In future extensions the algorithm can be applied to hardware suitable for the parallelization of an ANN (GPU) and a Spiking Neural Network (neuromorphic computing) with corresponding performance gains. This makes Rank Similarity Filters a promising biologically inspired solution to the problem of efficient analysis of nonlinearly separable data.
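A minimal sketch of the rank-based idea (the released scikit-learn-compatible package differs in detail; the filter construction and similarity measure here are my simplifications, not the authors' implementation):

```python
import numpy as np

def rank_filter(x):
    # Filter weights are the rank order of the features in a datapoint:
    # the smallest feature gets rank 0, the largest rank d-1.
    return np.argsort(np.argsort(x)).astype(float)

def rank_similarity_transform(X, filters):
    # Each sample's activation on each filter is the cosine similarity
    # between the sample's feature ranks and the filter's stored ranks.
    R = np.vstack([rank_filter(x) for x in X])
    Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
    Fn = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    return Rn @ Fn.T  # (n_samples, n_filters) nonlinear feature space

rng = np.random.default_rng(1)
# Two classes whose feature *orderings* differ, plus small noise.
a = np.array([0.1, 0.5, 2.0, 3.0])
b = np.array([3.0, 2.0, 0.5, 0.1])
X = np.vstack([a + 0.01 * rng.normal(size=4) for _ in range(5)] +
              [b + 0.01 * rng.normal(size=4) for _ in range(5)])
filters = np.vstack([rank_filter(a), rank_filter(b)])
Z = rank_similarity_transform(X, filters)
# Class-a samples activate filter 0 most strongly; class-b, filter 1.
print(np.argmax(Z, axis=1))  # -> [0 0 0 0 0 1 1 1 1 1]
```

Because ranks are invariant to monotone rescaling of the features, the transform is robust to amplitude variation; a linear classifier can then be trained on `Z`.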
NMC4 Keynote: A network perspective on cognitive effort
Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
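For reference, the minimum-energy quantity from linear network control theory that this perspective builds on can be written as follows (standard control-theoretic notation; not necessarily the speaker's own formulation):

```latex
% Linear dynamics over the connectome A, with input matrix B selecting
% the controlled regions:
\dot{x}(t) = A\,x(t) + B\,u(t)
% Minimum control energy to steer the brain state from x(0)=x_0 to
% x(T)=x_f along these structurally constrained dynamics:
E^{*} = \min_{u} \int_{0}^{T} \lVert u(t) \rVert^{2}\,dt
      = \bigl(x_f - e^{AT}x_0\bigr)^{\top} W_T^{-1} \bigl(x_f - e^{AT}x_0\bigr),
\qquad
W_T = \int_{0}^{T} e^{At} B B^{\top} e^{A^{\top} t}\,dt
% W_T is the controllability Gramian; states x_f that are poorly aligned
% with its dominant eigenvectors are energetically costly to reach.
```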
Spontaneous activity competes with externally evoked responses in sensory cortex
The interaction between spontaneously and externally evoked neuronal activity is fundamental for a functional brain. Increasing evidence suggests that bursts of high-power oscillations in the 15-30 Hz beta-band represent activation of resting state networks and can mask perception of external cues. Yet a demonstration of the effect of beta-power modulation on perception in real time is missing, and little is known about the underlying mechanism. In this talk I will present the methods we developed to fill this gap together with our recent results. We used a closed-loop stimulus-intensity adjustment system based on online burst-occupancy analyses in rats involved in a forepaw vibrotactile detection task. We found that the masking influence of burst-occupancy on perception can be counterbalanced in real time by adjusting the vibration amplitude. Offline analysis of firing-rates and local field potentials across cortical layers and frequency bands confirmed that beta-power in the somatosensory cortex anticorrelated with sensory evoked responses. Mechanistically, bursts in all bands were accompanied by transient synchronization of cell assemblies, but only beta-bursts were followed by a reduction of firing-rate. Our closed-loop approach reveals that spontaneous beta-bursts reflect a dynamic state that competes with external stimuli.
Computational Models of Fine-Detail and Categorical Information in Visual Working Memory: Unified or Separable Representations?
When we remember a stimulus, we rarely maintain a full-fidelity representation of the observed item. Our working memory instead maintains a mixture of the observed feature values and categorical/gist information. I will discuss evidence from computational models supporting a mix of categorical and fine-detail information in working memory. Having established the need for two memory formats in working memory, I will discuss whether categorical and fine-detail information for a stimulus are represented separately or as a single unified representation. Computational models of these two potential cognitive structures make differing predictions about the pattern of responses in visual working memory recall tests. The present study required participants to remember the orientation of stimuli for later reproduction. The pattern of responses is used to test the competing representational structures and to quantify the relative amounts of fine-detail and categorical information maintained. The effects of set size, encoding time, serial order, and response order on memory precision, categorical information, and guessing rates are also explored. (This is a 60 min talk.)
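Models of this kind are typically expressed as mixtures over circular response errors; a minimal likelihood sketch (the component densities and parameter values below are illustrative assumptions, not the study's fitted model):

```python
import numpy as np

def vonmises_pdf(x, mu, kappa):
    # Circular normal density on [-pi, pi); np.i0 is the modified
    # Bessel function of the first kind, order 0.
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * np.i0(kappa))

def mixture_likelihood(err, err_cat, p_fine, p_cat, kappa_f, kappa_c):
    # Three response sources: fine-detail memory centred on the target,
    # categorical memory centred on the nearest category prototype,
    # and uniform random guessing.
    p_guess = 1.0 - p_fine - p_cat
    return (p_fine * vonmises_pdf(err, 0.0, kappa_f)
            + p_cat * vonmises_pdf(err_cat, 0.0, kappa_c)
            + p_guess / (2 * np.pi))

# Hypothetical trial: response 5 deg from the target orientation and
# 40 deg from the category prototype.
err = np.deg2rad(5.0)
err_cat = np.deg2rad(40.0)
L = mixture_likelihood(err, err_cat, p_fine=0.6, p_cat=0.25,
                       kappa_f=8.0, kappa_c=4.0)
print(L > 0.0)
```

Fitting the mixture weights and concentrations to recall data is what lets such models quantify the relative contributions of fine-detail and categorical information, and unified versus separable representations predict different couplings between the two components.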
NeurotechEU Summit
Our first NeurotechEU Summit will be fully digital and will take place on November 22nd from 09:00 to 17:00 (CET). The final programme can be downloaded here. Hosted by the Karolinska Institutet, the summit will provide you with an overview of our actions and achievements from the last year and introduce the priorities for the next year. You will also have the opportunity to attend the finals of the Three Minute Thesis (3MT) competition organized by the Synapses Student Society, the student chapter of NeurotechEU. Good luck to all the finalists: Lynn Le, Robin Noordhof, Adriana Gea González, Juan Carranza Valencia, Lea van Husen, Guoming (Tony) Man, Lilly Pitshaporn Leelaarporn, Cemre Su, Kaya Keleş, Ramazan Tarık Türksoy, Cristiana Tisca, Sara Bandiera, Irina Maria Vlad, Iulia Vadan, Borbála László, and David Papp! Don’t miss our keynote lecture, success stories and interactive discussions with Ms Vanessa Debiais Sainton (Head of Higher Education Unit, European Commission), Prof. Staffan Holmin (Karolinska Institutet), Dr Mohsen Kaboli (BMW Group, member of the NeurotechEU Associates Advisory Committee), and Prof. Peter Hagoort (Max Planck Institute for Psycholinguistics, Donders Institute). Would you like to use this opportunity to network? Please join our informal breakout sessions on Wonder.me at 11:40 CET. You will be able to move between discussion groups within three sessions: (1) NeurotechEU ecosystem, with the Associates Advisory Committee: synergies in cross-sectoral initiatives; (2) Education next: trans-European education and the European Universities Initiatives, lessons learned thus far; (3) Equality, diversity and inclusion at NeurotechEU: removing access barriers to education and developing a working, learning, and social environment where everyone is respected and valued. You can register for this free event at www.crowdcast.io/e/neurotecheu-summit
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
From aura to neuroinflammation: Has imaging resolved the puzzle of migraine pathophysiology?
In this talk, I will present data from the imaging studies we have been conducting for the past 20 years to shed light on migraine pathophysiology, using methods ranging from anatomical and functional MRI to positron emission tomography.
Representation transfer and signal denoising through topographic modularity
To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. 
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
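The routing-versus-blurring intuition at the heart of this abstract can be sketched in a miniature rate model (the sizes, noise level, and linear mixing rule below are illustrative assumptions, not the paper's spiking circuit): pooling within a map averages out private noise, and sharp map-to-map projections preserve which map carries the stimulus, while diffuse projections blur it away.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(sharpness, n_maps=4, n_per_map=50, n_layers=5, noise_sd=0.5):
    """Pass a stimulus targeting map 0 through feed-forward layers.

    `sharpness` in [0, 1]: weight with which each map's pooled activity is
    routed to the matching map in the next layer; the remainder is spread
    uniformly across all maps (a linear stand-in for topographic precision).
    """
    rates = rng.normal(0.0, noise_sd, (n_maps, n_per_map))
    rates[0] += 1.0                        # stimulus drives map 0
    for _ in range(n_layers):
        pooled = rates.mean(axis=1)        # pooling averages out private noise
        drive = sharpness * pooled + (1 - sharpness) * pooled.mean()
        rates = drive[:, None] + rng.normal(0.0, noise_sd, (n_maps, n_per_map))
    return int(np.argmax(rates.mean(axis=1)))   # which map "won" at readout

def accuracy(sharpness, trials=200):
    """Fraction of trials in which the stimulated map survives propagation."""
    return np.mean([propagate(sharpness) == 0 for _ in range(trials)])

acc_sharp, acc_diffuse = accuracy(0.9), accuracy(0.1)
print(acc_sharp, acc_diffuse)   # sharp topography preserves the stimulated map
```

The full model's critical transition and inhibition-dominated amplification have no counterpart in this linear sketch, which only captures the basic trade-off between faithful routing and cross-map blurring.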
Event-based Backpropagation for Exact Gradients in Spiking Neural Networks
Gradient-based optimization powered by the backpropagation algorithm proved to be the pivotal method in the training of non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise for efficient processing of real-world sensory data by communicating using discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our presented algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp on the Yin-Yang and MNIST datasets with either a spike time-based or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.
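The key ingredient described above, spike-time derivatives obtained from the implicit function theorem at the threshold crossing, can be illustrated on a single neuron with a closed-form spike time. The constant-input toy model and its parameters below are assumptions for illustration, not the full EventProp algorithm:

```python
import math

# Toy model: a LIF neuron driven by a constant input current scaled by
# weight w has membrane potential V(t) = w * (1 - exp(-t / tau)) and
# spikes when V reaches theta (requires w > theta).
tau, theta = 10.0, 1.0   # illustrative parameters

def spike_time(w):
    """First threshold crossing, available here in closed form."""
    return -tau * math.log(1.0 - theta / w)

def dspike_dw(w):
    """Implicit function theorem at the crossing: dt*/dw = -V_w / V_t."""
    t = spike_time(w)
    V_w = 1.0 - math.exp(-t / tau)          # partial dV/dw at t*
    V_t = (w / tau) * math.exp(-t / tau)    # partial dV/dt at t* (nonzero: no grazing)
    return -V_w / V_t

w, eps = 1.5, 1e-6
numeric = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
print(dspike_dw(w), numeric)   # exact and finite-difference estimates agree
```

When the crossing becomes tangential (w approaching theta from above), V_t at the crossing goes to zero and the derivative diverges, which mirrors the null set excluded in the abstract.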
The brain control of appetite: Can an old dog teach us new tricks?
At its core, obesity results from eating more than you burn; it is physics. The more complex question is why some people eat more than others. Differences in our genetic make-up mean some of us are slightly more hungry all the time and so eat more than others. We now know that the genetics of body weight, with obesity at one end of the spectrum, is in fact the genetics of appetite control. In contrast to the prevailing view, body weight is not a choice. People who are obese are not bad or lazy; rather, they are fighting their biology.
Merging of cues and hunches by the mouse cortex
Many everyday decisions are based on both external cues and internal hunches. How does the brain put these together? We addressed this question in mice trained to make decisions based on sensory stimuli and on past events. While mice made these decisions, we causally probed the roles of cortical areas and recorded from thousands of neurons throughout the brain, with an emphasis on frontal cortex. The results are not what we expected based on textbook notions of how the brain works. This talk is based on work led by Nick Steinmetz, Peter Zatka-Haas, Armin Lak, and Pip Coen, in the laboratory I share with Kenneth Harris.
The attentional requirement of unconscious processing
The tight relationship between attention and conscious perception has been extensively researched in the past decades. However, whether attentional modulation extends to unconscious processes has remained largely unknown, particularly for abstract, high-level processing. I will present a recent study in which we used the Stroop paradigm to show that task load gates unconscious semantic processing. In a series of psychophysical experiments, unconscious word semantics influenced conscious task performance only under low task load, not under high task load. Intriguingly, with enough practice in the high-load condition, the unconscious effect reemerged. These findings suggest a competition for attentional resources between unconscious and conscious processes, challenging the automaticity account of unconscious processing.
Three Minute Thesis (3MT) Competition: Pre-selection event
On behalf of NeurotechEU, we are pleased to invite you to participate in the Summit 2021 pre-selection event on October 23, 2021. The event will be held online via Crowdcast.io and is organized by NeurotechEU, the European University of Brain and Technology. Students from across NeurotechEU have the chance to present their research (bachelor’s thesis, Master’s thesis, PhD, post-doc…) following the Three Minute Thesis (3MT) methodology from the University of Queensland: https://threeminutethesis.uq.edu.au/resources/3mt-competitor-guide. There will be one session per university, at the end of which two semi-finalists will be selected from each university. They will compete in the Summit 2021 on November 22nd. Prizes for the winners will be decided by audience vote.
Will it keep me awake? Common caffeine intake habits and sleep in real life situations
Daily caffeine consumption and chronic sleep restriction are highly prevalent in society. It is well established that acute caffeine intake under controlled conditions enhances vigilance and promotes wakefulness but can also delay sleep initiation and reduce electroencephalographic (EEG) markers of sleep intensity, particularly in susceptible individuals. To investigate whether these effects are also present during chronic consumption of coffee/caffeine, we recently conducted several complementary studies. We examined whether repeated coffee intake in dose and timing mimicking ‘real world’ habits maintains simple and complex attentional processes during chronic sleep restriction, such as during a busy work week. We found in genetically caffeine-sensitive individuals that regular coffee (300 mg caffeine/day) benefits most attentional tasks for 3-4 days when compared to decaffeinated coffee. Genetic variants were also used in the population-based HypnoLaus cohort to investigate whether habitual caffeine consumption causally affects time to fall asleep, number of awakenings during sleep, and EEG-derived sleep intensity. The multi-level statistical analyses consistently showed that sleep quality was virtually unaffected when >3 caffeine-containing beverages/day were compared to 0-3 beverages/day. This conclusion was further corroborated by quantifying the sleep EEG in the laboratory in habitual caffeine consumers. Compared to placebo, daily intake of 3 x 150 mg caffeine over 10 days did not strongly impair nocturnal sleep or subjective sleep quality in good sleepers. Finally, we tested whether an engineered delayed, pulsatile-release caffeine formula can improve the quality of morning awakening in sleep-restricted volunteers. We found that 160 mg caffeine taken at bedtime ameliorated the quality of awakening, increased positive and reduced negative affect scores, and promoted sustained attention immediately upon scheduled wake-up. Such an approach could prevent overnight caffeine withdrawal and provide a proactive strategy to attenuate disabling sleep inertia. Taken together, the studies suggest that common coffee/caffeine intake habits can transiently attenuate detrimental consequences of reduced sleep virtually without disturbing subjective and objective markers of sleep quality. Nevertheless, coffee/caffeine consumption cannot compensate for chronic sleep restriction.
Top-down modulation of the retinal code via histaminergic neurons in the hypothalamus
The mammalian retina is considered an autonomous neuronal tissue, yet there is evidence that it receives inputs from the brain in the form of retinopetal axons. A sub-population of these axons was suggested to belong to histaminergic neurons located in the tuberomammillary nucleus (TMN) of the hypothalamus. Using viral injections to the TMN, we identified these retinopetal axons and found that although few in number, they extensively branch to cover a large portion of the retina. Using Ca2+ imaging and electrophysiology, we show that histamine application increases spontaneous firing rates and alters the light responses of a significant portion of retinal ganglion cells (RGCs). Direct activation of the histaminergic axons also induced significant changes in RGC activity. Since activity in the TMN was shown to correlate with arousal state, our data suggest the retinal code may change with the animal's behavioral state through the release of histamine from TMN histaminergic neurons.
Metabolic and functional connectivity relate to distinct aspects of cognition
A major challenge of cognitive neuroscience is to understand how the brain as a network gives rise to our cognition. Simultaneous [18F]-fluorodeoxyglucose positron emission tomography/functional magnetic resonance imaging (FDG-PET/fMRI) provides the opportunity to investigate brain connectivity not only via spatially distant, synchronous cerebrovascular hemodynamic responses (functional connectivity), but also via glucose metabolism (metabolic connectivity). However, how these two modalities of brain connectivity differ in their relation to cognition is unknown. In this webinar, Dr Katharina Voigt will discuss recent findings demonstrating the advantage of simultaneous FDG-PET/fMRI in providing a more complete picture of the neural mechanisms underlying cognition, which calls for combining both modalities in future cognitive neuroscience. Dr Katharina Voigt is a Research Fellow within the Turner Institute for Brain and Mental Health, Monash University. Her research interests include systems neuroscience, simultaneous PET-MRI, and decision-making.
Mutation induced infection waves in diseases like COVID-19
After more than 4 million deaths worldwide, the ongoing vaccination campaign against COVID-19 is now competing with the emergence of increasingly contagious mutations that repeatedly supplant earlier strains. Given the near-absence of historical examples of the long-time evolution of infectious diseases under similar circumstances, models are crucial for exploring possible scenarios. Accordingly, in the present work we systematically generalize the popular susceptible-infected-recovered model to account for mutations that repeatedly give rise to new strains, which we coarse grain using tools from statistical mechanics to derive a model predicting the most likely outcomes. The model predicts that mutations can induce super-exponential growth of infection numbers at early times, self-amplifying into giant infection waves driven by a positive feedback loop between infection numbers and mutations and leading to simultaneous infection of the majority of the population. At later stages -- if vaccination progresses too slowly -- mutations can interrupt an ongoing decline in infection numbers and cause infection revivals, occurring as single waves or even as whole wave trains featuring alternating periods of decreasing and increasing infection numbers. Our results might be useful for discussions regarding the release of vaccine patents to reduce the risk of mutation-induced infection revivals, and for coordinating the lifting of measures when infection numbers trend downwards.
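The qualitative mechanism, a fitter strain emerging while susceptibles remain, can be sketched with a minimal two-strain SIR toy model (the parameters, seeding time, and Euler integration below are illustrative assumptions, not the paper's coarse-grained model):

```python
import numpy as np

def simulate(seed_mutant, beta=(0.20, 0.35), gamma=0.10,
             t_mut=60.0, dt=0.5, t_end=400.0):
    """Two-strain SIR: strain 1 is a more contagious mutant seeded at t_mut.

    Returns the total infected fraction over time and the final
    susceptible fraction.
    """
    S = 1.0 - 1e-4
    I = np.array([1e-4, 0.0])               # infected fraction per strain
    total_I = []
    for step in range(int(t_end / dt)):
        if seed_mutant and step == int(t_mut / dt):
            I[1] = 1e-5                     # mutant strain appears
        new_inf = np.asarray(beta) * S * I * dt   # new infections per strain
        S -= new_inf.sum()
        I += new_inf - gamma * I * dt       # forward-Euler SIR update
        total_I.append(I.sum())
    return np.array(total_I), S

base_curve, base_S = simulate(seed_mutant=False)
mut_curve, mut_S = simulate(seed_mutant=True)
print(base_curve.max(), mut_curve.max())    # the mutant enlarges the epidemic
```

Varying the mutant's transmission rate or seeding time in this sketch illustrates the regimes discussed above: a late or weakly advantaged mutant dies out against depleted susceptibles, while an early, strongly advantaged one drives a revival.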
Do you hear what I see? Auditory motion processing in blind individuals
Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional roles. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by it being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals - relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Combining a recurrent neural network (RNN) model with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
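The reported interaction between expectation and evidence strength is also what an ideal observer predicts. A minimal Bayesian sketch (the two-alternative task, Gaussian evidence model, and all parameters below are illustrative assumptions, not the trained RNN):

```python
import numpy as np

rng = np.random.default_rng(0)

def accuracy(prior_A, sigma, n_trials=20000, p_A=0.7):
    """Two-alternative task: stimulus A occurs with probability p_A; the
    observer combines one noisy observation with a learned prior `prior_A`."""
    truth = rng.random(n_trials) < p_A                 # True = stimulus A shown
    mu = np.where(truth, 1.0, -1.0)                    # signal mean per trial
    x = mu + rng.normal(0.0, sigma, n_trials)          # noisy sensory evidence
    # log posterior odds = log likelihood ratio + log prior odds
    log_odds = 2.0 * x / sigma**2 + np.log(prior_A / (1.0 - prior_A))
    return np.mean((log_odds > 0.0) == truth)          # fraction correct

gains = {}
for sigma in (1.0, 3.0):                               # strong vs weak evidence
    gains[sigma] = accuracy(0.7, sigma) - accuracy(0.5, sigma)
    print(sigma, gains[sigma])   # the matched prior helps more at high sigma
```

In the log-odds expression the likelihood term scales as 1/sigma**2 while the prior term is constant, so weak evidence automatically shifts weight toward expectation, the same qualitative pattern the abstract reports for humans and the RNN.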
Competition and integration of sensory signals in a deep reinforcement learning agent
Bernstein Conference 2024
Modeling competitive memory encoding using a Hopfield network
Bernstein Conference 2024
The dynamical regime of mouse visual cortex shifts from cooperation to competition with increasing visual input
COSYNE 2022
Neural Representations of Opponent Strategy Support the Adaptive Behavior of Recurrent Actor-Critics in a Competitive Game
COSYNE 2022
Inhibitory control of plasticity promotes stability and competitive learning in recurrent networks
COSYNE 2023
Instinct vs Insight: Neural Competition Between Prefrontal and Auditory Cortex Constrains Sound Strategy Learning
COSYNE 2025
Understanding stochastic decision-making in competitive multi-agent environments
COSYNE 2025
Assessment of task-specific glucose metabolism with non-invasive functional PET
FENS Forum 2024
Butyrylcholinesterase is linked to obesity but does not regulate the appetite and glucose metabolism
FENS Forum 2024
Cognitive computational model reveals repetition bias in a sequential decision-making task
FENS Forum 2024
Cortical layer-specific repetition suppression to faces in the fusiform face area
FENS Forum 2024
New data in an animal model for schizophrenia: Ketamine-induced locomotor activity and repetitive behavioural responses are higher after neonatal functional blockade of the prefrontal cortex
FENS Forum 2024
Dynamic dynorphin release in the VTA during appetitive and aversive associative learning
FENS Forum 2024
The dynamic nature of memory: Heterosynaptic plasticity and the temporal rules of memory cooperation and competition
FENS Forum 2024
Dynamic representation of appetitive and aversive stimuli in nucleus accumbens shell D1- and D2-medium spiny neurons
FENS Forum 2024
Dynamic and state-dependent switching of behaviour in response to competing visual stimuli in Drosophila
FENS Forum 2024
Engram competition modulates infant memory expression in development
FENS Forum 2024
ErbB inhibition rescues nigral dopamine neuron hyperactivity and repetitive behaviors in a mouse model of fragile X syndrome
FENS Forum 2024
Hindbrain regulation of stress and cue-induced appetitive behaviour
FENS Forum 2024
Impact of two-week repetitive magnetic stimulation on microglia activity and neuronal plasticity
FENS Forum 2024
Inhibitory mechanisms in the dorsal anterior cingulate cortex differentially mediate putamen activity during appetitive and aversive valence-based learning
FENS Forum 2024
The ketogenic diet suppresses appetite altering orexigenic and anorexigenic pathways in hypothalamus of diet-induced obese and lean mice
FENS Forum 2024
Low-intensity repetitive pulsed ultrasound stimulation suppresses neural activity via effects on astrocytes
FENS Forum 2024
Mapping the sensorimotor connectome underlying protein-specific appetites in Drosophila melanogaster
FENS Forum 2024
The neurophysiological mechanisms of impaired manual dexterity in Parkinson's disease: A multimodal study using PET/CT, EEG, and BDNF
FENS Forum 2024
Opposite coding of competing rewards by VTA dopamine neurons
FENS Forum 2024
Oxytocin's role in behavioural prioritization: Examining competing social and food needs in mice
FENS Forum 2024
Role of testosterone and cortisol in formula car drivers for achieving high performance in a real competitive situation
FENS Forum 2024
Relationship of autonomic cardiac control and occipital theta power with competitive performance in elite freestyle snowboarders
FENS Forum 2024
Repetitive transcranial magnetic stimulation induced changes of N6-methyladenosine modified mRNAs
FENS Forum 2024
Role of dopaminergic neurons in aversive and appetitive motivated behaviour
FENS Forum 2024
Ruminative stress: How repetitive thoughts and stress interact in an experimental setting
FENS Forum 2024
Sequential appetite suppression by oral and visceral feedback to the brainstem
FENS Forum 2024
Signal integration and competition in a biophysical model of the substantia nigra pars reticulata
FENS Forum 2024
Stimulus contiguity determines context in appetitive extinction learning
FENS Forum 2024
Thalamic opioids from POMC satiety neurons gate sugar appetite
FENS Forum 2024
Toxic effects of environmentally-relevant exposure to polyethylene terephthalate (PET) micro and nanoparticles in zebrafish early development
FENS Forum 2024
Salience by Competitive and Recurrent Interactions: Bridging Neural Spiking and Computation in Visual Attention
Neuromatch 5