Dr. Ziad Nahas
Dr. Ziad Nahas (Interventional Psychiatry Lab) in the University of Minnesota Department of Psychiatry and Behavioral Sciences is seeking an outstanding candidate for a postdoctoral position to conduct and analyze the effects of neuromodulation on brain activity in mood disorders. Candidates should be passionate about advancing knowledge in the area of translational research of depressive disorders and other mental health conditions with a focus on invasive and non-invasive brain stimulation treatments. The position is available June 1, 2023, and funding is available for at least two years.
Axel Hutt
The National Institute for Research in Computer Science and Automation (INRIA) offers a postdoctoral fellowship on mathematical modelling of neuronal EEG activity under brain stimulation. We are interested in developing neurostimulation techniques to improve treatment for patients suffering from mental disorders. To this end, we aim to develop dynamic neural models and fit them to experimentally observed data, such as EEG or BOLD responses. This fitting may employ diverse optimization techniques, such as data assimilation, which estimates model parameters adaptively in non-stationary signals, i.e., online in time; a prominent example of a data assimilation technique is Kalman filtering. More specifically, we are looking for collaborators interested in neural population models describing macroscopic brain activity in pathological brain states under neurostimulation. Mathematical analysis of such models typically yields important insights into the origin of the brain activity. Moreover, fitting models to experimental data demands some understanding of data analysis techniques, both to prepare the experimental data and to correctly identify good biomarkers; it would be advantageous if the candidate has fundamental expertise in this respect. Finally, the ideal candidate already has some expertise in parameter estimation techniques, especially data assimilation.
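A scalar Kalman filter illustrates the kind of online, adaptive parameter estimation that data assimilation provides for non-stationary signals. Everything below (the random-walk model, the noise levels, the synthetic mid-recording parameter jump) is an invented toy, not the lab's actual pipeline:

```python
import numpy as np

# Synthetic non-stationary signal: a parameter that jumps mid-recording,
# observed through noise. The filter must track it online.
rng = np.random.default_rng(0)
n_steps = 500
true_param = np.concatenate([np.full(250, 1.0), np.full(250, 2.0)])
obs = true_param + rng.normal(0.0, 0.5, n_steps)

q, r = 1e-3, 0.25       # assumed process and observation noise variances
x_hat, p = 0.0, 1.0     # initial state estimate and its variance
estimates = np.empty(n_steps)

for t in range(n_steps):
    p = p + q                              # predict: parameter as a random walk
    k = p / (p + r)                        # Kalman gain
    x_hat = x_hat + k * (obs[t] - x_hat)   # update with the new observation
    p = (1.0 - k) * p
    estimates[t] = x_hat
```

Because the gain never decays to zero, the estimate keeps adapting and re-converges after the jump at sample 250, which is exactly the "online in time" property the fellowship description highlights.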
Trends in NeuroAI - Brain-optimized inference for fMRI reconstructions
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
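The iterative refinement loop described in the abstract can be caricatured in a few lines. Here the diffusion model is replaced by simple Gaussian perturbations around the seed and the brain-optimized encoding model by a random linear map; both are assumptions purely for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
dim_img, dim_brain = 16, 32
W = rng.normal(size=(dim_brain, dim_img))   # stand-in encoding model
target_img = rng.normal(size=dim_img)       # ground-truth "seen image"
measured = W @ target_img                   # measured brain activity

def score(img):
    """How well the candidate's predicted brain activity matches the data."""
    return -np.linalg.norm(W @ img - measured)

seed = rng.normal(size=dim_img)             # seed reconstruction
err_init = np.linalg.norm(seed - target_img)
width = 1.0
for _ in range(30):
    # Sample a small library of candidates around the current seed.
    library = seed + width * rng.normal(size=(64, dim_img))
    # Keep the candidate best aligned with measured brain activity.
    seed = max(library, key=score)
    width *= 0.85                           # reduce stochasticity each iteration

err_final = np.linalg.norm(seed - target_img)
```

The shrinking `width` plays the role of the paper's narrowing image distribution; the stopping rule here is simply a fixed iteration count rather than the paper's distribution-width criterion.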
Characterising Representations of Goal Obstructiveness and Uncertainty Across Behavior, Physiology, and Brain Activity Through a Video Game Paradigm
The nature of emotions and their neural underpinnings remain debated. Appraisal theories such as the component process model propose that the perception and evaluation of events (appraisal) is the key to eliciting the range of emotions we experience. Here we study whether the framework of appraisal theories provides a clearer account for the differentiation of emotional episodes and their functional organisation in the brain. We developed a stealth game to manipulate appraisals in a systematic yet immersive way. The interactive nature of video games heightens self-relevance through the experience of goal-directed action or reaction, evoking strong emotions. We show that our manipulations led to changes in behaviour, physiology and brain activations.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
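The combination of contrastive and regression objectives mentioned in the abstract can be sketched as follows. The shapes, the linear stand-in for the MEG module, and the equal loss weighting are all assumptions for illustration, not details of the Meta model:

```python
import numpy as np

rng = np.random.default_rng(2)
batch, dim_meg, dim_emb = 8, 64, 32
meg = rng.normal(size=(batch, dim_meg))        # MEG segments
img_emb = rng.normal(size=(batch, dim_emb))    # pretrained image embeddings
W = rng.normal(size=(dim_meg, dim_emb)) * 0.1  # stand-in "MEG module"

pred = meg @ W                                 # predicted embeddings

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Contrastive (InfoNCE) term: each MEG segment should retrieve its own image.
logits = l2_normalize(pred) @ l2_normalize(img_emb).T / 0.07  # temperature
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
contrastive = -np.mean(np.diag(log_probs))

# Regression term: directly reproduce the embedding values.
regression = np.mean((pred - img_emb) ** 2)

loss = contrastive + 1.0 * regression          # assumed equal weighting
```

The contrastive term drives retrieval performance, while the regression term keeps the predicted embeddings usable as conditioning input for a pretrained image generator.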
Inducing short to medium neuroplastic effects with Transcranial Ultrasound Stimulation
Sound waves can be used to modify brain activity safely and transiently with unprecedented precision even deep in the brain - unlike traditional brain stimulation methods. In a series of studies in humans and non-human primates, I will show that Transcranial Ultrasound Stimulation (TUS) can have medium- to long-lasting effects. Multiple read-outs allow us to conclude that TUS can perturb neuronal tissues up to 2h after intervention, including changes in local and distributed brain network configurations, behavioural changes, task-related neuronal changes and chemical changes in the sonicated focal volume. Combined with multiple neuroimaging techniques (resting state functional Magnetic Resonance Imaging [rsfMRI], Spectroscopy [MRS] and task-related fMRI changes), this talk will focus on recent human TUS studies.
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). Twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase shifts and tempo changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in the instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment.
In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
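The core measurement, instantaneous frequency from the phase of the analytic signal, can be sketched on synthetic data. The 10 Hz oscillation and the mid-trial tempo change below are invented stand-ins for an entrained EEG component, and the FFT-based analytic signal is a minimal substitute for a full source-separation pipeline:

```python
import numpy as np

fs = 500.0                                      # sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
freq = np.where(t < 2.0, 10.0, 11.0)            # tempo change at 2 s
phase_true = 2 * np.pi * np.cumsum(freq) / fs
x = np.cos(phase_true)                          # synthetic entrained component

# Analytic signal via the frequency domain (FFT-based Hilbert transform).
n = len(x)
X = np.fft.fft(x)
h = np.zeros(n)
h[0] = 1.0
h[n // 2] = 1.0          # n is even here
h[1:n // 2] = 2.0
analytic = np.fft.ifft(X * h)

# Instantaneous frequency = derivative of unwrapped instantaneous phase.
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2.0 * np.pi)   # Hz
```

A sample-by-sample frequency trace like `inst_freq` is what lets one see the oscillation adaptively speed up after a tempo change; edge samples should be discarded in practice, since the FFT-based transform rings at the boundaries.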
Algonauts 2023 winning paper journal club (fMRI encoding models)
Algonauts 2023 was a challenge to create the best model for predicting fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve open discussion of the paper with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper, also from Huze, that we can discuss: https://arxiv.org/pdf/2307.14021.pdf
Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?
Neurofeedback provides a feedback display linked to ongoing brain activity, allowing self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying a "double-blind" experimental design in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible, or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or its absence, of certain information in human subjects. Methods have been developed that employ neurofeedback implicitly, where the subject is claimed to have no awareness of the experimental purposes while performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, phenomena long held to be wholly unconscious, such as dreams or subliminal perception, have been overturned by more sensitive measures showing that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies that alter their cognition.
1.8 billion regressions to predict fMRI (journal club)
Public journal club where this week Mihir will present on the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to best predict fMRI activity.
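The basic encoding-model recipe behind the paper (regularized linear regression from pretrained-model embeddings to voxel responses, scored by held-out prediction accuracy) reduces to a few lines on synthetic data. All sizes, the ridge penalty, and the noise level below are assumptions; in the paper this is repeated across hundreds of embeddings and the best predictor is compared per voxel:

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test, n_feat, n_vox = 200, 50, 100, 10
X = rng.normal(size=(n_train + n_test, n_feat))   # embeddings, one row per image
B_true = rng.normal(size=(n_feat, n_vox))
Y = X @ B_true + 0.5 * rng.normal(size=(n_train + n_test, n_vox))  # voxel data

Xtr, Xte = X[:n_train], X[n_train:]
Ytr, Yte = Y[:n_train], Y[n_train:]

lam = 10.0                                        # assumed ridge penalty
# Closed-form ridge solution: (X'X + lam*I)^-1 X'Y
B = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat), Xtr.T @ Ytr)

pred = Xte @ B
# Per-voxel prediction accuracy (Pearson r), the usual encoding-model score.
r = np.array([np.corrcoef(pred[:, v], Yte[:, v])[0, 1] for v in range(n_vox)])
```

Scaling this same fit to 1.8 billion regressions is then mostly a matter of looping over candidate embeddings and regularization strengths, which is why efficient closed-form ridge solvers matter for this line of work.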
Estimating repetitive spatiotemporal patterns from resting-state brain activity data
Repetitive spatiotemporal patterns in resting-state brain activities have been widely observed in various species and regions, such as rat and cat visual cortices. Since they resemble the preceding brain activities during tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how the information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities that reflected a process to integrate the information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a big dataset. - Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65. - Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.
Off the rails - how pathological patterns of whole brain activity emerge in epileptic seizures
In most brains across the animal kingdom, brain dynamics can enter pathological states recognisable as epileptic seizures. Usually, however, brains operate within constraints, imposed by neuronal function and synaptic coupling, that prevent epileptic seizure dynamics from emerging. In this talk, I will bring together different approaches to identifying how networks, in the broadest sense, shape brain dynamics. Using illustrative examples ranging from intracranial EEG recordings and disorders characterised by molecular disruption of a single neurotransmitter receptor type to single-cell recordings of whole-brain activity in the larval zebrafish, I will address three key questions: (1) how does the regionally specific composition of synaptic receptors shape ongoing physiological brain activity; (2) how can disruption of this regionally specific balance result in abnormal brain dynamics; and (3) which cellular patterns underlie the transition into an epileptic seizure.
A possible role of the posterior alpha as a railroad switcher between dorsal and ventral pathways
Suppose you are on your favorite touchscreen device, consciously and deliberately deciding which emails to read or delete. In other words, you are consciously and intentionally looking, tapping, and swiping. Now suppose that you are doing this while neuroscientists are recording your brain activity. Eventually, the neuroscientists become familiar enough with your brain activity and behavior that they run an experiment with subliminal cues, which reveals that your looking, tapping, and swiping seem to be determined by a random switch in your brain. You are not aware of it, or of its impact on your decisions or movements. Would these predictions undermine your sense of free will? Some have argued that they should. Although this inference, from unreflective and/or random intention mechanisms to free-will skepticism, may seem intuitive at first, there are already objections to it. So even if this thought experiment is plausible, it may not actually undermine our sense of free will.
Maths, AI and Neuroscience Meeting Stockholm
It has become abundantly clear that understanding brain function and developing artificial general intelligence will require close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning now provide much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning deal, explicitly or implicitly, with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Versatile treadmill system for measuring locomotion and neural activity in head-fixed mice
Here, we present a protocol for using a versatile treadmill system to measure locomotion and neural activity at high temporal resolution in head-fixed mice. We first describe the assembly of the treadmill system. We then detail surgical implantation of the headplate on the mouse skull, followed by habituation of mice to locomotion on the treadmill system. The system is compact, movable, and simple to synchronize with other data streams, making it ideal for monitoring brain activity in diverse behavioral frameworks. https://dx.doi.org/10.1016/j.xpro.2022.101701
Trial by trial predictions of subjective time from human brain activity
Our perception of time isn't like a clock; it varies depending on other aspects of experience, such as what we see and hear in that moment. However, in everyday life the properties of these simple features can change frequently, presenting a challenge to understanding real-world time perception based on simple lab experiments. We developed a computational model of human time perception based on tracking changes in neural activity across brain regions involved in sensory processing, using fMRI. By measuring changes in brain activity patterns across these regions, our approach accommodates the different and changing feature combinations present in natural scenarios, such as walking on a busy street. Our model reproduces people's duration reports for natural videos (up to almost half a minute long) and, most importantly, predicts whether a person reports a scene as relatively shorter or longer: the biases in time perception that reflect how natural experience of time deviates from clock time.
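A deliberately crude caricature of the idea, not the authors' fMRI-based model: subjective duration accrues whenever simulated neural activity changes by more than a threshold, so a "busier" scene accumulates more subjective time than a calm one of the same clock duration. All signals, sizes, and the threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def subjective_units(activity, threshold=1.0):
    """Count salient state changes in an (n_time, n_units) activity matrix."""
    changes = np.linalg.norm(np.diff(activity, axis=0), axis=1)
    return int(np.sum(changes > threshold))

n_time, n_units = 300, 20
# Two scenes of identical clock duration, simulated as random-walk activity.
calm = np.cumsum(0.05 * rng.normal(size=(n_time, n_units)), axis=0)
busy = np.cumsum(0.05 * rng.normal(size=(n_time, n_units)), axis=0)
busy[::10] += 2.0    # frequent salient events, like a busy street
```

Calling `subjective_units` on both matrices yields a larger count for the busy scene, mirroring the reported bias that eventful natural scenes feel longer than clock time.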
Disentangling neural correlates of consciousness and task relevance using EEG and fMRI
How does our brain generate consciousness, that is, the subjective experience of what it is like to see a face or hear a sound? Do we become aware of a stimulus during early sensory processing, or only later when information is shared in a widespread fronto-parietal network? Neural correlates of consciousness are typically identified by comparing brain activity when a constant stimulus (e.g., a face) is perceived versus not perceived. However, in most previous experiments, conscious perception was systematically confounded with post-perceptual processes such as decision-making and report. In this talk, I will present recent EEG and fMRI studies dissociating neural correlates of consciousness and task-related processing in visual and auditory perception. Our results suggest that consciousness emerges during early sensory processing, while late fronto-parietal activity is associated with post-perceptual processes rather than awareness. These findings challenge predominant theories of consciousness and highlight the importance of considering task relevance as a confound across different neuroscientific methods, experimental paradigms, and sensory modalities.
Canonical neural networks perform active inference
The free-energy principle and active inference have received significant attention in neuroscience and machine learning. However, it remains to be established whether active inference is an apt explanation for any given neural network that actively interacts with its environment. To address this issue, we show that a class of canonical neural networks of rate coding models implicitly performs variational Bayesian inference under a well-known form of partially observed Markov decision process model (Isomura, Shimazaki, Friston, Commun Biol, 2022). Based on the proposed theory, we demonstrate that canonical neural networks featuring delayed modulation of Hebbian plasticity can perform planning and adaptive behavioural control in a Bayes-optimal manner, through postdiction of their previous decisions. This scheme enables us to estimate the implicit priors under which the agent's neural network operates and to identify a specific form of the generative model. The proposed equivalence is crucial for rendering brain activity explainable, to better understand basic neuropsychology and psychiatric disorders. Moreover, this notion can dramatically reduce the complexity of designing self-learning neuromorphic hardware to perform various types of tasks.
The functional connectome across temporal scales
The view of human brain function has drastically shifted over the last decade, owing to the observation that the majority of brain activity is intrinsic rather than driven by external stimuli or cognitive demands. Specifically, all brain regions continuously communicate in spatiotemporally organized patterns that constitute the functional connectome, with consequences for cognition and behavior. In this talk, I will argue that another shift is underway, driven by new insights from synergistic interrogation of the functional connectome using different acquisition methods. The human functional connectome is typically investigated with functional magnetic resonance imaging (fMRI) that relies on the indirect hemodynamic signal, thereby emphasizing very slow connectivity across brain regions. Conversely, more recent methodological advances demonstrate that fast connectivity within the whole-brain connectome can be studied with real-time methods such as electroencephalography (EEG). Our findings show that combining fMRI with scalp or intracranial EEG in humans, especially when recorded concurrently, paints a rich picture of neural communication across the connectome. Specifically, the connectome comprises both fast, oscillation-based connectivity observable with EEG, as well as extremely slow processes best captured by fMRI. While the fast and slow processes share an important degree of spatial organization, these processes unfold in a temporally independent manner. Our observations suggest that fMRI and EEG may be envisaged as capturing distinct aspects of functional connectivity, rather than intermodal measurements of the same phenomenon. Infraslow fluctuation-based and rapid oscillation-based connectivity of various frequency bands constitute multiple dynamic trajectories through a shared state space of discrete connectome configurations. 
The multitude of flexible trajectories may concurrently enable functional connectivity across multiple independent sets of distributed brain regions.
Keeping your Brain in Balance: the Ups and Downs of Homeostatic Plasticity (virtual)
Our brains must generate and maintain stable activity patterns over decades of life, despite the dramatic changes in circuit connectivity and function induced by learning and experience-dependent plasticity. How do our brains achieve this balance between the opposing needs for plasticity and stability? Over the past two decades, we and others have uncovered a family of "homeostatic" negative feedback mechanisms that are theorized to stabilize overall brain activity while allowing specific connections to be reconfigured by experience. Here I discuss recent work in which we demonstrate that individual neocortical neurons in freely behaving animals indeed have a homeostatic activity set-point, to which they return in the face of perturbations. Intriguingly, this firing rate homeostasis is gated by sleep/wake states in a manner that depends on the direction of homeostatic regulation: upward firing rate homeostasis occurs selectively during periods of active wake, while downward firing rate homeostasis occurs selectively during periods of sleep, suggesting that an important function of sleep is to temporally segregate bidirectional plasticity. Finally, we show that firing rate homeostasis is compromised in an animal model of autism spectrum disorder. Together our findings suggest that loss of homeostatic plasticity in some neurological disorders may render central circuits unable to compensate for the normal perturbations induced by development and learning.
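The negative-feedback logic of a firing-rate set-point can be sketched with a single multiplicative scaling variable: a perturbation drops the neuron's drive, and slow homeostatic scaling returns the rate to target. The numbers are illustrative, not fitted to the experiments described:

```python
# Toy firing-rate homeostasis: rate = gain * drive, and the gain is slowly
# adjusted in proportion to the deviation from the set-point.
target = 5.0     # homeostatic set-point rate (Hz, assumed)
gain = 1.0       # multiplicative synaptic scaling factor
drive = 5.0      # feedforward drive
rates = []
for step in range(400):
    if step == 100:
        drive = 2.0                      # perturbation: drive suddenly drops
    rate = gain * drive
    gain += 0.005 * (target - rate)      # slow negative-feedback scaling
    rates.append(rate)
```

After the perturbation at step 100 the rate collapses, then the scaling factor slowly compensates and the rate returns toward the set-point; the sleep/wake gating described in the abstract would correspond to enabling this update only during particular behavioral states.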
Do we reason differently about affectively charged analogies? Insights from EEG research
Affectively charged analogies are commonly used in literature and art, but also in politics and argumentation. There are reasons to think we may process these analogies differently. Notably, analogical reasoning is a complex process that requires the use of cognitive resources, which are limited. In the presence of affectively charged content, some of these resources might be directed towards affective processing and away from analogical reasoning. To investigate this idea, I investigated effects of affective charge on differences in brain activity evoked by sound versus unsound analogies. The presentation will detail the methods and results for two such experiments, one in which participants saw analogies formed of neutral and negative words and one in which they were created by combining conditioned symbols. I will also briefly discuss future research aiming to investigate the effects of analogical reasoning on brain activity related to affective processing.
Interpersonal synchrony of body/brain, Solo & Team Flow
Flow is defined as an altered state of consciousness with intense attention and an enormous sense of pleasure when engaged in a challenging task, first postulated by the late psychologist M. Csikszentmihalyi. The main focus of this talk will be "Team Flow," but two lines of previous studies in our laboratory form its background. First is inter-body and inter-brain coordination/synchrony between individuals. Considering the various rhythmic echoing/synchronization phenomena in animal behavior, this could be regarded as the biological, sub-symbolic, and implicit origin of social interactions. The second line of precursor research concerns the state of Solo Flow in game playing. We employed attenuation of the AEP (Auditory Evoked Potential) to task-irrelevant sound probes as an objective neural indicator of such a Flow state, and found that: 1) the mutual link between the ACC and the TP is critical, and 2) overall, top-down influence is enhanced while bottom-up causality is attenuated. With this as background, I will present our latest study of Team Flow in game playing. We found that: 3) the neural correlates of Team Flow are distinctly different from those of Solo Flow and of non-flow social states, 4) the left medial temporal cortex seems to form an integrative node for Team Flow, receiving input related to the Solo Flow state from the right PFC and input related to the social state from the right IFC, and 5) intra-brain (dis)similarity of brain activity predicts (dis)similarity of skills/cognition, as well as affinity for inter-brain coherence.
Maths, AI and Neuroscience meeting
It has become abundantly clear that understanding brain function and developing artificial general intelligence will require close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning now provide much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning deal, explicitly or implicitly, with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; explainable AI; how AI can help neuroscience; and the extent to which the brain may use algorithms similar to those in modern machine learning. Finally, we will wrap up with a discussion of aspects of neural hardware that may not have been considered in machine learning.
Brain circuit dynamics in Action and Sleep
Our group focuses on brain computation, physiology and evolution, with a particular focus on network dynamics, sleep (evolution and mechanistic underpinnings), cortical computation (through the study of ancestral cortices), and sensorimotor processing. This talk will describe our recent results on the remarkable camouflage behavior of cuttlefish (action) and on brain activity in REM and NonREM in lizards (sleep). Both topics will focus on aspects of circuit dynamics.
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between a brain region and a model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model's activity with brain activity should successfully drive the model's object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as 'early' beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from the training data.
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
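The substitution test described above can be sketched in a few lines. Everything here is a stand-in assumption, not the authors' pipeline: synthetic "fMRI" patterns, a ridge-regression map from brain activity into one DCNN layer's activation space, and a random linear readout in place of the model's remaining stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: n trials, v voxels, u units in one DCNN layer.
n, v, u, alpha = 200, 500, 64, 1.0
brain = rng.normal(size=(n, v))                           # stand-in fMRI patterns
true_map = rng.normal(size=(v, u)) / np.sqrt(v)
layer = brain @ true_map + 0.1 * rng.normal(size=(n, u))  # stand-in layer activity

# Ridge regression from brain activity into the layer's activation space.
W = np.linalg.solve(brain.T @ brain + alpha * np.eye(v), brain.T @ layer)
layer_from_brain = brain @ W

# "Task-relevant" test: push the brain-derived activations through the model's
# remaining stages (a stand-in linear readout here) and check whether the
# object-recognition decision survives the substitution.
readout = rng.normal(size=(u, 10))
preserved = np.mean(
    (layer @ readout).argmax(axis=1)
    == (layer_from_brain @ readout).argmax(axis=1)
)
print(f"decisions preserved on {preserved:.0%} of trials")
```

The key design point is that correspondence is scored by decisions preserved downstream, not by explained variance in the layer itself.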
NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network
The brain exhibits a rich repertoire of oscillatory patterns organized in space, time and frequency. However, despite ever more-detailed characterizations of spectrally-resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we build a reduced network model representing interactions between local neuronal populations (with damped oscillatory response at 40 Hz) coupled in the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (or metastable) oscillatory modes, whose frequency inversely depends on the number of units, the strength of coupling and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, leading to an optimal fit of the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase-coupling observed in magneto- and electroencephalographic signals as signatures of resonant modes emerging in the space-time structure of the Connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
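The frequency-slowing effect of delays can be illustrated with a toy model far simpler than the one in the abstract: two delay-coupled phase oscillators with intrinsic frequency 40 Hz. For in-phase synchrony the collective frequency satisfies Ω = ω − k·sin(Ωτ), which lies below the intrinsic frequency. All parameter values here are hypothetical choices for illustration.

```python
import numpy as np

f0, k, tau, dt, T = 40.0, 50.0, 0.005, 1e-4, 2.0  # Hz, 1/s, s, s, s
omega = 2 * np.pi * f0
n_delay = int(tau / dt)
steps = int(T / dt)

# Euler integration of d(theta_i)/dt = omega + k*sin(theta_j(t - tau) - theta_i(t)).
theta = np.zeros((steps, 2))
for t in range(1, steps):
    past = theta[t - 1 - n_delay] if t > n_delay else theta[0]
    for i in range(2):
        j = 1 - i
        theta[t, i] = theta[t - 1, i] + dt * (
            omega + k * np.sin(past[j] - theta[t - 1, i])
        )

# Estimate the collective frequency from the second half of the run.
duration = (steps - 1 - steps // 2) * dt
f_collective = (theta[-1, 0] - theta[steps // 2, 0]) / (2 * np.pi * duration)
print(f"collective frequency ~ {f_collective:.1f} Hz (intrinsic: {f0} Hz)")
```

With these values the synchronized pair settles near 33 Hz, below the 40 Hz intrinsic response, mirroring the abstract's point that collective frequencies depend inversely on coupling strength and propagation times.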
“Mind reading” with brain scanners: Facts versus science fiction
Every thought is associated with a unique pattern of brain activity. Thus, in principle, it should be possible to use these activity patterns as "brain fingerprints" for different thoughts and to read out what a person is thinking based on their brain activity alone. Indeed, using machine learning, considerable progress has been made in such "brain reading" in recent years. It is now possible to decode which image a person is viewing, which film sequence they are watching, which emotional state they are in, or which intentions they hold in mind. This talk will provide an overview of the current state of the art in brain reading. It will also highlight the main challenges and limitations of this research field. For example, mathematical models are needed to cope with the high dimensionality of potential mental states. Furthermore, the ethical concerns raised by (often premature) commercial applications of brain reading will be discussed.
Brain-Machine Interfaces: Beyond Decoding
A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet, this does not suffice for subjects to operate a brain-controlled device. In the first part of my talk I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of machine, subject and application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where in a controlled trial chronic stroke patients achieved a significant functional recovery after the intervention, which was retained 6-12 months after the end of therapy.
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches that aimed to characterise the brain representations behind the real-world proficiency of "super-recognizers", individuals at the top of the face recognition ability spectrum. Using decoding analysis of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for variations in face recognition ability is disseminated along early, intermediate and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable in brain processing time (the former around the N170, the latter around the P600) and in level of computation (the former emerging from mid-level layers of visual Convolutional Neural Networks, the latter from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses from a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
Advances in Computational Psychiatry: Understanding (cognitive) control as a network process
The human brain is a complex organ characterized by heterogeneous patterns of interconnections. Non-invasive imaging techniques now allow for these patterns to be carefully and comprehensively mapped in individual humans, paving the way for a better understanding of how wiring supports cognitive processes. While a large body of work now focuses on descriptive statistics to characterize these wiring patterns, a critical open question lies in how the organization of these networks constrains the potential repertoire of brain dynamics. In this talk, I will describe an approach for understanding how perturbations to brain dynamics propagate through complex wiring patterns, driving the brain into new states of activity. Drawing on a range of disciplinary tools – from graph theory to network control theory and optimization – I will identify control points in brain networks and characterize trajectories of brain activity states following perturbation to those points. Finally, I will describe how these computational tools and approaches can be used to better understand the brain's intrinsic control mechanisms and their alterations in psychiatric conditions.
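A common way to identify control points in the network-control framework sketched above is to rank nodes by average controllability, the trace of the controllability Gramian when only that node is driven. The snippet below is a toy illustration under simplifying assumptions (linear discrete-time dynamics x(t+1) = A x(t) + B u(t), and a random symmetric matrix standing in for a real connectome).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.random((n, n))
A = (A + A.T) / 2
# Scale so the spectral radius is below 1 (stable linear dynamics).
A = A / (1.01 * np.max(np.abs(np.linalg.eigvalsh(A))))

def average_controllability(A, node, horizon=100):
    """Trace of the (finite-horizon) controllability Gramian when only
    `node` receives input: G = sum_k A^k B B^T (A^T)^k."""
    B = np.zeros((A.shape[0], 1))
    B[node, 0] = 1.0
    G = np.zeros_like(A)
    Ak = np.eye(A.shape[0])
    for _ in range(horizon):
        G += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return np.trace(G)

ac = np.array([average_controllability(A, i) for i in range(n)])
print("strongest control point: node", ac.argmax())
```

Nodes with high average controllability are those from which input energy spreads most effectively, i.e. candidate control points for driving the network into new activity states.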
Learning in pain: probabilistic inference and (mal)adaptive control
Pain is a major clinical problem affecting 1 in 5 people in the world. There are unresolved questions that urgently require answers to treat pain effectively, a crucial one being how the feeling of pain arises from brain activity. Computational models of pain consider how the brain processes noxious information and allow mapping neural circuits and networks to cognition and behaviour. To date, they have generally assumed two largely independent processes: perceptual and/or predictive inference, typically modelled as an approximate Bayesian process, and action control, typically modelled as a reinforcement learning process. However, inference and control are intertwined in complex ways, challenging the clarity of this distinction. I will discuss how they may comprise a parallel hierarchical architecture that combines pain inference, information-seeking, and adaptive value-based control. Finally, I will discuss whether and how these learning processes might contribute to chronic pain.
Do deep learning latent spaces resemble human brain representations?
In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks including image or speech recognition, natural language processing (NLP), playing Go, chess, poker and video-games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known analogy QUEEN-WOMAN+MAN=KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.
Rapid State Changes Account for Apparent Brain and Behavior Variability
Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.
Positive and negative feedback in seizure initiation
Seizure onset is a critically important brain state transition that has proved very difficult to predict accurately from recordings of brain activity. I will present new data acquired using a range of optogenetic and imaging tools to characterize exactly how cortical networks change in the build-up to a seizure. I will show how intermittent optogenetic stimulation ("active probing") reveals a latent change in dendritic excitability that is tightly correlated to the onset of seizure activity. This data relates back to old work from the 1980s suggesting a critical role in epileptic pathophysiology for dendritic plateau potentials. Our data show how the precipitous nature of the transition can be understood in terms of multiple, synergistic positive feedback mechanisms.
Brain dynamics underlying memory for continuous natural events
The world confronts our senses with a continuous stream of rapidly changing information. Yet, we experience life as a series of episodes or events, and in memory these pieces seem to become even further organized. How do we recall and give structure to this complex information? Recent studies have begun to examine these questions using naturalistic stimuli and behavior: subjects view audiovisual movies and then freely recount aloud their memories of the events. We find brain activity patterns that are unique to individual episodes, and which reappear during verbal recollection; robust generalization of these patterns across people; and memory effects driven by the structure of links between events in a narrative. These findings construct a picture of how we comprehend and recall real-world events that unfold continuously across time.
Student's Oral Presentation III: Emotional State Classification Using Low-Cost Single-Channel Electroencephalography
Although electroencephalography (EEG) has been used in clinical and research studies for almost a century, recent technological advances have made the equipment and processing tools more accessible outside laboratory settings. These low-cost alternatives can achieve satisfactory results in experiments such as detecting event-related potentials and classifying cognitive states. In our research, we use low-cost single-channel EEG to classify brain activity during the presentation of images of opposite emotional valence from the OASIS database. Emotional classification has already been achieved using research-grade and commercial-grade equipment, but our approach pioneers the use of educational-grade equipment for said task. EEG data is collected with a Backyard Brains SpikerBox, a low-cost and open-source bioamplifier that can record a single-channel electric signal from a pair of electrodes placed on the scalp, and used to train machine learning classifiers.
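A minimal sketch of this kind of single-channel pipeline, on synthetic data rather than SpikerBox recordings: extract band power in a frequency band of interest (alpha, 8-13 Hz, chosen here for illustration) and classify epochs with a nearest-centroid rule. The two "conditions" are artificial signals differing only in alpha amplitude.

```python
import numpy as np

fs, n_sec = 250, 2                      # hypothetical sampling rate and epoch length
t = np.arange(fs * n_sec) / fs
rng = np.random.default_rng(0)

def epoch(alpha_amp):
    """Synthetic single-channel epoch: 10 Hz rhythm plus white noise."""
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

def bandpower(x, lo=8.0, hi=13.0):
    """Summed FFT power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

# Two conditions differing in alpha power; 40 training epochs each.
X0 = np.array([bandpower(epoch(0.5)) for _ in range(40)])
X1 = np.array([bandpower(epoch(2.0)) for _ in range(40)])
c0, c1 = X0.mean(), X1.mean()           # class centroids

def classify(x):
    p = bandpower(x)
    return 0 if abs(p - c0) < abs(p - c1) else 1

acc = np.mean([classify(epoch(0.5)) == 0 for _ in range(20)]
              + [classify(epoch(2.0)) == 1 for _ in range(20)])
print(f"held-out accuracy: {acc:.0%}")
```

Real emotional-valence classification would of course use recorded EEG, richer features and a proper train/test split; the point here is only the band-power-plus-classifier structure such pipelines share.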
Delineating Reward/Avoidance Decision Process in the Impulsive-compulsive Spectrum Disorders through a Probabilistic Reversal Learning Task
Impulsivity and compulsivity are behavioural traits that underlie many aspects of decision-making and form the characteristic symptoms of Obsessive Compulsive Disorder (OCD) and Gambling Disorder (GD). The neural underpinnings of reward and avoidance learning under the expression of these traits and symptoms are only partially understood.

The present study combined behavioural modelling and neuroimaging to examine brain activity associated with critical phases of reward and loss processing in OCD and GD.

Forty-two healthy controls (HC), forty OCD and twenty-three GD participants were recruited to complete a two-session reinforcement learning (RL) task featuring a "probability switch (PS)" with concurrent imaging. In the final sample, 39 HC (20F/19M, 34 yrs ± 9.47), 28 OCD (14F/14M, 32.11 yrs ± 9.53) and 16 GD (4F/12M, 35.53 yrs ± 12.20) were included with both behavioural and imaging data available. Functional imaging was conducted using a 3.0-T SIEMENS MAGNETOM Skyra (syngo MR D13C) at Monash Biomedical Imaging. Each volume comprised 34 coronal slices of 3 mm thickness with a 2000 ms TR and a 30 ms TE. A total of 479 volumes were acquired for each participant in each session in an interleaved-ascending manner.

The standard Q-learning model was fitted to the observed behavioural data and a Bayesian model was used for parameter estimation. Imaging analysis was conducted using SPM12 (Wellcome Department of Imaging Neuroscience, London, United Kingdom) in the Matlab (R2015b) environment. Pre-processing comprised slice timing, realignment, normalization to MNI space according to the T1-weighted image, and smoothing with an 8 mm Gaussian kernel.

The frontostriatal circuit including the putamen and medial orbitofrontal cortex (mOFC) was significantly more active in response to receiving reward and avoiding punishment compared to receiving an aversive outcome and missing reward, at p < 0.001 with FWE correction at the cluster level, while the right insula showed greater activation in response to missing rewards and receiving punishment. Compared to healthy participants, GD patients showed significantly lower activation in the left superior frontal gyrus and posterior cingulum at p < 0.001 for gain omission.

The reward prediction error (PE) signal was positively correlated with activation in several clusters spanning cortical and subcortical regions including the striatum, cingulate, bilateral insula, thalamus and superior frontal gyrus, at p < 0.001 with FWE correction at the cluster level. GD patients showed a trend towards a decreased reward PE response in the right precentral gyrus extending to the left posterior cingulate compared to controls, at p < 0.05 with FWE correction.

The aversive PE signal was negatively correlated with brain activity in regions including the bilateral thalamus, hippocampus, insula and striatum, at p < 0.001 with FWE correction. Compared with controls, the GD group showed increased aversive PE activation in a cluster encompassing the right thalamus and right hippocampus, and in the right middle frontal gyrus extending to the right anterior cingulum, at p < 0.005 with FWE correction.

Through this reversal learning task, the study provides further support for dissociable brain circuits subserving distinct phases of reward and avoidance learning, and shows that OCD and GD are characterised by aberrant patterns of reward and avoidance processing.
A paradoxical kind of sleep In Drosophila melanogaster
The dynamic nature of sleep in most animals suggests distinct stages which serve different functions. Genetic sleep induction methods in animal models provide a powerful way to disambiguate these stages and functions, although behavioural methods alone are insufficient to accurately identify what kind of sleep is being engaged. In Drosophila, activation of the dorsal fan-shaped body (dFB) promotes sleep, but it remains unclear what kind of sleep this is, how the rest of the fly brain is behaving, or if any specific sleep functions are being achieved. Here, we developed a method to record calcium activity from thousands of neurons across a volume of the fly brain during dFB-induced sleep, and we compared this to the effects of a sleep-promoting drug. We found that drug-induced spontaneous sleep decreased brain activity and connectivity, whereas dFB sleep was not different from wakefulness. Paradoxically, dFB-induced sleep was found to be even deeper than drug-induced sleep. When we probed the sleeping fly brain with salient visual stimuli, we found that the activity of visually-responsive neurons was blocked by dFB activation, confirming a disconnect from the external environment. Prolonged optogenetic dFB activation nevertheless achieved a significant sleep function, by correcting visual attention defects brought on by sleep deprivation. These results suggest that dFB activation promotes a distinct form of sleep in Drosophila, where brain activity and connectivity remain similar to wakefulness, but responsiveness to external sensory stimuli is profoundly suppressed.
Walking elicits global brain activity in adult Drosophila
COSYNE 2022
Whole-brain activity patterns underlying uninstructed behavioral switching in mice
COSYNE 2025
Comparison of learning effects between on-demand and face-to-face classes from the viewpoint of brain activity
FENS Forum 2024
Different somatosensory brain activity after high and low frequency rTMS in non-human primate model of central post-stroke pain
FENS Forum 2024
How does motor expertise shape brain activity?
FENS Forum 2024
A hair-thin path to the deep brain activity of an awake mouse
FENS Forum 2024
Identifying rhythmic episodes in source-level electromagnetic brain activity
FENS Forum 2024
Intuitive access to spatially linked brain activity and transcriptomic data using BrainTrawler
FENS Forum 2024
Modulation of brain activity by environmental design: A study using EEG and virtual reality
FENS Forum 2024
Is phonetic learning with peers more efficient than learning individually? An investigation of behavioral performance and electrical brain activity
FENS Forum 2024
Topographically organized sensory-motor activity of dorsal raphe and its forebrain innervations modulate resting-state and sensory-evoked forebrain activity and animal behavior
FENS Forum 2024
Whole-brain activity patterns underlying uninstructed behavioral switching in mice
FENS Forum 2024
Idiosyncratic Relation Between Human Brain Activity and Behavior
Neuromatch 5