Consciousness at the edge of chaos
Over the last 20 years, neuroimaging and electrophysiology techniques have become central to understanding the mechanisms that accompany loss and recovery of consciousness. Much of this research is performed in the context of healthy individuals with neurotypical brain dynamics. Yet, a true understanding of how consciousness emerges from the joint action of neurons has to account for how severely pathological brains, often showing phenotypes typical of unconsciousness, can nonetheless generate a subjective viewpoint. In this presentation, I will start from the context of Disorders of Consciousness and discuss recent work aimed at finding generalizable signatures of consciousness that are reliable across a spectrum of brain electrophysiological phenotypes, focusing in particular on the notion of edge-of-chaos criticality.
Continuity and segmentation - two ends of a spectrum or independent processes?
Developmental and evolutionary perspectives on thalamic function
Brain organization and function are complex topics. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss the developmental and evolutionary (comparative anatomy) data on the thalamus, and how it fits within forebrain networks. I will address questions including when the thalamus appeared in evolution, how the thalamus is organized across the vertebrate lineage, and how changes in the organization of forebrain networks can affect behavioral repertoires.
Recent views on pre-registration
A discussion on some recent perspectives on pre-registration, which has become a growing trend in the past few years. This is not just limited to neuroimaging, and it applies to most scientific fields. We will start with this overview editorial by Simmons et al. (2021): https://faculty.wharton.upenn.edu/wp-content/uploads/2016/11/34-Simmons-Nelson-Simonsohn-2021a.pdf, and also talk about a more critical perspective by Pham & Oh (2021): https://www.researchgate.net/profile/Michel-Pham/publication/349545600_Preregistration_Is_Neither_Sufficient_nor_Necessary_for_Good_Science/links/60fb311e2bf3553b29096aa7/Preregistration-Is-Neither-Sufficient-nor-Necessary-for-Good-Science.pdf. I would like us to discuss the pros and cons of pre-registration, and if we have time, I may do a demonstration of how to perform a pre-registration through the Open Science Framework.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime, and what is the utility of the resultant neural representations? This talk will explore the role of dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting that DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model, the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories and to the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets: MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
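To make the prediction-error gating concrete, here is a minimal, hypothetical sketch of the storage rule described above: a slowly updated set of "expectations" integrates across episodes, and only poorly predicted episodes are kept as distinct traces. The number of contexts, learning rate, and storage threshold are illustrative placeholders, not parameters of the model presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two streams described above (illustrative only):
# slowly learned "expectations" that integrate across episodes, and an
# episodic store that keeps poorly predicted events as distinct traces.
n_features, n_contexts = 50, 5
true_contexts = rng.normal(size=(n_contexts, n_features))
expectations = rng.normal(size=(n_contexts, n_features))   # generalized stream
episodic_store = []                                        # episodic stream
store_threshold = 5.0                                      # hypothetical criterion

def expect(x):
    """Readout of the generalized stream: the nearest learned expectation."""
    i = int(np.argmin(np.linalg.norm(expectations - x, axis=1)))
    return i, expectations[i]

for t in range(300):
    # Each episode is a noisy sample of one underlying context.
    c = rng.integers(n_contexts)
    episode = true_contexts[c] + 0.3 * rng.normal(size=n_features)

    i, predicted = expect(episode)
    prediction_error = np.linalg.norm(episode - predicted)

    if prediction_error > store_threshold:          # poorly predicted -> keep a trace
        episodic_store.append(episode)
    expectations[i] += 0.1 * (episode - predicted)  # integrate across episodes

print(f"episodic traces kept: {len(episodic_store)} / 300")
```

As the expectations converge, prediction errors shrink and fewer episodes are stored, illustrating the shift from episodic storage to map-like generalization.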
What it’s like is all there is: The value of Consciousness
Over the past thirty years or so, cognitive neuroscience has made spectacular progress understanding the biological mechanisms of consciousness. Consciousness science, as this field is now sometimes called, was not only nonexistent thirty years ago, but its very name seemed like an oxymoron: how can there be a science of consciousness? And yet, despite this scepticism, we are now equipped with a rich set of sophisticated behavioural paradigms, with an impressive array of techniques making it possible to see the brain in action, and with an ever-growing collection of theories and speculations about the putative biological mechanisms through which information processing becomes conscious. This is all well and good, even promising, but we also seem to have thrown the baby out with the bathwater, or at least to have forgotten it in the crib: consciousness is not just mechanisms, it’s what it feels like. In other words, while we have thousands of informative studies of access-consciousness, we have little in the way of work on phenomenal consciousness. But that — what it feels like — is truly what “consciousness” is about. Understanding why it feels like something to be me and nothing (panpsychists notwithstanding) for a stone to be a stone is what the field has always been after. However, while it is relatively easy to study access-consciousness through the contrastive approach applied to reports, it is much less clear how to study phenomenology, its structure and its function. Here, I first overview work on what consciousness does (the "how"). Next, I ask what difference feeling things makes and what function phenomenology might play. I argue that subjective experience has intrinsic value and plays a functional role in everything that we do.
Pharmacological exploitation of neurotrophins and their receptors to develop novel therapeutic approaches against neurodegenerative diseases and brain trauma
Neurotrophins (NGF, BDNF, NT-3) are endogenous growth factors that exert neuroprotective effects by preventing neuronal death and promoting neurogenesis. They act by binding to their respective high-affinity, pro-survival receptors TrkA, TrkB or TrkC, as well as to the p75NTR death receptor. While these molecules have been shown to significantly slow or prevent neurodegeneration, their reduced bioavailability and inability to penetrate the blood-brain barrier limit their use as potential therapeutics. To bypass these limitations, our research team has developed and patented small, lipophilic compounds that selectively mimic neurotrophins’ effects, exhibit preferable pharmacological properties, and promote neuroprotection and repair against neurodegeneration. In addition, the combination of these molecules with 3D-cultured human neuronal cells, and their targeted delivery into the brain ventricles through soft robotic systems, could offer novel therapeutic approaches against neurodegenerative diseases and brain trauma.
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Neurobiological Pathways to Tau-dependent Pathology: Perspectives from flies to humans
SWEBAGS conference 2024: The involvement of the striatum in autism spectrum disorder
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Decision and Behavior
This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
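As a concrete illustration of the policy-compression idea summarized above, the following sketch computes the reward-complexity trade-off for a toy action-value table using a Blahut-Arimoto-style iteration. The Q values, state distribution, and trade-off parameter are hypothetical placeholders, and this is a generic sketch of the principle, not Gershman's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 6, 4
Q = rng.normal(size=(n_states, n_actions))     # toy action values
p_s = np.full(n_states, 1.0 / n_states)        # state distribution

def compressed_policy(Q, p_s, beta, n_iter=200):
    """Reward/complexity trade-off: pi(a|s) proportional to p(a) * exp(beta * Q(s,a))."""
    p_a = np.full(Q.shape[1], 1.0 / Q.shape[1])      # marginal ("default") action distribution
    for _ in range(n_iter):
        logits = np.log(p_a) + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = p_s @ pi                               # update the marginal over actions
    return pi, p_a

for beta in (0.1, 1.0, 10.0):
    pi, p_a = compressed_policy(Q, p_s, beta)
    mi = np.sum(p_s[:, None] * pi * np.log(pi / p_a))     # policy complexity I(S;A), in nats
    reward = np.sum(p_s[:, None] * pi * Q)
    print(f"beta={beta:5.1f}  I(S;A)={mi:.2f} nats  E[reward]={reward:.2f}")
```

At low beta the policy collapses onto state-independent default actions (low complexity, low reward); at high beta it approaches the greedy policy (high complexity, high reward), which is the trade-off the summary describes.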
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
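The core analysis behind such brain-LLM alignment results is an encoding model: regularized regression from LLM-derived features to measured brain responses, scored by held-out prediction accuracy. The sketch below illustrates that pipeline on simulated data; the array shapes, ridge penalties, and per-voxel correlation scoring are generic assumptions rather than any presenter's exact method.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_timepoints, n_llm_dims, n_voxels = 500, 64, 20
llm_features = rng.normal(size=(n_timepoints, n_llm_dims))          # e.g. LLM layer activations
brain = (llm_features @ rng.normal(size=(n_llm_dims, n_voxels))) * 0.1
brain += rng.normal(size=(n_timepoints, n_voxels))                  # simulated noisy recordings

scores = np.zeros(n_voxels)
for train, test in KFold(n_splits=5).split(llm_features):
    # Ridge regression from LLM features to all voxels, with cross-validated penalty.
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(llm_features[train], brain[train])
    pred = model.predict(llm_features[test])
    # Score alignment as the correlation between predicted and measured held-out responses.
    for v in range(n_voxels):
        scores[v] += np.corrcoef(pred[:, v], brain[test, v])[0, 1] / 5

print(f"mean held-out encoding correlation: {scores.mean():.2f}")
```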
How do we sleep?
There is no consensus on whether sleep is for the brain, the body, or both. But the difference in how we feel following disrupted sleep or a good night of continuous sleep is striking. Understanding how and why we sleep will likely give insights into many aspects of health. In this talk I will outline our recent work on how the prefrontal cortex can signal to the hypothalamus to regulate sleep preparatory behaviours and sleep itself, and how other brain regions, including the ventral tegmental area, respond to psychosocial stress to induce beneficial sleep. I will also outline our work on examining the function of the glymphatic system, and whether clearance of molecules from the brain is enhanced during sleep or wakefulness.
Feedback-induced dispositional changes in risk preferences
Contrary to the original normative decision-making standpoint, empirical studies have repeatedly reported that risk preferences are affected by the disclosure of choice outcomes (feedback). Although no consensus has yet emerged regarding the properties and mechanisms of this effect, a widespread and intuitive hypothesis is that repeated feedback affects risk preferences by means of a learning effect, which alters the representation of subjective probabilities. Here, we ran a series of seven experiments (N = 538), tailored to decipher the effects of feedback on risk preferences. Our results indicate that the presence of feedback consistently increases risk-taking, even when the risky option is economically less advantageous. Crucially, risk-taking increases just after the instructions, before participants experience any feedback. These results challenge the learning account, and advocate for a dispositional effect, induced by the mere anticipation of feedback information. Epistemic curiosity and regret avoidance may drive this effect in partial and complete feedback conditions, respectively.
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and at the level of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
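To see how the choice of comparison measure alone can change a conclusion, here is a small synthetic example (an illustration under simple assumptions, not the authors' framework): the same target representation is compared with two candidate models, both of which are perfect matches under a linear-predictivity measure yet differ sharply under representational similarity analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_units = 200, 30
target = rng.normal(size=(n_stimuli, n_units))                 # "biological" responses

# Candidate A: rotated copy of the target (identical geometry).
rot, _ = np.linalg.qr(rng.normal(size=(n_units, n_units)))
cand_a = target @ rot
# Candidate B: target with one axis strongly amplified (distorted geometry,
# but still perfectly linearly predictive).
scale = np.ones(n_units)
scale[0] = 8.0
cand_b = target * scale

def linear_fit_r2(X, Y):
    """Linear predictivity: variance of Y explained by a least-squares map from X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return 1 - resid.var() / Y.var()

def rsa_corr(X, Y):
    """RSA: correlation between the two representational dissimilarity structures."""
    def rdm(Z):
        d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return d[np.triu_indices_from(d, k=1)]
    return np.corrcoef(rdm(X), rdm(Y))[0, 1]

for name, cand in [("rotated", cand_a), ("stretched", cand_b)]:
    print(f"{name:9s}  linear R^2={linear_fit_r2(cand, target):.2f}"
          f"  RSA r={rsa_corr(cand, target):.2f}")
```

Both candidates score R^2 near 1 under the regression-based measure, but only the rotated candidate preserves the representational geometry, so the two methodologies would rank the models differently.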
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might facilitate the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and towards the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Beyond Homogeneity: Characterizing Brain Disorder Heterogeneity through EEG and Normative Modeling
Electroencephalography (EEG) has been thoroughly studied for decades in psychiatry research, yet its integration into clinical practice as a diagnostic/prognostic tool remains unachieved. We hypothesize that a key reason is underlying patient heterogeneity, which is overlooked in psychiatric EEG research relying on a case-control approach. We combine HD-EEG with normative modeling to quantify this heterogeneity using two well-established and extensively investigated EEG characteristics (spectral power and functional connectivity) across a cohort of 1674 patients with attention-deficit/hyperactivity disorder, autism spectrum disorder, learning disorder, or anxiety, and 560 matched controls. Normative models showed that deviations from population norms among patients were highly heterogeneous and frequency-dependent. The spatial overlap of deviations across patients did not exceed 40% and 24% for spectral power and functional connectivity, respectively. Considering individual deviations in patients significantly enhanced comparative analyses, and the identification of patient-specific markers correlated with clinical assessments, representing a crucial step towards attaining precision psychiatry through EEG.
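For readers unfamiliar with normative modeling, the sketch below shows the basic logic on simulated data: a per-feature norm is fit on controls (here a simple linear age model), patients are expressed as deviation z-scores from that norm, and the spatial overlap of extreme deviations is computed across patients. The model form, threshold, and effect sizes are placeholders, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_controls, n_patients, n_features = 560, 1674, 100    # e.g. channel-by-band spectral features

age_c = rng.uniform(5, 18, n_controls)
age_p = rng.uniform(5, 18, n_patients)
eeg_c = 0.2 * age_c[:, None] + rng.normal(size=(n_controls, n_features))
eeg_p = 0.2 * age_p[:, None] + rng.normal(size=(n_patients, n_features))
# A minority of patients deviate on a subset of features (purely illustrative).
eeg_p[:, :10] += rng.normal(1.0, 1.0, size=(n_patients, 10)) * (rng.random((n_patients, 10)) < 0.3)

# Fit one normative model per feature on controls (intercept + age), then z-score patients.
X_c = np.column_stack([np.ones(n_controls), age_c])
X_p = np.column_stack([np.ones(n_patients), age_p])
coef, *_ = np.linalg.lstsq(X_c, eeg_c, rcond=None)
resid_sd = (eeg_c - X_c @ coef).std(axis=0)
z = (eeg_p - X_p @ coef) / resid_sd                     # deviation maps, patients x features

extreme = np.abs(z) > 2.0                               # "deviant" features per patient
overlap = extreme.mean(axis=0)                          # fraction of patients deviant per feature
print(f"max spatial overlap across patients: {overlap.max():.0%}")
```

Low overlap values, as reported in the abstract, indicate that patients deviate from the norm in largely non-overlapping ways, which is exactly what a case-control average would wash out.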
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Combined electrophysiological and optical recording of multi-scale neural circuit dynamics
This webinar will showcase new approaches for electrophysiological recordings using our silicon neural probes and surface arrays combined with diverse optical methods such as wide-field or 2-photon imaging, fiber photometry, and optogenetic perturbations in awake, behaving mice. Multi-modal recording of single units and local field potentials across cortex, hippocampus and thalamus, alongside calcium activity via GCaMP6F in cortical neurons in triple-transgenic animals or in hippocampal astrocytes via viral transduction, is brought to bear to reveal hitherto inaccessible and under-appreciated aspects of coordinated dynamics in the brain.
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Roles of inhibition in stabilizing and shaping the response of cortical networks
Inhibition has long been thought to stabilize the activity of cortical networks at low rates, and to significantly shape their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons of the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us in which conditions SST cells are required for network stabilization.
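For context, the hallmark of an inhibition-stabilized network is the paradoxical effect: extra drive to inhibitory cells lowers their steady-state rate. The sketch below reproduces this in a textbook two-population rate model; the weights and time constants are arbitrary illustrative values, not those constrained by the data discussed in the talk.

```python
import numpy as np

# Two-population (E, I) rate model with strong recurrent excitation (W_EE > 1),
# so the excitatory subnetwork alone would be unstable and inhibition stabilizes it.
W = np.array([[2.0, -1.5],    # onto E: from E, from I
              [2.5, -1.0]])   # onto I: from E, from I
tau = np.array([10.0, 5.0])   # time constants (ms)

def steady_state(ext, r0=(1.0, 1.0), dt=0.1, n_steps=20000):
    """Integrate rectified-linear rate dynamics to steady state for a given external input."""
    r = np.array(r0, float)
    for _ in range(n_steps):
        drive = W @ r + ext
        r += dt / tau * (-r + np.maximum(drive, 0.0))
    return r

baseline = steady_state(ext=np.array([2.0, 1.0]))
stim_I = steady_state(ext=np.array([2.0, 1.5]))       # extra drive to the inhibitory population
print(f"baseline rates      E={baseline[0]:.2f}  I={baseline[1]:.2f}")
print(f"with I-cell input   E={stim_I[0]:.2f}  I={stim_I[1]:.2f}  (I rate drops: paradoxical)")
```

With these parameters, adding input to the I population lowers both E and I steady-state rates, the signature used experimentally to infer inhibition stabilization.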
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' time percept. Here, we investigated how such time-manipulation techniques applied to an action affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimations of the presented scenes' perceived durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated compared to that of compressed and real-time scenes, as was the duration of real-time scenes compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found that scene type and editing technique interacted in their effects on time estimates, depending on the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration. Our findings are discussed in relation to current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.
Epileptic micronetworks and their clinical relevance
A core aspect of clinical epileptology revolves around relating epileptic field potentials to underlying neural sources (e.g. an “epileptogenic focus”). Yet how neural population activity relates to epileptic field potentials, and ultimately to clinical phenomenology, remains far from understood. After a brief overview of this topic, this seminar will focus on unpublished work, with an emphasis on seizure-related focal spreading depression. The presented results will include hippocampal and neocortical chronic in vivo two-photon population imaging and local field potential recordings of epileptic micronetworks in mice, in the context of viral encephalitis or optogenetic stimulation. The findings are corroborated by invasive depth electrode recordings (macroelectrodes and BF microwires) in epilepsy patients during pre-surgical evaluation. The presented work carries general implications for clinical epileptology and basic epilepsy research.
Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
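One simple way to quantify the orthogonalization described above is to correlate position-matched population vectors across the two track variants and ask how that similarity changes with learning. The sketch below does this on simulated population activity in which the shared-versus-track-specific mixture is set by hand; it is a hypothetical measurement recipe on synthetic data, not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 1000, 40

shared = rng.normal(size=(n_bins, n_neurons))             # map common to both tracks
track_specific = rng.normal(size=(2, n_bins, n_neurons))  # maps unique to each track

def simulate(weight_shared):
    """Population activity on each track: a mixture of shared and track-specific maps."""
    w = weight_shared
    return [w * shared + (1 - w) * track_specific[k] for k in (0, 1)]

def cross_track_similarity(acts):
    """Mean correlation of population vectors at matched positions across tracks."""
    a, b = acts
    return np.mean([np.corrcoef(a[i], b[i])[0, 1] for i in range(n_bins)])

early = simulate(weight_shared=0.9)   # early learning: maps are nearly identical
late = simulate(weight_shared=0.1)    # after learning: maps are nearly orthogonal
print(f"early cross-track similarity: {cross_track_similarity(early):.2f}")
print(f"late  cross-track similarity: {cross_track_similarity(late):.2f}")
```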
Dyslexia, Rhythm, Language and the Developing Brain
Recent insights from auditory neuroscience provide a new perspective on how the brain encodes speech. Using these recent insights, I will provide an overview of key factors underpinning individual differences in children’s development of language and phonology, providing a context for exploring atypical reading development (dyslexia). Children with dyslexia are relatively insensitive to acoustic cues related to speech rhythm patterns. This lack of rhythmic sensitivity is related to the atypical neural encoding of rhythm patterns in speech by the brain. I will describe our recent data from infants as well as children, demonstrating developmental continuity in the key neural variables.
Of glia and macrophages, signaling hubs in development and homeostasis
We are interested in the biology of macrophages, which represent the first line of defense against pathogens. In Drosophila, the embryonic hemocytes arise from the mesoderm whereas glial cells arise from multipotent precursors in the neurogenic region. These cell types represent, respectively, the macrophages located outside and within the nervous system (similar to vertebrate microglia). Thus, despite their different origins, hemocytes and glia display common functions. In addition, both cell types express the Glide/Gcm transcription factor, which plays an evolutionarily conserved role as an anti-inflammatory factor. Moreover, embryonic hemocytes play an evolutionarily conserved and fundamental role in development. The ability to migrate and to contact different tissues/organs most likely allows macrophages to function as signaling hubs. The function of macrophages beyond the recognition of non-self calls for revisiting the biology of these heterogeneous and plastic cells in physiological and pathological conditions across evolution.
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
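To give a feel for the neuron-as-controller framing, here is a deliberately simplified, assumption-laden toy (not the authors' data-driven control framework): a single unit estimates the unknown gain of the loop from its output back to its input using only observed input-output changes, and then adjusts its output so that its input tracks a setpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "environment": the unit's input depends on an external disturbance and,
# through an unknown negative loop gain, on the unit's own output.
env_gain = -0.8
disturbance = 2.0
target_input = 1.0            # setpoint the unit tries to hold its input at

gain_estimate = -0.1          # crude, data-driven estimate of the loop gain
output = 0.0
prev_output, prev_input = 0.0, None

for t in range(200):
    input_now = disturbance + env_gain * output + 0.05 * rng.normal()

    # Estimate the environment from observed (output change -> input change) pairs,
    # only when the output moved enough for the ratio to be informative.
    if prev_input is not None and abs(output - prev_output) > 0.1:
        observed = (input_now - prev_input) / (output - prev_output)
        gain_estimate += 0.2 * (observed - gain_estimate)
        gain_estimate = min(gain_estimate, -0.05)   # assume the sign of the loop is known

    prev_output, prev_input = output, input_now

    # Act as a feedback controller: adjust output to push the input toward the setpoint.
    error = target_input - input_now
    output += 0.5 * error / gain_estimate

print(f"input settles near {input_now:.2f} (target {target_input:.1f})")
```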
Seizure control by electrical stimulation: parameters and mechanisms
Seizure suppression by deep brain stimulation (DBS) applies high-frequency stimulation (HFS) to grey matter to block seizures. In this presentation, I will present the results of a different method that employs low-frequency stimulation (LFS, 1 to 10 Hz) of white matter tracts to prevent seizures. The approach has been shown to be effective in the hippocampus for mesial temporal lobe seizures by stimulating the ventral and dorsal hippocampal commissure in animal and human studies, respectively. A similar stimulation paradigm has been shown to be effective at controlling focal cortical seizures in rats with corpus callosum stimulation. This stimulation targets the axons of the corpus callosum innervating the focal zone at low frequencies (5 to 10 Hz) and has been shown to significantly reduce both seizure and spike frequency. The mechanisms of this suppression paradigm have been elucidated with in-vitro studies and involve the activation of two long-lasting inhibitory potentials, GABAB and the sAHP. LFS mechanisms are similar in both hippocampal and cortical brain slices. Additionally, the results show that LFS does not block seizures but rather decreases the excitability of the tissue to prevent them. Three methods of seizure suppression (LFS applied to fiber tracts, HFS applied to the focal zone, and stimulation of the anterior nucleus of the thalamus, ANT) were compared directly in the same animal in an in-vivo epilepsy model. The results indicate that LFS generated a significantly higher level of suppression, indicating that LFS of white matter tracts could be a useful addition as a stimulation paradigm for the treatment of epilepsy.
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating different areas on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Using Adversarial Collaboration to Harness Collective Intelligence
There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).
Title: Brain-optimized inference improves reconstructions of fMRI brain activity
Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas.
Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab.
Paper link: https://arxiv.org/abs/2312.07705
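The iterative procedure described in the abstract can be summarized in a few lines. Below is a toy-scale, runnable sketch in which "images" are plain vectors, the diffusion model is replaced by Gaussian jitter around the current reconstruction, and the encoding model is a fixed random linear map; these stand-ins are assumptions chosen only to keep the control flow concrete, and the loop is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
image_dim, voxel_dim = 32, 64
encoder = rng.normal(size=(image_dim, voxel_dim))       # stand-in encoding model

true_image = rng.normal(size=image_dim)
measured_activity = true_image @ encoder + 0.1 * rng.normal(size=voxel_dim)

reconstruction = rng.normal(size=image_dim)             # seed from a "base decoder"
stochasticity = 1.0
while stochasticity > 0.05:                             # stand-in stopping criterion
    # 1. Sample a small library conditioned on the current reconstruction.
    library = reconstruction + stochasticity * rng.normal(size=(16, image_dim))
    # 2. Keep the sample whose predicted activity best matches the measurement.
    errors = np.linalg.norm(library @ encoder - measured_activity, axis=1)
    reconstruction = library[np.argmin(errors)]
    # 3. Narrow the image distribution before the next iteration.
    stochasticity *= 0.8
    print(f"width {stochasticity:.2f}  reconstruction error "
          f"{np.linalg.norm(reconstruction - true_image):.2f}")
```

Even with these crude stand-ins, the reconstruction error shrinks across iterations as the sampling distribution tightens around brain-consistent candidates, which is the intuition behind the method.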
Piecing together the puzzle of emotional consciousness
Conscious emotional experiences are very rich in their nature, and can encompass anything ranging from the most intense panic when facing immediate threat, to the overwhelming love felt when meeting your newborn. It is then no surprise that capturing all aspects of emotional consciousness, such as intensity, valence, and bodily responses, into one theory has become the topic of much debate. Key questions in the field concern how we can actually measure emotions and which type of experiments can help us distill the neural correlates of emotional consciousness. In this talk I will give a brief overview of theories of emotional consciousness and where they disagree, after which I will dive into the evidence proposed to support these theories. Along the way I will discuss to what extent studying emotional consciousness is ‘special’ and will suggest several tools and experimental contrasts we have at our disposal to further our understanding on this intriguing topic.
Inducing short to medium neuroplastic effects with Transcranial Ultrasound Stimulation
Sound waves can be used to modify brain activity safely and transiently, with unprecedented precision even deep in the brain, unlike traditional brain stimulation methods. In a series of studies in humans and non-human primates, I will show that Transcranial Ultrasound Stimulation (TUS) can have medium- to long-lasting effects. Multiple read-outs allow us to conclude that TUS can perturb neuronal tissues up to 2 h after the intervention, including changes in local and distributed brain network configurations, behavioural changes, task-related neuronal changes, and chemical changes in the sonicated focal volume. Combined with multiple neuroimaging techniques (resting state functional Magnetic Resonance Imaging [rsfMRI], Spectroscopy [MRS] and task-related fMRI changes), this talk will focus on recent human TUS studies.
Neuromodulation of subjective experience
Many psychoactive substances are used with the aim of altering experience, e.g. as analgesics, antidepressants or antipsychotics. These drugs act on specific receptor systems in the brain, including the opioid, serotonergic and dopaminergic systems. In this talk, I will summarise human drug studies targeting opioid receptors and their role in human experience, with a focus on the experience of pain, stress, mood, and social connection. Because of their potential to cause addiction, opioids are indicated only for analgesia; when these regulations were introduced, their other known effects were relegated to side effects. This may be the cause of the prevalent myth that opioids are the most potent painkillers, despite evidence from head-to-head trials, Cochrane reviews and network meta-analyses that opioids are not superior to non-opioid analgesics in the treatment of acute or chronic non-cancer pain. However, due to the variability and diversity of opioid effects across contexts and experiences, some people under some circumstances may indeed benefit from prolonged treatment. I will present data on individual differences in opioid effects due to participant sex and stress induction. Understanding the effects of these commonly used medications on other aspects of human experience is important to ensure correct use and to prevent unnecessary pain and addiction risk.
Neuroinflammation in Epilepsy: what have we learned from human brain tissue specimens ?
Epileptogenesis is a gradual and dynamic process leading to difficult-to-treat seizures. It involves several cellular, molecular, and pathophysiological mechanisms, including the activation of inflammatory processes. The use of human brain tissue represents a crucial strategy to advance our understanding of the underlying neuropathology and the molecular and cellular basis of epilepsy and related cognitive and behavioral comorbidities. The mounting evidence obtained during the past decade has emphasized the critical role of inflammation in the pathophysiological processes implicated in a large spectrum of genetic and acquired forms of focal epilepsies. Dissecting the cellular and molecular mediators of the pathological immune responses, and their convergent and divergent mechanisms, is a major prerequisite for delineating their role in the establishment of epileptogenic networks. The role of small regulatory molecules involved in the regulation of specific pro- and anti-inflammatory pathways and the crosstalk between neuroinflammation and oxidative stress will be addressed. The observations supporting the activation of both innate and adaptive immune responses in human focal epilepsy will be discussed and elaborated, highlighting specific inflammatory pathways as potential targets for antiepileptic, disease-modifying therapeutic strategies.
Use of brain imaging data to improve prescriptions of psychotropic drugs - Examples of ketamine in depression and antipsychotics in schizophrenia
The use of molecular imaging, particularly PET and SPECT, has significantly transformed the treatment of schizophrenia with antipsychotic drugs since the late 1980s. It has offered insights into the links between drug target engagement, clinical effects, and side effects. A therapeutic window for receptor occupancy is established for antipsychotics, yet there is a divergence of opinions regarding the importance of blood levels, with many downplaying their significance. As a result, the role of therapeutic drug monitoring (TDM) as a personalized therapy tool is often underrated. Since molecular imaging of antipsychotics has focused almost entirely on D2-like dopamine receptors and their potential to control positive symptoms, negative symptoms and cognitive deficits are hardly or not at all investigated. Alternative methods have been introduced, i.e. to investigate the correlation between receptor occupancies approximated from blood levels and cognitive measures. Within the domain of antidepressants, and specifically regarding ketamine's efficacy in depression treatment, there is limited comprehension of the association between plasma concentrations and target engagement. The measurement of AMPA receptors in the human brain has added a new level of comprehension regarding ketamine's antidepressant effects. To ensure precise prescription of psychotropic drugs, it is vital to have a nuanced understanding of how molecular and clinical effects interact. Clinician scientists are tasked with integrating these indispensable pharmacological insights into practice, thereby ensuring a rational and effective approach to the treatment of mental health disorders and signaling a new era of personalized drug therapy based on mechanisms that promote neuronal plasticity not only under pathological conditions, but also in the healthy aging brain.
Brain Connectivity Workshop
Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.
Self as Processes (BACN Mid-career Prize Lecture 2023)
An understanding of the self helps explain not only human thoughts, feelings, and attitudes, but also many aspects of everyday behaviour. This talk focuses on one viewpoint: self as processes. This viewpoint emphasizes the dynamics of the self, which best connect with the development of the self over time and with its realist orientation. We are combining psychological experiments and data mining to understand the stability and adaptability of the self across various populations. In this talk, I draw on evidence from experimental psychology, cognitive neuroscience, and machine learning approaches to demonstrate why and how self-association affects cognition, and how it is modulated by various social experiences and situational factors.
Anticipating behaviour through working memory (BACN Early Career Prize Lecture 2023)
Working memory is about the past but for the future. Adopting such a future-focused perspective shifts the narrative of working memory from a limited-capacity storage system to an anticipatory buffer that helps us prepare for potential and sequential upcoming behaviour. In my talk, I will present a series of our recent studies that have started to reveal emerging principles of a working memory that looks forward, highlighting, amongst others, how selective attention plays a vital role in prioritising internal contents for behaviour, and the bi-directional links between visual working memory and action. These studies show how studying the dynamics of working memory, selective attention, and action together paves the way for an integrated understanding of how the mind serves behaviour.
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
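As a minimal numerical illustration of two of the model families discussed in the Review, the sketch below computes shortest-path lengths and a communicability-style diffusion measure on a small synthetic weighted connectome; the weight-to-length conversion and degree normalization are common conventions chosen here for illustration, not recommendations from the Review itself.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
n = 20
A = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse weighted "connectome"
A = np.triu(A, 1)
A = A + A.T                                           # undirected, no self-connections

# Shortest-path model: stronger weights are treated as shorter "lengths" (one convention).
lengths = np.full((n, n), np.inf)
mask = A > 0
lengths[mask] = 1.0 / A[mask]
np.fill_diagonal(lengths, 0.0)
spl = shortest_path(lengths, method="D", directed=False)   # Dijkstra on the length matrix

# Communicability: a diffusion-like measure summing over walks of all lengths
# (matrix exponential of the degree-normalized adjacency matrix).
deg = A.sum(axis=1)
deg[deg == 0] = 1.0
norm_A = A / np.sqrt(np.outer(deg, deg))
comm = expm(norm_A)

off_diag = ~np.eye(n, dtype=bool)
finite = np.isfinite(spl) & off_diag
print(f"mean shortest path length (1/weight metric): {spl[finite].mean():.2f}")
print(f"mean off-diagonal communicability:           {comm[off_diag].mean():.3f}")
```

The contrast between the two measures mirrors the conceptual distinction drawn in the Review: routing along optimal paths versus broadcast-like diffusion over all available walks.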
Attending to the ups and downs of Lewy body dementia: An exploration of cognitive fluctuations
Dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD) share similarities in pathology and clinical presentation and come under the umbrella term of Lewy body dementias (LBD). Fluctuating cognition is a key symptom in LBD and manifests as altered levels of alertness and attention, with a marked difference between best and worst performance. Cognition and alertness can change over seconds to minutes, or over hours to days of obtundation. Cognitive fluctuations can significantly affect the quality of life of people with LBD, potentially contribute to the exacerbation of other transient symptoms (for example, hallucinations and psychosis), and make it difficult to measure cognitive effect-size benefits in clinical trials of LBD. However, this significant symptom in LBD is poorly understood. In my presentation I will discuss the phenomenology of cognitive fluctuations, how we can measure them clinically, and the limitations of these approaches. I will then outline the work of our group and others, which has focused on unpicking the aetiological basis of cognitive fluctuations in LBD using a variety of imaging approaches (e.g. SPECT, sMRI, fMRI and EEG). I will then briefly explore future research directions.
Freeze or flee ? New insights from rodent models of autism
Individuals with certain types of autism spectrum disorder often exhibit impaired cognitive function alongside enhanced emotional symptoms and mood lability. However, current understanding of the pathogenesis of autism and intellectual disabilities is based primarily on studies of the hippocampus and cortex, brain areas involved in cognitive function. Yet these disorders are also associated with strong emotional symptoms, which are likely to involve changes in the amygdala and other brain areas. In this talk I will highlight these issues by presenting analyses in rat models of ASD/ID lacking Nlgn3 and Fmr1 (the gene causing Fragile X Syndrome). In addition to identifying new circuit and cellular alterations underlying divergent patterns of fear expression, these findings also suggest novel therapeutic strategies.
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still-poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a non-face early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly upon the introduction of vision into their lives.
Present and future of the diagnostic work-up of multiple sclerosis: the imaging perspective
The development of visual experience
Vision and visual cognition are experience-dependent, with likely multiple sensitive periods, yet we know very little about the statistics of visual experience at the scale of everyday life and how they might change with development. By traditional assumptions, the world at the massive scale of daily life presents much the same visual statistics to all perceivers. I will present an overview of our work on egocentric vision showing that this is not the case. The momentary image received at the eye is spatially selective, depending on the location, posture and behavior of the perceiver. If a perceiver's location, possible postures and/or preferences for looking at some kinds of scenes over others are constrained, then their sampling of images from the world, and thus the visual statistics at the scale of daily life, could be biased. I will present evidence on both low-level and higher-level visual statistics concerning developmental changes in the visual input over the first 18 months after birth.
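As a hedged illustration of what a "low-level visual statistic" can look like in practice (this is not the speaker's pipeline), the short Python sketch below estimates the slope of the radially averaged log amplitude spectrum of a single image frame, a standard 1/f-type summary statistic; the input here is a random placeholder array standing in for a head-camera frame.

```python
import numpy as np

def amplitude_spectrum_slope(image):
    """Slope of the radially averaged log amplitude spectrum vs
    log spatial frequency -- one simple low-level image statistic."""
    img = np.asarray(image, dtype=float)
    img -= img.mean()
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    # radial average of the amplitude spectrum
    radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)       # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope

# Placeholder frame; a real analysis would load egocentric camera images
frame = np.random.default_rng(0).standard_normal((256, 256))
print(amplitude_spectrum_slope(frame))
```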
Consciousness in the age of mechanical minds
We are now clearly entering a new age in our relationship with machines. The power of AI natural-language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become, or perhaps already are, sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings and know about their own existence and about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy-processing system is more likely to yield answers to these fundamental questions and to help us understand how and when machines might become minds.
Why spikes?
On a fast timescale, neurons mostly interact by short, stereotypical electrical impulses or spikes. Why? A common answer is that spikes are useful for long-distance communication, to avoid alterations while traveling along axons. But as it turns out, spikes are seen in many places outside neurons: in the heart, in muscles, in plants and even in protists. From these examples, it appears that action potentials mediate some form of coordinated action, a timed event. From this perspective, spikes should not be seen simply as noisy implementations of underlying continuous signals (a sort of analog-to-digital conversion), but rather as events or actions. I will give a number of examples of functional spike-based interactions in living systems.
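To make the "spikes as events" framing concrete, here is a minimal Python sketch (an assumption-laden toy, not any model from the talk) of a leaky integrate-and-fire unit: a continuous input current is converted into a sparse list of spike times, i.e. threshold-crossing events, rather than into a sampled analog copy of the signal. All parameter values are arbitrary illustrations.

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=0.02, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: turns a continuous drive into a sparse
    list of spike *times* (events), not a sampled analog signal."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += dt * (-v / tau + current)      # leaky integration
        if v >= threshold:                  # threshold crossing = event
            spikes.append(i * dt)
            v = v_reset
    return spikes

# Hypothetical drive: a step increase of input halfway through one second
t = np.arange(0.0, 1.0, 1e-3)
drive = np.where(t > 0.5, 80.0, 20.0)
print(lif_spike_times(drive))               # spikes only appear after the step
```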
Auditory input to the basal ganglia; Deep brain stimulation and action-stopping: A cognitive neuroscience perspective on the contributions of fronto-basal ganglia circuits to inhibitory control
On Thursday, May 25th we will host Darcy Diesburg and Mark Richardson. Darcy Diesburg, PhD, is a postdoctoral research fellow at Brown University. She will tell us about “Deep brain stimulation and action-stopping: A cognitive neuroscience perspective on the contributions of fronto-basal ganglia circuits to inhibitory control”. Mark Richardson, MD, PhD, is the Director of Functional Neurosurgery at Massachusetts General Hospital, Charles Pappas Associate Professor of Neurosciences at Harvard Medical School, and Visiting Associate Professor of Brain and Cognitive Sciences at MIT. Besides his scientific presentation on “Auditory input to the basal ganglia”, he will give us a glimpse of the “Person behind the science”. The talks will be followed by a shared discussion. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!
A new approach to inferring the eigenspectra of high-dimensional neural representations
COSYNE 2022
Normative models of spatio-spectral decorrelation in natural scenes predict experimentally observed ratio of PR types
COSYNE 2022
Regionally distinct striatal circuits support broadly opponent aspects of action suppression and production
COSYNE 2022
Sparse coding predicts a spectral bias in the development of V1 receptive fields
COSYNE 2022
Thalamic head-direction cells are organized irrespective of their inputs
COSYNE 2022
A White Matter Ephaptic Coupling Model for 1/f Spectral Densities
COSYNE 2022
Detecting rhythmic spiking through the power spectra of point process model residuals
COSYNE 2023
Eigenvalue spectral properties of sparse random matrices for neural networks
COSYNE 2023
Machine learning of functional network and molecular mechanisms in autism spectrum disorder subtypes
COSYNE 2023
The scale-invariant covariance spectrum of brain-wide activity in larval zebrafish
COSYNE 2023
Spectral learning of Bernoulli latent dynamical system models for decision-making
COSYNE 2023
Thalamic head-direction cells are organized irrespective of their inputs
COSYNE 2023
On the benefits of analog spikes: an information efficiency perspective
COSYNE 2025
Covariance spectrum in nonlinear recurrent neural networks and transition to chaos
COSYNE 2025
Envelope representations substantially enhance the predictive power of spectrotemporal receptive models in the human auditory cortex
COSYNE 2025
Humans can use positive and negative spectrotemporal correlations to detect rising and falling pitch
COSYNE 2025
Prospective and retrospective coding in cortical neurons
COSYNE 2025
A prospective code for value in the serotonin system
COSYNE 2025
Spectral analysis of representational similarity with limited neurons
COSYNE 2025
The anti-reward center in Autism Spectrum Disorders (ASDs)
FENS Forum 2024
Brain activation patterns in patients with autism spectrum disorder in pain-related perspective-taking: Relationship with interoceptive accuracy
FENS Forum 2024
Changes in frequency and amplitude of hippocampal theta modulate firing across neurons with respect to behavioural context
FENS Forum 2024
Changes in striatal spiny projection neurons’ properties and circuitry in a mouse model of autism spectrum disorder with cholinergic interneuron dysfunction
FENS Forum 2024
Characterization of the transcriptional landscape of endogenous retroviruses at the fetal-maternal interface in a mouse model of autism spectrum disorder
FENS Forum 2024
Characterizing age-related cognitive-motor interactions in individuals with and without autism spectrum disorder using mobile brain-body imaging (MoBI)
FENS Forum 2024
Characterizing double-negative neuromyelitis optica spectrum disorder: A meta-analysis
FENS Forum 2024
Clinical and genetic spectrum of childhood-onset leukodystrophies: Findings from an in-house targeted gene panel study
FENS Forum 2024
Cognitive improvement up to 4 years after cochlear implantation in older adults: A prospective longitudinal study using the RBANS-H
FENS Forum 2024
Communication through social touch in autism spectrum condition
FENS Forum 2024
Deciphering the neurodevelopmental role of the brain secretome in Autism Spectrum Disorder
FENS Forum 2024
Development and intergenerational perspectives on corticolimbic brain structures during childhood
FENS Forum 2024
Effect of ENERGI in valproate-induced animal with autism spectrum disorder
FENS Forum 2024
Electrophysiologic, transcriptomic, and morphologic plasticity of spinal inhibitory neurons to decipher atypical mechanosensory perception in Autism Spectrum Disorder
FENS Forum 2024
Employing CBF-SPECT imaging to examine the activation of brain regions across different stages of helping behavior in mice
FENS Forum 2024
Why spikes? A synaptic transmission perspective
Bernstein Conference 2024