Diffusion
Probing White Matter Microstructure With Diffusion-Weighted MRI: Techniques and Applications in ADRD
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to new kinds of immersive player experiences. This talk focuses on recent advances by my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Diffusion models were previously shown to have impressive image generation capabilities; I present insights that unlock their application to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project that takes ideas from language modeling to build a generative sequence model of an Xbox game.
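As a minimal sketch of the sampling side of such a diffusion model (not the talk's actual architecture): here the "behavior" distribution is a 1D Gaussian whose noised score is available in closed form, so no trained network is involved; `mu_a`, `sigma_a`, and all schedule values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "behavior" distribution: a 1D Gaussian action distribution, so the score
# of the noised marginal is available in closed form and no network is needed.
mu_a, sigma_a = 2.0, 0.5

def score(x, t):
    """Score (gradient of log-density) of the data convolved with N(0, t) noise."""
    return (mu_a - x) / (sigma_a**2 + t)

def reverse_diffusion_sample(n=2000, t_max=4.0, n_steps=400):
    """Draw samples by integrating the reverse-time diffusion SDE."""
    dt = t_max / n_steps
    # Initialize from the fully noised marginal N(mu_a, sigma_a^2 + t_max)
    x = mu_a + np.sqrt(sigma_a**2 + t_max) * rng.standard_normal(n)
    t = t_max
    for _ in range(n_steps):
        # Euler-Maruyama step of the reverse SDE: dx = score(x, t) dt + dW
        x = x + score(x, t) * dt + np.sqrt(dt) * rng.standard_normal(n)
        t -= dt
    return x

samples = reverse_diffusion_sample()
m, s = samples.mean(), samples.std()
```

Running the reverse process recovers the original action distribution: `m` and `s` land near `mu_a` and `sigma_a`. In a real imitation-learning setting, the closed-form score would be replaced by a network trained on gameplay trajectories.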
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
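The refinement loop described in the abstract can be sketched in toy form. Everything here is a stand-in: "images" are plain vectors, the encoding model is a fixed random linear map, and the conditioned diffusion model is replaced by Gaussian sampling around the current reconstruction with a shrinking width.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, W):
    """Toy encoding model: a fixed linear map from image space to brain activity."""
    return W @ img

def brain_optimized_inference(seed, activity, W, n_iter=20,
                              library_size=32, width=1.0, decay=0.8,
                              min_width=1e-3):
    """Iteratively refine a seed reconstruction toward the measured activity."""
    current = seed
    for _ in range(n_iter):
        if width < min_width:          # stopping criterion on distribution width
            break
        # Small library sampled around the current reconstruction (the real
        # method samples from a diffusion model conditioned on the seed)
        library = current + width * rng.standard_normal((library_size, seed.size))
        candidates = np.vstack([current[None, :], library])
        # Keep the candidate that best approximates the measured activity
        errors = np.linalg.norm(candidates @ W.T - activity, axis=1)
        current = candidates[int(np.argmin(errors))]
        width *= decay                 # reduce stochasticity each iteration
    return current

# Toy problem: recover a ground-truth "image" from its encoded brain activity
d_img, d_brain = 8, 16
W = rng.standard_normal((d_brain, d_img))
truth = rng.standard_normal(d_img)
activity = encode(truth, W)
seed = rng.standard_normal(d_img)      # stand-in for a base decoder's output

refined = brain_optimized_inference(seed, activity, W)
err_seed = np.linalg.norm(encode(seed, W) - activity)
err_refined = np.linalg.norm(encode(refined, W) - activity)
```

Because the current reconstruction is always kept among the candidates, the alignment error is non-increasing across iterations, mirroring the paper's claim that refinement improves alignment to brain activity.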
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value, beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using Diffusion Tensor Imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
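The Hamiltonian Monte Carlo machinery mentioned above can be illustrated on a toy one-dimensional posterior, with a Gaussian standing in for a single epileptogenicity parameter; the target and all tuning values are hypothetical, not taken from the actual workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior: a 1D Gaussian stand-in for an epileptogenicity parameter
mu_post, sd_post = 1.5, 0.4

def logp(x):
    return -0.5 * ((x - mu_post) / sd_post) ** 2   # unnormalized log-density

def grad_logp(x):
    return -(x - mu_post) / sd_post**2

def hmc(n_samples=3000, step=0.1, n_leap=20, x0=0.0):
    """Minimal Hamiltonian Monte Carlo with leapfrog integration."""
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        p = rng.standard_normal()                  # resample momentum
        x_new, p_new = x, p
        # Leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step * grad_logp(x_new)
        for _ in range(n_leap - 1):
            x_new += step * p_new
            p_new += step * grad_logp(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_logp(x_new)
        # Metropolis accept/reject keeps the target distribution exact
        log_accept = (logp(x_new) - 0.5 * p_new**2) - (logp(x) - 0.5 * p**2)
        if np.log(rng.random()) < log_accept:
            x = x_new
        samples[i] = x
    return samples

draws = hmc()[500:]   # drop warm-up
```

The retained draws recover the posterior mean and spread, which is exactly the kind of output (point estimate plus confidence) the abstract describes for EZ estimation.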
Unique features of oxygen delivery to the mammalian retina
Like all neural tissue, the retina has a high metabolic demand, and requires a constant supply of oxygen. Second and third order neurons are supplied by the retinal circulation, whose characteristics are similar to brain circulation. However, the photoreceptor region, which occupies half of the retinal thickness, is avascular, and relies on diffusion of oxygen from the choroidal circulation, whose properties are very different, as well as the retinal circulation. By fitting diffusion models to oxygen measurements made with oxygen microelectrodes, it is possible to understand the relative roles of the two circulations under normal conditions of light and darkness, and what happens if the retina is detached or the retinal circulation is occluded. Most of this work has been done in vivo in rat, cat, and monkey, but recent work in the isolated mouse retina will also be discussed.
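As a toy version of the diffusion modeling described above, one can solve the steady-state oxygen profile across an avascular layer with uniform consumption, D P''(x) = Q, with the two circulations providing the boundary conditions. All parameter values below are illustrative, not physiological measurements.

```python
import numpy as np

# Steady-state 1D oxygen diffusion with uniform consumption Q in an avascular
# layer of thickness L: D * P''(x) = Q, with P(0) = P_chor (choroidal side)
# and P(L) = P_ret (retinal-circulation side). Illustrative values only.
D, Q, L = 1.0, 4.0, 1.0
P_chor, P_ret = 70.0, 20.0

n = 101
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Finite-difference solve of the boundary-value problem
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.full(n, Q / D * h**2)
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
b[0], b[-1] = P_chor, P_ret
P_num = np.linalg.solve(A, b)

# Analytic solution of the same problem: linear profile minus a parabola
P_exact = P_chor + (P_ret - P_chor) * x / L - (Q / (2 * D)) * x * (L - x)
err = np.max(np.abs(P_num - P_exact))
```

Because the exact solution is quadratic, the second-order finite-difference scheme reproduces it to rounding error; in practice the fitting runs the other way, with D and Q adjusted until the model profile matches microelectrode measurements.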
Does subjective time interact with the heart rate?
Decades of research have investigated the relationship between perception of time and heart rate, often with mixed results. In search of such a relationship, I will present my journey across two projects: from time perception in the realistic VR experience of crowded subway trips on the order of minutes (project 1), to the perceived duration of sub-second white noise tones (project 2). Heart rate had multiple concurrent relationships with subjective temporal distortions for the sub-second tones, while the effects were absent or weak for the supra-minute subway trips. What does the heart have to do with sub-second time perception? We addressed this question with a cardiac drift-diffusion model, demonstrating the sensory accumulation of temporal evidence as a function of heart rate.
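A minimal sketch of the idea that heart rate modulates the drift of a temporal accumulator: drift-diffusion trials are simulated at two hypothetical drift rates, with a faster "heart rate" mapped to a higher drift. The coupling and all parameter values are assumptions for illustration, not the talk's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(drift, threshold=1.0, noise=0.3, dt=1e-3, t_max=10.0):
    """Simulate one drift-diffusion trial; return the threshold-crossing time."""
    x, t = 0.0, 0.0
    while x < threshold and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

def mean_decision_time(drift, n_trials=200):
    return np.mean([first_passage_time(drift) for _ in range(n_trials)])

# Hypothetical coupling: the drift of the temporal accumulator scales with
# heart rate, so evidence for "time elapsed" accumulates faster at high HR
slow_hr_drift, fast_hr_drift = 0.5, 1.5
t_slow = mean_decision_time(slow_hr_drift)
t_fast = mean_decision_time(fast_hr_drift)
```

Under this assumption, a higher heart rate makes the accumulator reach threshold sooner, which would manifest as a systematic distortion of perceived sub-second duration.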
Unravelling bistable perception from human intracranial recordings
Discovering dynamical patterns from high-fidelity time series is typically a challenging task. In this talk, the timeseries data consist of neural recordings taken from the auditory cortex of human subjects who listened to sequences of repeated triplets of tones and reported their perception by pressing a button. Subjects reported spontaneous alternations between two auditory perceptual states (1-stream and 2-streams). We discuss a data-driven method, which leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features that correlated with subject-reported switching between perceptual states.
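A stripped-down version of part of this pipeline, applied to a synthetic oscillatory "recording": time-delay embedding lifts a scalar signal into a multivariate state, and dynamic mode decomposition reads the oscillation frequency off the eigenvalues of a fitted linear map. The diffusion-maps dimensionality-reduction step is omitted in this minimal sketch.

```python
import numpy as np

# Synthetic scalar "recording": one oscillatory mode at 2 Hz
dt = 0.01
t = np.arange(0.0, 10.0, dt)
true_freq = 2.0
signal = np.cos(2 * np.pi * true_freq * t)

def delay_embed(x, n_delays):
    """Stack time-shifted copies of x into a multivariate state (rows)."""
    rows = [x[i : len(x) - n_delays + i + 1] for i in range(n_delays)]
    return np.vstack(rows)

X = delay_embed(signal, n_delays=20)

# Dynamic mode decomposition: fit the linear map X2 ~= A X1, then read
# continuous-time frequencies off the eigenvalues of A
X1, X2 = X[:, :-1], X[:, 1:]
A = X2 @ np.linalg.pinv(X1, rcond=1e-8)   # rcond truncates the rank-2 system
eigvals = np.linalg.eigvals(A)
dominant = eigvals[np.argsort(-np.abs(eigvals))[:2]]
freqs = np.abs(np.imag(np.log(dominant))) / (2 * np.pi * dt)
```

The two dominant modes form a conjugate pair at the true 2 Hz; on real recordings the same eigenvalue readout identifies the dynamical features whose time courses can then be correlated with perceptual switches.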
Neural signature for accumulated evidence underlying temporal decisions
Cognitive models of timing often include a pacemaker analogue whose ticks are accumulated to form an internal representation of time, and a threshold that determines when a target duration has elapsed. However, clear EEG manifestations of these abstract components have not yet been identified. We measured the EEG of subjects while they performed a temporal bisection task in which they were requested to categorize visual stimuli as short or long in duration. We report an ERP component whose amplitude depends monotonically on the stimulus duration. The relation of the ERP amplitude and stimulus duration can be captured by a simple model, adapted from a known drift-diffusion model for time perception. It includes a noisy accumulator that starts with the stimulus onset and a threshold. If the threshold is reached during stimulus presentation, the stimulus is categorized as "long", otherwise the stimulus is categorized as "short". At the stimulus offset, a response proportional to the distance to the threshold is emitted. This simple model has two parameters that fit both the behavior and ERP amplitudes recorded in the task. Two subsequent experiments replicate and extend this finding to another modality (touch) as well as to different time ranges (subsecond and suprasecond), establishing the described ERP component as a useful handle on the cognitive processes involved in temporal decisions.
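The accumulator model described above can be sketched directly: a noisy accumulator starts at stimulus onset, a threshold crossing during the stimulus yields "long", and otherwise the distance to threshold at offset plays the role of the ERP amplitude. Drift, noise, and threshold values here are arbitrary illustrations, not fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def bisection_trial(duration, drift=1.0, noise=0.5, threshold=1.0, dt=1e-3):
    """Noisy accumulator started at stimulus onset.

    If the threshold is reached during the stimulus, respond "long";
    otherwise respond "short" and emit an ERP-like amplitude proportional
    to the remaining distance to threshold at stimulus offset."""
    x = 0.0
    for _ in range(int(duration / dt)):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if x >= threshold:
            return "long", 0.0
    return "short", threshold - x

def p_long(duration, n_trials=400):
    return np.mean([bisection_trial(duration)[0] == "long"
                    for _ in range(n_trials)])

p_short_stim = p_long(0.4)   # offset well before the typical crossing time
p_long_stim = p_long(1.6)    # long enough that the threshold is usually crossed
```

The same two parameters (drift and threshold, relative to the noise) jointly determine the psychometric function and the offset-locked amplitude, which is the model's account of why one ERP component tracks both behavior and stimulus duration.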
Mice identify subgoal locations through an action-driven mapping process
Mammals instinctively explore and form mental maps of their spatial environments. Models of cognitive mapping in neuroscience mostly depict map-learning as a process of random or biased diffusion. In practice, however, animals explore spaces using structured, purposeful, sensory-guided actions. We have used threat-evoked escape behavior in mice to probe the relationship between ethological exploratory behavior and abstract spatial cognition. First, we show that in arenas with obstacles and a shelter, mice spontaneously learn efficient multi-step escape routes by memorizing allocentric subgoal locations. Using closed-loop neural manipulations to interrupt running movements during exploration, we next found that blocking runs targeting an obstacle edge abolished subgoal learning. We conclude that mice use an action-driven learning process to identify subgoals, and these subgoals are then integrated into an allocentric map-like representation. We suggest a conceptual framework for spatial learning that is compatible with the successor representation from reinforcement learning and sensorimotor enactivism from cognitive science.
NMC4 Short Talk: Transient neuronal suppression for exploitation of new sensory evidence
Decision-making in noisy environments with constant sensory evidence involves integrating sequentially-sampled evidence, a strategy formalized by diffusion models and supported by decades of behavioral and neural findings. By contrast, it is unknown whether this strategy is also used during decision-making when the underlying sensory evidence is expected to change. Here, we trained monkeys to identify the dominant color of a dynamically refreshed checkerboard pattern that does not become informative until after a variable delay. Animals' behavioral responses were briefly suppressed after an abrupt change in evidence, and many neurons in the frontal eye field displayed a corresponding dip in activity at this time, similar to the dip frequently observed after stimulus onset. Generalized drift-diffusion models revealed that behavior and neural activity were consistent with a brief suppression of motor output without a change in evidence accumulation itself, in contrast to the popular belief that evidence accumulation is paused or reset. These results suggest that a brief interruption in motor preparation is an important strategy for dealing with changing evidence during perceptual decision making.
Transdiagnostic approaches to understanding neurodevelopment
Macroscopic brain organisation emerges early in life, even prenatally, and continues to develop through adolescence and into early adulthood. The emergence and continual refinement of large-scale brain networks, connecting neuronal populations across anatomical distance, allows for increasing functional integration and specialisation. This process is thought to be crucial for the emergence of complex cognitive processes. But how and why is this process so diverse? We used structural neuroimaging collected from a large, diverse cohort to explore how different features of macroscopic brain organisation are associated with diverse cognitive trajectories. We used diffusion-weighted imaging (DWI) to construct whole-brain white-matter connectomes. A simulated attack on each child's connectome revealed that some brain networks were strongly organized around highly connected 'hubs'. The more children's brains were critically dependent on hubs, the better their cognitive skills. Conversely, having poorly integrated hubs was a very strong risk factor for cognitive and learning difficulties across the sample. We subsequently developed a computational framework, using generative network modelling (GNM), to model the emergence of this kind of connectome organisation. Relatively subtle changes within the wiring rules of this computational framework give rise to differential developmental trajectories, because of small biases in the preferential wiring properties of different nodes within the network. Finally, we were able to use this GNM to implicate the molecular and cellular processes that govern these different growth patterns.
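A toy generative network model in the spirit described above: edges are added one at a time with probability proportional to a distance penalty times a topological preference term. Published GNMs use richer topological terms (e.g. homophily); the degree product used here is only an illustrative stand-in, and all values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random node positions stand in for anatomical locations
n_nodes = 40
coords = rng.uniform(0.0, 1.0, (n_nodes, 2))
iu, ju = np.triu_indices(n_nodes, k=1)
d = np.linalg.norm(coords[iu] - coords[ju], axis=1)   # pairwise distances

def grow(eta, gamma=1.0, n_edges=150):
    """Grow a network edge by edge under the wiring rule d^eta * (k_i k_j)^gamma."""
    connected = np.zeros(len(d), dtype=bool)
    deg = np.zeros(n_nodes)
    for _ in range(n_edges):
        # Wiring probability: distance penalty times degree preference
        w = d**eta * ((deg[iu] + 1.0) * (deg[ju] + 1.0))**gamma
        w[connected] = 0.0                      # no duplicate edges
        pick = rng.choice(len(d), p=w / w.sum())
        connected[pick] = True
        deg[iu[pick]] += 1
        deg[ju[pick]] += 1
    return connected

# A stronger distance penalty (more negative eta) yields shorter connections
mean_len_strong = d[grow(eta=-3.0)].mean()
mean_len_none = d[grow(eta=0.0)].mean()
```

Varying the two exponents produces systematically different connectome organisations from the same growth process, which is the sense in which "relatively subtle changes within the wiring rules give rise to differential developmental trajectories".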
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
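The serial-selection / parallel-recognition arithmetic of GS6 can be sketched for target-present trials. This is a deliberately crude sketch: guidance is omitted (selection order is random), the per-item recognition diffusion is replaced by a uniform duration around 150 ms, and the quitting signal is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def present_trial_rt(set_size, select_interval=0.05, recog_mean=0.15):
    """Target-present search RT under serial selection, parallel recognition.

    Attention selects one item every ~50 ms (here in random order, i.e. no
    guidance); each selected item then undergoes its own ~150 ms recognition
    process, so several items are in flight at once. The trial ends when the
    target's recognition completes."""
    target_rank = rng.integers(set_size)             # position in selection order
    selected_at = target_rank * select_interval
    recognition = recog_mean * (0.5 + rng.random())  # crude diffusion-time proxy
    return selected_at + recognition

def mean_rt(set_size, n_trials=2000):
    return np.mean([present_trial_rt(set_size) for _ in range(n_trials)])

rt_small, rt_large = mean_rt(4), mean_rt(16)
# The set-size slope comes from serial selection, not from recognition time
slope_ms_per_item = 1000 * (rt_large - rt_small) / (16 - 4)
```

Even though each recognition takes ~150 ms, the RT-by-set-size slope is set by the ~50 ms selection rate, which is the hybrid serial/parallel point the abstract makes.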
Swarms for people
As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.
Gap Junction Coupling between Photoreceptors
Simply put, the goal of my research is to describe the neuronal circuitry of the retina. The organization of the mammalian retina is certainly complex but it is not chaotic. Although there are many cell types, most adhere to a relatively constant morphology and they are distributed in non-random mosaics. Furthermore, each cell type ramifies at a characteristic depth in the retina and makes a stereotyped set of synaptic connections. In other words, these neurons form a series of local circuits across the retina. The next step is to identify the simplest and commonest of these repeating neural circuits. They are the building blocks of retinal function. If we think of it in this way, the retina is a fabulous model for the rest of the CNS. We are interested in identifying specific circuits and cell types that support the different functions of the retina. For example, there appear to be specific pathways for rod and cone mediated vision. Rods are used under low light conditions and rod circuitry is specialized for high sensitivity when photons are scarce (e.g. starlight when you’re out camping). The hallmark of the rod-mediated system is monochromatic vision. In contrast, the cone circuits are specialized for high acuity and color vision under relatively bright or daylight conditions. Individual neurons may be filled with fluorescent dyes under visual control. This is achieved by impaling the cell with a glass microelectrode using a 3D micromanipulator. We are also interested in the diffusion of dye through coupled neuronal networks in the retina. The dye filled cells are also combined with antibody labeling to reveal neuronal connections and circuits. This triple-labeled material may be viewed and reconstructed in 3 dimensions by multi-channel confocal microscopy. We have our own confocal microscope facility in the department and timeslots are available to students in my lab.
Spatio-temporal large-scale organization of the trimodal connectome derived from concurrent EEG-fMRI and diffusion MRI
While time-averaged dynamics of brain functional connectivity are known to reflect the underlying structural connections, the exact relationship between large-scale function and structure remains an unsolved issue in network neuroscience. Large-scale networks are traditionally observed by correlation of fMRI timecourses, and connectivity of source-reconstructed electrophysiological measures is less prominently studied. Accessing the brain by using multimodal recordings combining EEG, fMRI and diffusion MRI (dMRI) can help to refine the understanding of the spatio-temporal organization of both static and dynamic brain connectivity. In this talk I will discuss our prior findings that whole-brain connectivity derived from source-reconstructed resting-state (rs) EEG is linked to both the rs-fMRI and dMRI connectome. The EEG connectome provides complementary information to link function to structure as compared to an fMRI-only perspective. I will present an approach extending the multimodal data integration of concurrent rs-EEG-fMRI to the temporal domain by combining dynamic functional connectivity of both modalities to better understand the neural basis of functional connectivity dynamics. The close relationship between time-varying changes in EEG and fMRI whole-brain connectivity patterns provides evidence for spontaneous reconfigurations of the brain’s functional processing architecture. Finally, I will talk about data quality of connectivity derived from concurrent EEG-fMRI recordings and how the presented multimodal framework could be applied to better understand focal epilepsy. In summary, this talk will give an overview of how to integrate large-scale EEG networks with MRI-derived brain structure and function. In conclusion, EEG-based connectivity measures are not only closely linked to MRI-based measures of brain structure and function over different time-scales, but also provide complementary information on the function of the underlying brain organization.
Perception, attention, visual working memory, and decision making: The complete consort dancing together
Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find the capacity of VWM is well defined by a sample-size model, which attributes changes in VWM precision with set-size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight the model's strengths: It provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power-law of VWM, and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I provide a characterization of the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
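A minimal simulation of the circular diffusion model mentioned above: two-dimensional evidence drifts toward the stimulus angle, the response is the angle at which the process hits a circular criterion, and the RT is the hitting time. Parameter values are illustrative, not fitted to data.

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_diffusion_trial(stim_angle=0.0, drift_mag=1.5,
                             radius=1.0, noise=1.0, dt=1e-3):
    """One circular-diffusion trial: returns (response angle, decision time)."""
    mu = drift_mag * np.array([np.cos(stim_angle), np.sin(stim_angle)])
    x = np.zeros(2)
    t = 0.0
    while np.hypot(x[0], x[1]) < radius:   # absorb on the circular criterion
        x += mu * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    return np.arctan2(x[1], x[0]), t

trials = [circular_diffusion_trial() for _ in range(500)]
angles = np.array([a for a, _ in trials])

# Circular statistics of the continuous responses: the resultant vector's
# direction estimates the stimulus angle, its length the response precision
resultant = np.mean(np.exp(1j * angles))
mean_angle = np.angle(resultant)
precision = np.abs(resultant)
```

The continuous response distribution concentrates around the true stimulus angle with a precision controlled by drift magnitude relative to noise, which is what lets the model separate variable-precision from mixture accounts of the evidence.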
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to crucially contribute to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by a covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
Bayesian distributional regression models for cognitive science
The assumed data generating models (response distributions) of experimental or observational data in cognitive science have become increasingly complex over the past decades. This trend follows a revolution in model estimation methods and a drastic increase in computing power available to researchers. Today, higher-level cognitive functions can well be captured by and understood through computational cognitive models, a common example being drift diffusion models for decision processes. Such models are often expressed as the combination of two modeling layers. The first layer is the response distribution with corresponding distributional parameters tailored to the cognitive process under investigation. The second layer consists of latent models of the distributional parameters that capture how those parameters vary as a function of design, stimulus, or person characteristics, often in an additive manner. Such cognitive models can thus be understood as special cases of distributional regression models where multiple distributional parameters, rather than just a single centrality parameter, are predicted by additive models. Because of their complexity, distributional models are quite complicated to estimate, but recent advances in Bayesian estimation methods and corresponding software make them increasingly more feasible. In this talk, I will speak about the specification, estimation, and post-processing of Bayesian distributional regression models and how they can help to better understand cognitive processes.
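The two-layer structure can be illustrated with a minimal distributional regression: both the mean and the (log) standard deviation of a normal response distribution depend on a covariate, and all four coefficients are estimated jointly. This sketch fits by maximum likelihood for brevity; a Bayesian fit would put priors on the same parameters. Data and coefficient values are simulated, not from any real study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Layer 1: y ~ Normal(mu, sigma). Layer 2: both distributional parameters
# are (additive) functions of a covariate x: mu = b0 + b1*x,
# log(sigma) = g0 + g1*x  (log link keeps sigma positive).
n = 2000
x = rng.uniform(-1.0, 1.0, n)
true_b0, true_b1, true_g0, true_g1 = 1.0, 2.0, -0.5, 0.8
y = rng.normal(true_b0 + true_b1 * x, np.exp(true_g0 + true_g1 * x))

def nll(params):
    """Negative log-likelihood of the two-layer model."""
    b0, b1, g0, g1 = params
    mu = b0 + b1 * x
    sd = np.exp(g0 + g1 * x)
    return np.sum(0.5 * ((y - mu) / sd) ** 2 + np.log(sd))

fit = minimize(nll, np.zeros(4), method="BFGS")
b0, b1, g0, g1 = fit.x
```

A model that predicted only the mean would miss that the response variability itself changes with `x`; predicting every distributional parameter is exactly what distinguishes distributional regression from ordinary regression.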
Inertial active soft matter
Active particles which are self-propelled by converting energy into mechanical motion represent an expanding research realm in physics and chemistry. For micron-sized particles moving in a liquid (``microswimmers''), most of the basic features have been described by using the model of overdamped active Brownian motion [1]. However, for macroscopic particles or microparticles moving in a gas, inertial effects become relevant such that the dynamics is underdamped. Therefore, recently, active particles with inertia have been described by extending the active Brownian motion model to active Langevin dynamics which include inertia [2]. In this talk, recent developments of active particles with inertia (``microflyers'', ``hoppers'' or ``runners'') are summarized including: inertial delay effects between particle velocity and self-propulsion direction [3], tuning of the long-time self-diffusion by the moment of inertia [3], the influence of inertia on motility-induced phase separation and the cluster growth exponent [4], and the formation of active micelles (“rotelles”) by using inertial active surfactants [5]. References [1] C. Bechinger, R. di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, G. Volpe, Reviews of Modern Physics 88, 045006 (2016). [2] H. Löwen, Journal of Chemical Physics 152, 040901 (2020). [3] C. Scholz, S. Jahanshahi, A. Ldov, H. Löwen, Nature Communications 9, 5156 (2018). [4] S. Mandal, B. Liebchen, H. Löwen, Physical Review Letters 123, 228001 (2019). [5] C. Scholz, A. Ldov, T. Pöschel, M. Engel, H. Löwen, Surfactants and rotelles in active chiral fluids, to be published.
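A minimal sketch of active Langevin dynamics with inertia in the spirit of [2]: the velocity relaxes toward the self-propulsion direction on the inertial timescale m/gamma while the heading undergoes rotational diffusion. Translational thermal noise is omitted for brevity and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_langevin(n_part=200, t_total=5.0, dt=1e-3,
                    m=1.0, gamma=1.0, v0=1.0, D_r=2.0):
    """Ensemble of underdamped active particles in 2D.

    m dv/dt = -gamma (v - v0 n(theta)),  dtheta = sqrt(2 D_r) dW,
    i.e. active Brownian motion extended by an inertial (mass) term.
    Returns the ensemble mean squared displacement at each step."""
    n_steps = int(t_total / dt)
    pos = np.zeros((n_part, 2))
    vel = np.zeros((n_part, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi, n_part)
    msd = np.empty(n_steps)
    for i in range(n_steps):
        n_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        vel += -(gamma / m) * (vel - v0 * n_hat) * dt   # inertial relaxation
        pos += vel * dt
        theta += np.sqrt(2.0 * D_r * dt) * rng.standard_normal(n_part)
        msd[i] = np.mean(np.sum(pos**2, axis=1))
    return msd

msd = active_langevin()
# Early growth is ballistic-like (persistence plus inertial delay);
# at long times the motion becomes diffusive and the MSD keeps growing
msd_mid, msd_end = msd[len(msd) // 2], msd[-1]
```

Because the velocity lags the heading by roughly m/gamma, increasing the mass delays the response to reorientation, which is the "inertial delay" effect and the mechanism by which inertia tunes the long-time self-diffusion.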
Mixed active-passive suspensions: from particle entrainment to spontaneous demixing
Understanding the properties of active matter is a challenge that is currently driving a rapid growth in soft- and bio-physics. Some of the most important examples of active matter are at the microscale, and include active colloids and suspensions of microorganisms, both as a simple active fluid (single species) and as mixed suspensions of active and passive elements. In this last class of systems, recent experimental and theoretical work has started to provide a window into new phenomena including activity-induced depletion interactions, phase separation, and the possibility to extract net work from active suspensions. Here I will present our work on a paradigmatic example of a mixed active-passive system, where the activity is provided by swimming microalgae. Macroscopic and microscopic experiments reveal that microorganism-colloid interactions are dominated by rare close encounters leading to large displacements through direct entrainment. Simulations and theoretical modelling show that the ensuing particle dynamics can be understood in terms of a simple jump-diffusion process, combining standard diffusion with Poisson-distributed jumps. Entrainment length can be understood within the framework of Taylor dispersion as a competition between advection by the no-slip surface of the cell body and microparticle diffusion. Building on these results, we then ask how external control of the dynamics of the active component (e.g. induced microswimmer anisotropy/inhomogeneity) can be used to alter the transport of passive cargo. As a first step in this direction, we study the behaviour of mixed active-passive systems in confinement. The resulting spatial inhomogeneity in swimmers’ distribution and orientation has a dramatic effect on the spatial distribution of passive particles, with the colloids accumulating either towards the boundaries or towards the bulk of the sample depending on the size of the container. We show that this can be used to induce the system to de-mix spontaneously.
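The jump-diffusion picture reduces to a few lines: ordinary Brownian increments plus rare, Poisson-distributed entrainment jumps, with an effective one-dimensional diffusivity D_eff = D + (rate) L^2 / 2. All parameter values are illustrative, not fitted to the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: thermal diffusivity, encounter rate, jump length
D, jump_rate, jump_len = 0.05, 0.1, 5.0
t_total, dt, n_part = 10.0, 1e-2, 5000

x = np.zeros(n_part)
for _ in range(int(t_total / dt)):
    # ordinary Brownian increments (thermal diffusion)
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_part)
    # rare entrainment events: Poisson-distributed jumps of fixed length
    jumping = rng.random(n_part) < jump_rate * dt
    x += jumping * jump_len * rng.choice([-1.0, 1.0], n_part)

msd = np.mean(x**2)
# Effective diffusivity D_eff = D + jump_rate * jump_len**2 / 2,
# so MSD = 2 * D_eff * t in one dimension
msd_theory = 2.0 * (D + jump_rate * jump_len**2 / 2.0) * t_total
```

Even though jumps are rare (about one per particle over the whole run here), they dominate the effective transport whenever rate times L squared exceeds the bare diffusivity, which is why the rare entrainment events control colloid dispersal in the experiments.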
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual field” (FVFs): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.
How to simulate and analyze drift-diffusion models of timing and decision making
My talk will discuss the use of four simple Matlab functions to simulate models of timing, and to fit those models to empirical data. Feel free to examine the code, and the relatively brief book chapter that explains it, before the talk if you would like to learn more about computational/mathematical modeling.
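The talk's functions are in Matlab; as a rough Python analogue, a drift-diffusion timer can be simulated by accumulating noisy evidence to a threshold and recording the first-passage time. All parameter values below are illustrative assumptions, not those used in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm_timing(drift=1.0, noise=0.2, threshold=1.0,
                        dt=0.001, n_trials=500):
    """Simulate first-passage times of a drift-diffusion timer.

    The accumulator starts at 0, gains drift*dt per step plus Gaussian
    noise of standard deviation noise*sqrt(dt), and a temporal estimate
    is produced when it crosses `threshold`.
    """
    times = np.empty(n_trials)
    for t in range(n_trials):
        x, elapsed = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            elapsed += dt
        times[t] = elapsed
    return times

rts = simulate_ddm_timing()
# mean first-passage time of this process is threshold/drift = 1.0 s
```

Fitting such a model to data then amounts to adjusting drift, noise, and threshold so that the simulated first-passage-time distribution matches the empirical one.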
Mapping early brain network changes in neurodegenerative and cerebrovascular disorders: a longitudinal perspective
The spatial patterning of each neurodegenerative disease relates closely to a distinct structural and functional network in the human brain. This talk will mainly describe how brain network-sensitive neuroimaging methods such as resting-state fMRI and diffusion MRI can shed light on brain network dysfunctions associated with pathology and cognitive decline from preclinical to clinical dementia. I will first present our findings from two independent datasets on how amyloid and cerebrovascular pathology influence brain functional networks cross-sectionally and longitudinally in individuals with mild cognitive impairment and dementia. Evidence on longitudinal functional network organizational changes in healthy older adults and the influence of APOE genotype will be presented. In the second part, I will describe our work on how different pathologies influence brain structural networks and white matter microstructure. I will also touch on some new data on how brain network integrity contributes to behavior and disease progression using multivariate or machine learning approaches. These findings underscore the importance of studying selective brain network vulnerability rather than individual regions, and of longitudinal designs. Further developed with machine learning approaches, multimodal network-specific imaging signatures will help reveal disease mechanisms and facilitate early detection, prognosis, and the search for treatments of neuropsychiatric disorders.
Slowing down the body slows down time (perception)
Interval timing is a fundamental component of action, and is susceptible to motor-related temporal distortions. Previous studies have shown that movement biases temporal estimates, but they have primarily considered self-modulated movement. However, real-world encounters often include situations in which movement is restricted or perturbed by environmental factors. In the following experiments, we introduced viscous movement environments to externally modulate movement and investigated the resulting effects on temporal perception. In two separate tasks, participants timed auditory intervals while moving a robotic arm that randomly applied one of four levels of viscosity. Results demonstrated that higher viscosity led to shorter perceived durations. Using a drift-diffusion model and a Bayesian observer model, we confirmed that these biasing effects arose from perceptual mechanisms rather than from biases in decision making. These findings suggest that environmental perturbations are an important factor in movement-related temporal distortions, and enhance the current understanding of the interactions between motor activity and cognitive processes. https://www.biorxiv.org/content/10.1101/2020.10.26.355396v1
How does the cortex integrate conflicting time-information? A model of temporal averaging
In daily life, we consistently make decisions in pursuit of some goal. Many decisions are informed by multiple sources of information. Unfortunately, these sources often provide ambiguous information about what course of action to take. Therefore, determining how the brain integrates information to resolve this ambiguity is key to understanding the neural mechanisms of decision-making. In the domain of time, this topic can be studied by training subjects to predict when a future event will occur based on distinct cues (e.g., tone, light, etc.). If multiple cues are presented simultaneously and their cue-to-event intervals differ (e.g., tone-10s + light-30s), subjects will often expect the event to occur at the average of their intervals. This ‘temporal averaging’ effect is presumably how the timing system resolves ambiguous time-information. The neural mechanisms of temporal averaging are currently unclear. Here, we will propose how temporal averaging could emerge in cortical circuits using a simple modification of a ‘drift-diffusion’ model of timing.
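One minimal way such averaging could emerge in a drift-diffusion timer is sketched below. How the speaker's proposed modification actually combines the cues is not specified in the abstract; this sketch simply contrasts two candidate combination rules, with the tone-10s + light-30s example from the text.

```python
threshold = 1.0
T1, T2 = 10.0, 30.0          # trained cue-to-event intervals (s)

# Each cue calibrates a drift rate so the accumulator hits threshold
# exactly at its trained interval.
v1, v2 = threshold / T1, threshold / T2

# Candidate rule 1: average the drift rates when both cues are present.
# The combined accumulator then crosses threshold at the harmonic mean.
T_drift_avg = threshold / (0.5 * (v1 + v2))   # 15.0 s

# Candidate rule 2: average the interval estimates themselves, which
# yields the arithmetic mean typically reported behaviourally.
T_estimate_avg = 0.5 * (T1 + T2)              # 20.0 s
```

The two rules are behaviourally distinguishable (15 s vs. 20 s for these intervals), which is one way modelling can constrain the underlying cortical mechanism.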
Attentional Foundations of Framing Effects
Framing effects in individual decision-making have puzzled economists for decades because they are hard, if not impossible, to explain with rational choice theories. Why should mere changes in the description of a choice problem affect decision-making? Here, we examine the hypothesis that changes in framing cause changes in the allocation of attention to the different options – measured via eye-tracking – and give rise to changes in decision-making. We document that the framing of a sure alternative as a gain – as opposed to a loss – in a risk-taking task increases the attentional advantage of the sure option and induces a higher choice frequency of that option – a finding that is predicted by the attentional drift-diffusion model (aDDM). The model also correctly predicts other key findings, such as that the increased attentional advantage of the sure option in the gain frame should also lead to quicker decisions in this frame. In addition, the data reveal that increasing risk aversion at higher stake sizes may also be driven by attentional processes, because the sure option receives significantly more attention – regardless of frame – at higher stakes. We also corroborate the causal impact of framing-induced changes of attention on choice with an additional experiment that manipulates attention exogenously. Finally, to study the precise mechanisms underlying the framing effect, we structurally estimate an aDDM that allows for frame- and option-dependent parameters. The estimation results indicate that – in addition to the direct effects of framing-induced changes in attention on choice – the gain frame also causes (i) an increase in the attentional discount of the gamble and (ii) an increased concavity of utility. Our findings suggest that the traditional explanation of framing effects in risky choice in terms of a more concave value function in the gain domain is seriously incomplete, and that attentional mechanisms as hypothesized in the aDDM play a key role.
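The core aDDM mechanism referenced above can be sketched as follows, using the standard formulation in which the drift of the relative-value signal depends on which option is currently fixated and unattended values are discounted by a factor theta. All parameter values here are illustrative, not the authors' structural estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def addm_trial(v_sure, v_gamble, theta=0.3, d=0.002, sigma=0.02,
               p_start_sure=0.5, fix_dur=300, dt=1):
    """One aDDM trial (illustrative parameters).

    While fixating the sure option the drift is d*(v_sure - theta*v_gamble);
    while fixating the gamble the sign flips. A choice is made when the
    relative evidence E crosses +1 (sure) or -1 (gamble). Fixations
    alternate every `fix_dur` ms for simplicity.
    """
    E, t = 0.0, 0
    fixating_sure = rng.random() < p_start_sure
    next_switch = fix_dur
    while abs(E) < 1.0:
        if fixating_sure:
            mu = d * (v_sure - theta * v_gamble)
        else:
            mu = -d * (v_gamble - theta * v_sure)
        E += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if t >= next_switch:
            fixating_sure = not fixating_sure
            next_switch += fix_dur
    return ('sure' if E > 0 else 'gamble'), t

choice, rt = addm_trial(1.0, 0.8)
```

Because attended values are discounted less than unattended ones, shifting gaze toward the sure option raises its choice frequency and shortens decisions in its favor, which is the qualitative pattern the abstract reports for the gain frame.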
Glia-neuron metabolic interactions in Drosophila
To function properly, the nervous system consumes vast amounts of energy, which is mostly provided by carbohydrate metabolism. Neurons are very sensitive to changes in the extracellular fluid surrounding them, which necessitated shielding of the nervous system from fluctuating solute concentrations in circulation. This is achieved by the blood-brain barrier (BBB) that prevents paracellular diffusion of solutes into the nervous system. This in turn also means that all nutrients that are needed, e.g. for sufficient energy supply, need to be transported over the BBB. We use Drosophila as a model system to better understand metabolic homeostasis in the central nervous system. Glial cells play essential roles in both nutrient uptake and neural energy metabolism. Carbohydrate transport over the glial BBB is well-regulated and can be adapted to changes in carbohydrate availability. Furthermore, Drosophila glial cells are highly glycolytic cells that support the rather oxidative metabolism of neurons. Upon perturbations of carbohydrate metabolism, the glial cells prove to be metabolically very flexible and able to adapt to changing circumstances. I will summarize what we know about carbohydrate transport at the Drosophila BBB and about the metabolic coupling between neurons and glial cells. Our data show that many basic features of neural metabolism are well conserved between the fly and mammals.
Continuum modelling of active fluids beyond the generalised Taylor dispersion
The Smoluchowski equation has often been used as the starting point of many continuum models of active suspensions. However, its six-dimensional nature, depending on time, space and orientation, incurs a huge computational cost, fundamentally limiting its use for large-scale problems, such as mixing and transport of active fluids in turbulent flows. Despite its singular behaviour in strain-dominant flows, the generalised Taylor dispersion (GTD) theory (Frankel & Brenner 1991, J. Fluid Mech. 230:147-181) has been understood to be one of the most promising ways to reduce the Smoluchowski equation into an advection-diffusion equation, the mean drift and diffusion tensor of which rely on ‘local’ flow information only. In this talk, we will introduce an exact transformation of the Smoluchowski equation into such an advection-diffusion equation requiring only local flow information. Based on this transformation, a new advection-diffusion equation will subsequently be proposed by taking an asymptotic analysis in the limit of small particle velocity. With several examples, it will be demonstrated that the new advection-diffusion model, non-singular in strain-dominant flows, outperforms the GTD theory.
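Schematically, the reduction discussed above takes the following generic form (the notation here is the standard one for dilute active suspensions, not necessarily the speaker's; the precise drift and diffusivity closures are the subject of the talk):

```latex
% Smoluchowski equation for the distribution \psi(\mathbf{x},\mathbf{p},t)
% over positions \mathbf{x} and swimming directions \mathbf{p}
% (translational and rotational diffusion enter through the fluxes
% \dot{\mathbf{x}}\psi and \dot{\mathbf{p}}\psi):
\frac{\partial \psi}{\partial t}
  + \nabla_{\mathbf{x}} \cdot \left( \dot{\mathbf{x}}\, \psi \right)
  + \nabla_{\mathbf{p}} \cdot \left( \dot{\mathbf{p}}\, \psi \right) = 0 .

% Target reduced form: an advection--diffusion equation for the
% orientation-averaged concentration
% n(\mathbf{x},t) = \int_{|\mathbf{p}|=1} \psi \,\mathrm{d}\mathbf{p},
% with mean drift \mathbf{V} and diffusivity tensor \mathbf{D}
% depending on *local* flow information only:
\frac{\partial n}{\partial t}
  + \nabla \cdot \left( \mathbf{V} n \right)
  = \nabla \cdot \left( \mathbf{D} \cdot \nabla n \right) .
```

The gain is dimensional: the six-dimensional equation for psi is replaced by a three-plus-one-dimensional equation for n, at the price of finding closures for V and D that remain well-behaved in strain-dominant flows.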
Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Hodgkin-Huxley Models
Bernstein Conference 2024
Latent Diffusion for Neural Spiking Data
Bernstein Conference 2024
Quantifying the signal and noise of decision processes during dual tasks with an efficient two-dimensional drift-diffusion model
Bernstein Conference 2024
TSG-DDT: Time-Series Generative Denoising Diffusion Transformers
Bernstein Conference 2024
Conditional Diffusion Framework for Analyzing Neural Dynamics Across Multiple Contexts
COSYNE 2025
Latent diffusion for neural spiking data for generating realistic neural time series
COSYNE 2025
Modeling neural switching via drift-diffusion models
COSYNE 2025
Corticocerebellar tracts and their relationship to anticipatory control deficits in children with cerebral palsy: A diffusion neuroimaging study
FENS Forum 2024
An in-depth investigation of motor and non-motor symptoms using diffusion tensor imaging (DTI) measures in Parkinson's disease (PD) patients: A PPMI data analysis
FENS Forum 2024
Investigating the association between the novel GAP-43 concentration with diffusion tensor imaging indices in Alzheimer's dementia continuum
FENS Forum 2024
skiftiTools: An R package for visualizing and manipulating skeletonized brain diffusion tensor imaging data for versatile statistics of choice
FENS Forum 2024
Time and effect of drug diffusion in neuronal networks derived from human induced pluripotent stem cells
FENS Forum 2024