Task Performance
Neural markers of lapses in attention during sustained ‘real-world’ task performance
Lapses in attention are ubiquitous and, unfortunately, the cause of many tragic accidents. One potential solution may be to develop assistance systems which can use objective, physiological signals to monitor attention levels and predict a lapse in attention before it occurs. As it stands, it is unclear which physiological signals are the most reliable markers of inattention, and even less is known about how reliably they will work in a more naturalistic setting. My project aims to address these questions across two experiments: a lab-based experiment and a more ‘real-world’ experiment. In this talk I will present the findings from my lab experiment, in which we combined EEG and pupillometry to detect markers of inattention during two computerised sustained attention tasks. I will then present the methods for my second, more ‘naturalistic’ experiment in which we use the same methods (EEG and pupillometry) to examine whether these markers can still be extracted from noisier data.
Hippocampal network dynamics during impaired working memory in epileptic mice
Memory impairment is a common cognitive deficit in temporal lobe epilepsy (TLE). The hippocampus is severely altered in TLE, exhibiting multiple anatomical changes that lead to a hyperexcitable network capable of generating frequent epileptic discharges and seizures. In this study we investigated whether hippocampal involvement in epileptic activity drives working memory deficits, using bilateral LFP recordings from CA1 during task performance. We discovered that epileptic mice experienced focal rhythmic discharges (FRDs) while they performed the spatial working memory task. Spatial correlation analysis revealed that FRDs were often spatially stable on the maze and were most common around reward zones (25%) and delay zones (50%). Memory performance was correlated with the stability of FRDs, suggesting that spatially unstable FRDs interfere with working memory codes in real time.
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
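The core idea above, taking a GRU and making its outputs event-based so that most units stay silent, can be illustrated with a toy sketch. This is a hypothetical simplification for intuition only, not the authors' EGRU implementation: each unit keeps an internal state and emits a nonzero output only when that state crosses a threshold, which is what produces activity sparsity.

```python
import numpy as np

def egru_step(x, h, c, params, theta=0.5):
    """One step of a toy event-based GRU unit (illustrative sketch only).

    Units keep an internal state c; a unit emits output h only when c
    crosses the threshold theta, otherwise its output is exactly zero,
    making downstream activity sparse.
    """
    Wz, Wr, Wc = params                      # weights for update, reset, candidate
    xh = np.concatenate([x, h])
    z = 1 / (1 + np.exp(-Wz @ xh))           # update gate
    r = 1 / (1 + np.exp(-Wr @ xh))           # reset gate
    c_tilde = np.tanh(Wc @ np.concatenate([x, r * h]))
    c_new = z * c_tilde + (1 - z) * c        # internal state update
    spikes = (c_new > theta).astype(float)   # event: threshold crossing
    h_new = spikes * c_new                   # output only at events
    c_new = c_new - spikes * theta           # soft reset after an event
    return h_new, c_new
```

Because `h_new` is zero for sub-threshold units, both the forward pass and (in the full model) the gradient computation only need to touch units that emitted an event, which is where the training and inference efficiency comes from.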
General purpose event-based architectures for deep learning
NMC4 Short Talk: Different hypotheses on the role of the PFC in solving simple cognitive tasks
Low-dimensional population dynamics can be observed in neural activity recorded from the prefrontal cortex (PFC) of subjects performing simple cognitive tasks. Many studies have shown that recurrent neural networks (RNNs) trained on the same tasks can qualitatively reproduce these state-space trajectories, and have used them as models of how neuronal dynamics implement task computations. The PFC is also viewed as a conductor that organizes communication between cortical areas and provides contextual information, so its role in solving simple cognitive tasks remains unclear. Do the low-dimensional trajectories observed in the PFC really correspond to the computations it performs? Or do they indirectly reflect computations occurring within the cortical areas projecting to the PFC? To address these questions, we modelled cortical areas with a modular RNN and equipped it with a PFC-like cognitive system. When trained on cognitive tasks, this multi-system brain model reproduces the low-dimensional population responses observed in neuronal activity as well as classical RNNs do. Qualitatively different mechanisms can emerge from training when details of the architecture, such as the time constants, are varied. In particular, there is one class of models in which the dynamics of the cognitive system implement the task computations, and another in which the cognitive system only provides contextual information about the task rule: task performance is not impaired when the system is prevented from accessing the task inputs. These constitute two different hypotheses about the causal role of the PFC in solving simple cognitive tasks, which could motivate further experiments on the brain.
Computational Principles of Event Memory
Our ability to understand ongoing events depends critically on general knowledge about how different kinds of situations work (schemas), and also on recollection of specific instances of these situations that we have previously experienced (episodic memory). The consensus around this general view masks deep questions about how these two memory systems interact to support event understanding: How do we build our library of schemas, and how exactly do we use episodic memory in the service of event understanding? Given rich, continuous inputs, when do we store and retrieve episodic memory “snapshots”, and how are they organized so as to ensure that we can retrieve the right snapshots at the right time? I will develop predictions about how these processes work using memory-augmented neural networks (i.e., neural networks that learn how to use episodic memory in the service of task performance), and I will present results from relevant fMRI and behavioral studies.
Timing errors and decision making
Error monitoring refers to the ability to monitor one's own task performance without explicit feedback. This ability is typically studied in two-alternative forced-choice (2AFC) paradigms. Recent research has shown that humans can also keep track of the magnitude and direction of errors in different magnitude domains (e.g., numerosity, duration, length). Based on evidence suggesting a shared mechanism for magnitude representations, we aimed to investigate whether metric error monitoring is commonly governed across different magnitude domains. Participants reproduced or estimated temporal, numerical, and spatial magnitudes, after which they rated their confidence in their first-order task performance and judged the direction of their reproduction/estimation errors. Participants were also tested in a 2AFC perceptual decision task and provided confidence ratings for their decisions. Results showed that variability in reproductions/estimations and metric error monitoring ability, as measured by combining confidence and error-direction judgements, were positively related across the temporal, spatial, and numerical domains. Metacognitive sensitivity in these metric domains was also positively associated across domains, but not with metacognitive sensitivity in the 2AFC perceptual decision task. In conclusion, the current findings point to a general metric error monitoring ability that is shared across metric domains, with limited generalizability to perceptual decision-making.
The attentional requirement of unconscious processing
The tight relationship between attention and conscious perception has been extensively researched over the past decades. However, whether attentional modulation extends to unconscious processes has remained largely unknown, particularly for abstract, high-level processing. I will talk about a recent study in which we used the Stroop paradigm to show that task load gates unconscious semantic processing. In a series of psychophysical experiments, unconscious word semantics influenced conscious task performance only under low task load, not under high task load. Intriguingly, with enough practice in the high-load condition, the unconscious effect re-emerged. These findings suggest a competition for attentional resources between unconscious and conscious processes, challenging the automaticity account of unconscious processing.
Tuning dumb neurons to task processing - via homeostasis
Homeostatic plasticity plays a key role in stabilizing neural network activity. But what is its role in neural information processing? We showed analytically how homeostasis changes collective dynamics, and consequently information flow, depending on the input to the network. We then studied how input and homeostasis in a recurrent network of LIF neurons impact information flow and task performance. We showed how the working point of the network can be tuned, and found that, contrary to previous assumptions, there is not one optimal working point for a family of tasks; rather, each task may require its own working point.
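A minimal sketch of the working-point idea, under assumptions of my own (threshold-based homeostasis with a leaky rate estimate, not the model analysed in the talk): each unit's threshold is nudged until its firing rate matches a target, so units with very different drives end up at the same rate but different thresholds.

```python
import numpy as np

def homeostatic_tune(drive, target_rate, steps=2000, eta=0.01, seed=0):
    """Toy homeostatic threshold adaptation (illustrative, not the talk's model).

    Each unit fires when its noisy input drive exceeds its threshold;
    the threshold is nudged so the empirical firing rate approaches
    target_rate, moving the network to a chosen working point.
    """
    rng = np.random.default_rng(seed)
    n = len(drive)
    theta = np.zeros(n)                      # adaptive thresholds
    rate = np.zeros(n)                       # running rate estimate
    for _ in range(steps):
        noisy = drive + rng.normal(scale=0.5, size=n)
        spikes = (noisy > theta).astype(float)
        rate = 0.99 * rate + 0.01 * spikes   # leaky rate estimate
        theta += eta * (rate - target_rate)  # firing too much -> raise threshold
    return theta, rate
```

The target rate here plays the role of the working point: changing it moves every unit to a different operating regime, which is the kind of knob the abstract argues must be set per task rather than once for all tasks.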
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Using a recurrent neural network (RNN) model together with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that, like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
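The expectation effect described above, where the prior dominates when sensory evidence is weak, follows directly from Bayesian cue combination. The sketch below is a generic illustration of that principle, not the study's RNN: a stimulus prior is combined with evidence whose weight scales with sensory reliability.

```python
import numpy as np

def posterior(prior, evidence, reliability):
    """Combine a stimulus prior with scaled sensory evidence (Bayesian sketch).

    evidence holds log-likelihood scores per stimulus; reliability
    scales them, mimicking strong versus weak sensory input.
    """
    logp = np.log(prior) + reliability * evidence
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    return p / p.sum()
```

With a prior favoring stimulus 0 and evidence favoring stimulus 1, high reliability lets the evidence win, while low reliability lets the prior dominate, mirroring the behavioral pattern reported for both humans and the trained RNNs.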
Neural correlates of cognitive control across the adult lifespan
Cognitive control involves the flexible allocation of mental resources during goal-directed behaviour and comprises three correlated but distinct domains—inhibition, task shifting, and working memory. Healthy ageing is characterised by reduced cognitive control. Professor Cheryl Grady and her team have been studying the influence of age differences in large-scale brain networks on the three control processes in a sample of adults from 20 to 86 years of age. In this webinar, Professor Cheryl Grady will describe three aspects of this work: 1) age-related dedifferentiation and reconfiguration of brain networks across the sub-domains; 2) individual differences in the relation of task-related activity to age, structural integrity, and task performance for each sub-domain; and 3) modulation of brain signal variability as a function of cognitive load and age during working memory. This research highlights the reduction in dynamic range of network activity that occurs with ageing and how this contributes to age differences in cognitive control. Cheryl Grady is a senior scientist at the Rotman Research Institute at Baycrest, and Professor in the departments of Psychiatry and Psychology at the University of Toronto. She held the Canada Research Chair in Neurocognitive Aging from 2005 to 2018 and was elected as a Fellow of the Royal Society of Canada in 2019. Her research uses MRI to determine the role of brain network connectivity in cognitive ageing.
Neural heterogeneity promotes robust learning
The brain has a hugely diverse, heterogeneous structure. By contrast, many functional neural models are homogeneous. We compared the performance of spiking neural networks trained to carry out difficult tasks, with varying degrees of heterogeneity. Introducing heterogeneity in membrane and synapse time constants substantially improved task performance, and made learning more stable and robust across multiple training methods, particularly for tasks with a rich temporal structure. In addition, the distribution of time constants in the trained networks closely matches that observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes, and may instead serve an active and important role in allowing animals to learn in changing environments.
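One intuition for why heterogeneous time constants help on temporally rich tasks: each neuron becomes a different temporal filter of a shared input, so the population jointly spans many timescales. The toy sketch below (my own illustration, not the paper's trained networks) integrates the same input with per-neuron membrane time constants.

```python
import numpy as np

def lif_filter_responses(taus, inp, dt=1.0):
    """Leaky integration of one shared input with per-neuron time constants.

    With heterogeneous taus, each neuron filters the input on a
    different timescale, enriching the population's temporal code.
    Returns the membrane trace, shape (T, n_neurons).
    """
    v = np.zeros(len(taus))
    trace = []
    for x in inp:
        v = v + dt / taus * (-v + x)   # leaky membrane update per neuron
        trace.append(v.copy())
    return np.array(trace)
```

Feeding a single input pulse through neurons with a fast and a slow time constant shows the effect: the fast neuron forgets the pulse almost immediately, while the slow neuron retains a trace of it for many steps, giving downstream readouts access to both recent and older input history.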
Multitask performance in humans and deep neural networks
Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning), which trade off learning speed against robustness and depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
Contributions and synaptic basis of diverse cortical neuron responses to flexible task performance
COSYNE 2025
The structure of individuality in micro-behavioral features of task performance
COSYNE 2025
Exploring the interplay of glucocorticoids, daily timing, sleep, and psychology-based task performance
FENS Forum 2024
Human local field potential brain recordings during a multilingual battery of cognitive and eye-tracking task performance
FENS Forum 2024