Hierarchical Model
Movement planning as a window into hierarchical motor control
The ability to organise one's body for action without having to think about it is taken for granted, whether handwriting, typing on a smartphone or computer keyboard, tying a shoelace, or playing the piano. When this ability is compromised, e.g. by stroke or by neurodegenerative and developmental disorders, individuals' study, work, and day-to-day living are impaired, at high societal cost. Until recently, covert motor sequence planning in humans could be studied only indirectly, through invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of the serial elements of the upcoming sequence, rather than preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganises the order and timing of well-trained movement sequences retrieved from memory into lower- and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. Together, these findings support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, with potential implications for clinical interventions.
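The multivariate pattern analysis mentioned above can be illustrated with a minimal decoding sketch. This is not the study's pipeline: the sensor data, class structure, and signal strength below are all simulated assumptions, showing only the general idea of cross-validated classification of which finger occupies a given serial position of a planned sequence.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical sensor data: trials x channels, with one label per trial
# giving which of 5 fingers occupies a serial position of the planned sequence.
n_trials, n_channels = 200, 64
labels = rng.integers(0, 5, n_trials)
data = rng.normal(size=(n_trials, n_channels))
data[np.arange(n_trials), labels] += 1.5  # inject a class-dependent signal

# Cross-validated decoding: above-chance accuracy implies the planned
# element is represented in the multivariate pattern before execution.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, data, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.20)")
```

In the real analyses such classifiers are typically trained per time point, so the trajectory of decodable sequence elements can be tracked across the planning period.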
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model's activity with brain activity should successfully drive the model's object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Treating task-relevant correspondence with a late DCNN layer as a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as 'early' beyond 70 ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and a brain region to correspond: beyond quantifying shared variance, we must consider their functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from the training data.
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
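The Granger causal logic used above can be sketched with a toy one-lag test. The time series, lag structure, and coefficients below are invented for illustration; the question is only whether the past of one signal (IT correspondence) improves prediction of another (V4 correspondence) beyond that signal's own past.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical correspondence time series: 'it' leads and 'v4' follows
# with a one-step lag, mimicking feedback of high-level information.
n = 500
it = rng.normal(size=n)
v4 = np.zeros(n)
for t in range(1, n):
    v4[t] = 0.6 * it[t - 1] + 0.4 * rng.normal()

def granger_f_test(cause, effect):
    """One-lag Granger test: does past `cause` help predict `effect`?"""
    y = effect[1:]
    X_r = np.column_stack([np.ones(len(y)), effect[:-1]])  # restricted model
    X_f = np.column_stack([X_r, cause[:-1]])               # + lagged cause
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_r), rss(X_f)
    df = len(y) - X_f.shape[1]
    f_stat = (rss_r - rss_f) / (rss_f / df)
    return f_stat, stats.f.sf(f_stat, 1, df)

f_stat, p = granger_f_test(it, v4)
print(f"IT -> V4: F = {f_stat:.1f}, p = {p:.2e}")  # should be significant
```

A full analysis would also test the reverse direction and use model-selection criteria to choose the lag order, but the nested-model F-test above is the core of the method.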
Learning from unexpected events in the neocortical microcircuit
Predictive learning hypotheses posit that the neocortex learns a hierarchical model of the structure of features in the environment. Under these hypotheses, expected or predictable features are differentiated from unexpected ones by comparing bottom-up and top-down streams of data, with unexpected features then driving changes in the representation of incoming stimuli. This is supported by numerous studies in early sensory cortices showing that pyramidal neurons respond particularly strongly to unexpected stimulus events. However, it remains unknown how their responses govern subsequent changes in stimulus representations, and thus govern learning. Here, I present results from our study of layer 2/3 and layer 5 pyramidal neurons imaged in the primary visual cortex of awake, behaving mice using two-photon calcium imaging at both the somatic and distal apical planes. Our data reveal that individual neurons and distal apical dendrites show distinct but predictable changes in unexpected-event responses when tracked over several days. Considering existing evidence that bottom-up information primarily targets somata, with distal apical dendrites receiving the bulk of top-down inputs, our findings corroborate the hypothesised complementary roles of these two neuronal compartments in hierarchical computation. Altogether, our work provides novel evidence that the neocortex indeed instantiates a predictive hierarchical model in which unexpected events drive learning.
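The core predictive-learning loop described above can be sketched as a toy update rule. This is an illustrative caricature, not the study's model: a top-down prediction is compared against bottom-up input, and the unpredicted residual (the "unexpected" component) drives changes in the representation until predictions match the input.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a higher-level cause predicts a sensory input via
# learned weights; the prediction error drives representational change.
n_features, lr = 8, 0.1
weights = rng.normal(scale=0.1, size=n_features)  # generative weights
latent = 1.0                                      # higher-level cause
stimulus = np.full(n_features, 0.5)               # bottom-up sensory input

errors = []
for _ in range(100):
    prediction = latent * weights      # top-down stream
    error = stimulus - prediction      # unexpected (unpredicted) component
    weights += lr * latent * error     # error-driven learning
    errors.append(np.mean(error ** 2))

print(f"prediction error: {errors[0]:.3f} -> {errors[-1]:.6f}")
```

As learning proceeds the unexpected component shrinks, which parallels the empirical observation that responses to unexpected events change systematically over days of exposure.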
Multiscale Hierarchical Modeling Framework For Fully Mapping a Social Interaction
COSYNE 2022
Probing differences in decision process settings across contexts and individuals through joint RT-EEG hierarchical modelling
FENS Forum 2024