ePoster
Time-warped state space models for distinguishing movement type and vigor
Julia Costacurta and 7 co-authors
COSYNE 2022
Mar 17, 2022
Lisbon, Portugal
Abstract
Quantitative methods that cluster videos of animal behavior into repeated syllables (sometimes called “motifs”) have become fundamental tools for systems neuroscientists and neuroethologists. A popular approach is to use autoregressive hidden Markov models (ARHMMs) to identify behavioral syllables in the absence of labeled training data. This model, called MoSeq, has been shown to successfully segment depth-camera videos of mouse behavior into syllables that are familiar to a human observer. However, one issue with MoSeq is that it sorts behaviors that appear very similar to the human eye (e.g., rears occurring at different speeds) into distinct syllables. These duplicated clusters complicate downstream analysis and encourage ad hoc post-processing steps, such as manually merging visually similar clusters.
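For context, the generative model underlying an ARHMM pairs a discrete Markov chain over syllables z_t with syllable-specific linear dynamics on the pose features x_t. A generic sketch of this model class (standard notation, not necessarily the poster's exact parameterization):

\[
z_t \mid z_{t-1} \sim \mathrm{Cat}(\pi_{z_{t-1}}), \qquad
x_t \mid x_{t-1}, z_t \sim \mathcal{N}\big(A_{z_t} x_{t-1} + b_{z_t},\; Q_{z_t}\big),
\]

where each syllable k carries its own dynamics matrix A_k, offset b_k, and noise covariance Q_k. Because the dynamics are tied to the syllable label, the same movement executed at two different speeds requires two different sets of (A_k, b_k), which is the duplication problem described above.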
Here, we extend the MoSeq model by incorporating a time-varying “vigor” parameter that is decoupled from syllable identity. That is, each frame of the video is assigned not only a behavioral syllable but also a time constant representing the relative vigor with which the syllable is performed. This time constant allows similar actions performed with different vigor to be grouped under the same syllable. We then show that our “time-warped” MoSeq achieves performance similar to standard MoSeq on mouse depth-camera data while using fewer behavioral syllables. Finally, we compare time-warped MoSeq results from mice treated with saline and with amphetamine to demonstrate the utility of the time-constant parameter.
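One minimal way to realize such a vigor parameter (a sketch under our own assumptions, not necessarily the poster's exact formulation) is to view the syllable dynamics as a discretized flow and let a per-frame time constant \tau_t rescale the step size:

\[
x_t \mid x_{t-1}, z_t, \tau_t \sim \mathcal{N}\Big(x_{t-1} + \tfrac{1}{\tau_t}\big(A_{z_t} x_{t-1} + b_{z_t} - x_{t-1}\big),\; Q_{z_t}\Big),
\]

so that \tau_t = 1 recovers the standard ARHMM update, large \tau_t traverses the same trajectory slowly (low vigor), and small \tau_t traverses it quickly (high vigor). Under this kind of parameterization, a single (A_k, b_k) pair can account for a movement performed at a range of speeds.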