Unsupervised identification of rat behavioral motifs across timescales

11 Jul 2017  ·  Haozhe Shan, Peggy Mason ·

The behavior of many laboratory animals can be modeled as a sequence of stereotyped behaviors, or behavioral motifs. Identifying such motifs, however, is a challenging problem. Behaviors have a multi-scale structure: an animal can simultaneously perform a small-scale motif and a large-scale one (e.g., chewing while feeding). Motifs are compositional: a large-scale motif is a chain of smaller-scale ones, folded in (some behavioral) space in a specific manner. We demonstrate an approach that captures both structures, using rat locomotor data as an example. From the same dataset, we used a preprocessing procedure to create different versions, each describing motifs at a different scale. We then trained several hidden Markov models (HMMs) in parallel, one for each dataset version. This approach essentially forced each HMM to learn motifs on a different scale, allowing us to capture behavioral structure lost in previous approaches. By comparing the HMMs with models representing different null hypotheses, we found that rat locomotion was composed of distinct motifs from the second scale to the minute scale. We found that transitions between motifs were modulated by the rat's location in the environment, making the transitions non-Markovian. To test the ethological relevance of the motifs we discovered, we compared their usage between rats differing in a high-level trait, prosociality. These rats had distinct motif repertoires, suggesting that motif-usage statistics can be used to infer the internal states of rats. Our method is therefore an efficient way to discover multi-scale, compositional structure in animal behavior. It may also be applied as a sensitive assay for internal states.
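The paper does not spell out its preprocessing here, but the core idea of creating one dataset version per timescale can be sketched as follows. This is a minimal illustration, assuming 2-D position data sampled at a fixed rate; the function name `multiscale_versions`, the moving-average smoothing, and the specific window lengths are all hypothetical choices, not the authors' actual pipeline.

```python
import numpy as np

def multiscale_versions(positions, fs, timescales):
    """Create one smoothed, downsampled copy of a trajectory per timescale.

    positions : (T, D) array of coordinates sampled at fs Hz.
    timescales: window lengths in seconds (hypothetical choices).
    Returns a dict mapping each timescale to a shorter (T', D) array.
    """
    versions = {}
    for tau in timescales:
        w = max(1, int(round(tau * fs)))          # window length in samples
        kernel = np.ones(w) / w
        # moving-average smoothing per coordinate, then downsample by w,
        # so each version emphasizes structure at roughly that timescale
        smooth = np.column_stack([
            np.convolve(positions[:, d], kernel, mode="valid")
            for d in range(positions.shape[1])
        ])
        versions[tau] = smooth[::w]
    return versions

# toy example: 60 s of 2-D positions at 30 Hz (random walk stands in for a rat)
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(60 * 30, 2)), axis=0)
views = multiscale_versions(traj, fs=30, timescales=[1.0, 10.0])
```

Each version would then be fit by its own HMM (e.g., a Gaussian-emission HMM), so that the model trained on the coarsest version can only learn minute-scale motifs while the model trained on the raw-rate version learns second-scale ones.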
