Motion Sequencing (MoSeq): a method to discover the syllables and grammar that comprise mouse behavior.
Motion Sequencing (MoSeq) is an unsupervised machine learning method used to parse mouse behavior into a set of re-usable sub-second motifs called syllables (Wiltschko et al., 2015). Because it is unsupervised, MoSeq discovers the set of syllables and the grammar expressed in any given experiment. By combining MoSeq with electrophysiology, multi-color photometry, and miniscope methods, researchers have recently identified neural correlates of syllables in the dorsolateral striatum (Markowitz et al., 2018).
Below you will find instructions for accessing the GitHub repositories, which contain instructions for assembling acquisition hardware, the MoSeq code, and a wiki describing how to use MoSeq. MoSeq code and training materials are freely available to not-for-profit academic researchers. Interested users should email MoSeq@hms.harvard.edu from their institutional email address to gain access to the GitHub repositories and the MoSeq DockerHub (DockerHub access is granted upon request). You can find setup instructions and tutorials on the MoSeq2 wiki once you have access to our GitHub repositories!
If you are downloading the code on behalf of a core facility, each end user needs to sign an EULA.
Please tell us what you think by filling out this user survey
MoSeq uses 3D depth video recordings to learn about the structure of mouse behavior. A Microsoft Kinect depth sensor acquires depth video at 30 Hz as a mouse explores a featureless arena. The MoSeq extraction step identifies the recording arena, finds the mouse, and aligns the body to face one direction.
The gif below shows an example preview of the extraction step.
The mouse shown in the top left is the result of the extraction, while the bottom right section shows the un-extracted mouse moving around the arena floor after background subtraction. Colors represent height above the floor.
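To give a flavor of what extraction involves, here is a minimal sketch of the background-subtract, threshold, and crop steps described above. This is an illustration only, not the MoSeq extraction code: the function name, the 80×80 crop size, and the height threshold are all hypothetical, and the real pipeline additionally rotates the crop so the mouse faces one direction.

```python
import numpy as np

def extract_mouse(frame, background, height_thresh=10):
    """Toy sketch of one extraction frame: subtract the static background,
    threshold to find the mouse, and crop a window around its centroid.
    `frame` and `background` are hypothetical depth images (distance from
    the sensor), so height above the floor is background minus frame."""
    height = background.astype(float) - frame.astype(float)
    mask = height > height_thresh            # pixels tall enough to be the mouse
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # centroid of the mouse blob
    # Crop an 80x80 window centered on the mouse (clipped at arena edges);
    # the real pipeline would also rotate this crop to a canonical heading.
    y0, x0 = max(cy - 40, 0), max(cx - 40, 0)
    crop = height[y0:y0 + 80, x0:x0 + 80] * mask[y0:y0 + 80, x0:x0 + 80]
    return crop, (cy, cx)
```

Running this on a synthetic frame with a 30 mm-tall "mouse" on a flat floor returns the height-colored crop, much like the extracted mouse shown in the preview.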
After extraction, a probabilistic time-series model (an autoregressive hidden Markov model, or AR-HMM) parses behavior into a set of re-usable sub-second motifs called syllables.
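The generative assumption behind an AR-HMM can be stated compactly: a hidden syllable label follows a Markov chain, and while a syllable is active the pose evolves under that syllable's own autoregressive dynamics. The simulation below is a toy illustration of that model, not the MoSeq implementation; the number of syllables, dimensions, and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AR-HMM generative process: syllable z_t follows a Markov chain,
# and the pose y_t follows syllable-specific dynamics
#   y_t = A[z_t] @ y_{t-1} + noise.
n_syllables, dim, T = 3, 2, 200
P = np.array([[0.95, 0.03, 0.02],   # "sticky" self-transitions keep each
              [0.02, 0.95, 0.03],   # syllable active for a sub-second run
              [0.03, 0.02, 0.95]])  # of frames before switching
A = rng.normal(0, 0.3, (n_syllables, dim, dim))  # per-syllable dynamics

z = np.zeros(T, dtype=int)   # hidden syllable sequence
y = np.zeros((T, dim))       # observed pose trajectory
for t in range(1, T):
    z[t] = rng.choice(n_syllables, p=P[z[t - 1]])
    y[t] = A[z[t]] @ y[t - 1] + rng.normal(0, 0.1, dim)
```

Fitting an AR-HMM inverts this process: given only the pose trajectory `y`, inference recovers the syllable labels `z`, the dynamics `A`, and the transition matrix `P`.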
This segmentation naturally yields boundaries between syllables, and therefore also reveals the structure that governs the interconnections between syllables over time, which we refer to as behavioral grammar.
The schematic below shows a sequence of syllables identified by an AR-HMM, with three examples expanded.
The Jupyter notebooks that come with the MoSeq2 package are packed with widgets and tools to help with extraction and analysis. The tool shown here makes labeling the syllables a model discovers a piece of cake.
Because the AR-HMM densely labels mouse behavior, every syllable instance is sandwiched between other syllables. Using this information, we can ask how often syllables transition into one another.
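Counting those transitions is straightforward once you have a per-frame syllable sequence. The sketch below (a hypothetical helper, not part of the MoSeq API) collapses repeated frames of the same syllable into single instances, counts which syllable follows which, and row-normalizes to get outgoing transition probabilities.

```python
import numpy as np

def transition_matrix(labels, n_syllables):
    """Estimate syllable-to-syllable transition probabilities from a
    per-frame syllable label sequence."""
    # Collapse runs of identical labels so each syllable instance counts once.
    seq = [labels[0]] + [b for a, b in zip(labels, labels[1:]) if a != b]
    counts = np.zeros((n_syllables, n_syllables))
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
    # Row-normalize: row i gives P(next syllable | current syllable i).
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

For example, the frame sequence `[0, 0, 1, 1, 2, 0, 0, 1]` collapses to the instance sequence `0 → 1 → 2 → 0 → 1`, so syllable 0 always transitions to syllable 1.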
The screenshot below shows a tool we use to represent and explore connections between syllables in graphical form.
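The idea behind such a graph view can be sketched in a few lines: treat syllables as nodes, transition probabilities as weighted edges, and hide weak edges below a threshold so the dominant grammar stands out. The helper and syllable names below are purely illustrative, not the tool's actual interface.

```python
def edges_from_matrix(P, names, min_prob=0.1):
    """Turn a row-normalized transition matrix into a thresholded edge list
    of (source syllable, target syllable, probability) tuples."""
    return [(names[i], names[j], round(p, 2))
            for i, row in enumerate(P)
            for j, p in enumerate(row)
            if p >= min_prob and i != j]   # drop weak edges and self-loops

# Hypothetical 3-syllable example with made-up names and probabilities.
P = [[0.0, 0.8, 0.2],
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]
edges = edges_from_matrix(P, ["walk", "rear", "groom"])
```

Feeding an edge list like this into any graph-drawing library yields the kind of node-and-arrow picture shown in the screenshot.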
We are hosting a tutorial workshop on Wednesday, September 28th, 2022 at 2:00-4:30PM EST.
Please fill out the registration form before Saturday, September 24th, 2022. Thank you!
Four more tutorial workshops in 2023 are coming soon!
We are hosting a tutorial workshop on Wednesday, November 2nd, 2022 at 1:30-4:00PM EST.
Registration opens soon!
We are hosting a tutorial workshop on Tuesday, April 5th, 2022 at 11:30-2PM EST.
Registration closed. Thank you!
We are hosting a tutorial workshop on Thursday, March 3rd, 2022 at 1:30-4PM EST.
Registration closed. Thank you!
Join our Slack channel! For general inquiries, or to reach the developers, email MoSeq@hms.harvard.edu.