Anipose


Anipose is an open-source toolkit for robust, markerless 3D tracking of animal behavior from multiple camera views. It leverages the machine learning toolbox DeepLabCut to track keypoints in 2D, then triangulates across camera views to estimate 3D pose.
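To make this concrete, here is a minimal sketch of the 2D-to-3D step using aniposelib (the standalone library described below). It assumes a saved calibration and a NumPy array of 2D detections; the filenames are placeholders, and how you export points from DeepLabCut will vary by setup.

    import numpy as np
    from aniposelib.cameras import CameraGroup

    # Load a camera calibration saved earlier (see the calibration sketch below).
    cgroup = CameraGroup.load('calibration.toml')

    # 2D keypoint detections, one set per camera (e.g. exported from DeepLabCut).
    # Shape: (n_cams, n_points, 2), in pixel coordinates; missing detections as NaN.
    points_2d = np.load('points_2d.npy')  # placeholder file

    # Triangulate each keypoint across camera views to get 3D coordinates.
    points_3d = cgroup.triangulate(points_2d, progress=True)

    # Mean reprojection error gives a quick sanity check on calibration quality.
    errors = cgroup.reprojection_error(points_3d, points_2d, mean=True)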

Anipose consists of four modular components:

  1. a 3D calibration module designed to minimize the influence of outliers

  2. a set of filters to resolve errors in 2D detections

  3. a triangulation module that integrates temporal and spatial constraints to obtain accurate 3D trajectories despite 2D tracking errors

  4. a pipeline for efficiently processing large numbers of videos

These modules are packaged together within Anipose, but the calibration and triangulation functions are also available as an independent library (aniposelib) for use without the full pipeline and in applications beyond pose estimation.
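As a rough sketch of aniposelib usage (adapted from its tutorial), calibrating three cameras with a ChArUco board looks like the following; the camera names, video filenames, and board dimensions are placeholders for your own setup.

    from aniposelib.boards import CharucoBoard
    from aniposelib.cameras import CameraGroup

    # One list of calibration videos per camera (placeholder filenames).
    vidnames = [['calib-camA.mp4'],
                ['calib-camB.mp4'],
                ['calib-camC.mp4']]
    cam_names = ['A', 'B', 'C']

    # Describe the ChArUco board shown in the videos: 7x10 squares,
    # 25 mm squares, 18.75 mm markers, 4-bit markers from a 50-marker dictionary.
    board = CharucoBoard(7, 10,
                         square_length=25,
                         marker_length=18.75,
                         marker_bits=4,
                         dict_size=50)

    # Detect the board in each video and jointly optimize all camera parameters;
    # this is the outlier-robust calibration described above.
    cgroup = CameraGroup.from_names(cam_names, fisheye=False)
    cgroup.calibrate_videos(vidnames, board)

    # Save the calibration to reuse for triangulation.
    cgroup.dump('calibration.toml')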

The name Anipose comes from Animal Pose, but it also sounds like “any pose”.

Check out the Anipose paper for more information.

Demos

[Animation: markerless 3D tracking of a fly from three camera views]

Videos of flies by Evyn Dickinson (slowed 5x), Tuthill Lab

[Animation: 3D tracking of a human hand]

Videos of hand by Katie Rupp


Contributors

Code and documentation:

  • Lili Karashchuk

  • Katie Rupp

Testing datasets:

  • Evyn S. Dickinson and Sarah Walling-Bell (fly)

  • Elischa Sanders and Eiman Azim (mouse)

Mentorship:

Pull requests:

  • Julian Pitney (manually verify calibration board detections)

References

Here are references for DeepLabCut and the fiducial marker detection methods this project relies upon:

  • Mathis et al., 2018, “DeepLabCut: markerless pose estimation of user-defined body parts with deep learning”

  • Romero-Ramirez et al., 2018, “Speeded up detection of squared fiducial markers”

  • Garrido-Jurado et al., 2016, “Generation of fiducial marker dictionaries using Mixed Integer Linear Programming”
