It is recommended to symlink the dataset root to $MMPOSE/data. If your folder structure is different, you may need to change the corresponding paths in the config files.
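For example, the symlinks can be created as in the minimal sketch below; the source paths (/path/to/datasets/...) are placeholders for wherever your copies of the datasets actually live:

```shell
# Hypothetical dataset locations; replace with your own paths.
mkdir -p $MMPOSE/data
ln -s /path/to/datasets/coco $MMPOSE/data/coco
ln -s /path/to/datasets/mpii $MMPOSE/data/mpii
```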
MMPose supports the following datasets:
- COCO
- COCO-WholeBody
- MPII
- MPII-TRB
- AI Challenger
- OCHuman
- CrowdPose
- sub-JHMDB
- OneHand10K
- FreiHand
- CMU Panoptic HandDB
- InterHand2.6M
- Human3.6M
- MPI-INF-3DHP
- LSP
- LSPET
For COCO data, please download from COCO download. 2017 Train/Val is needed for COCO keypoint training and validation, and 2014 Train is needed for human mesh estimation training. HRNet-Human-Pose-Estimation provides the person detection result on COCO val2017 to reproduce our multi-person pose estimation results; please download it from OneDrive. Download and extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── coco
        ├── annotations
        │   ├── person_keypoints_train2017.json
        │   └── person_keypoints_val2017.json
        ├── person_detection_results
        │   └── COCO_val2017_detections_AP_H_56_person.json
        ├── train2014
        │   ├── COCO_train2014_000000000009.jpg
        │   ├── COCO_train2014_000000000025.jpg
        │   ├── COCO_train2014_000000000030.jpg
        │   └── ...
        ├── train2017
        │   ├── 000000000009.jpg
        │   ├── 000000000025.jpg
        │   ├── 000000000030.jpg
        │   └── ...
        └── val2017
            ├── 000000000139.jpg
            ├── 000000000285.jpg
            ├── 000000000632.jpg
            └── ...
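As a rough sketch of the COCO preparation step (assuming the standard download URLs from the official COCO website; the person detection results still need to be fetched separately from the OneDrive link above):

```shell
# Download and extract COCO 2017 images and annotations into the expected layout.
mkdir -p $MMPOSE/data/coco && cd $MMPOSE/data/coco
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip train2017.zip && unzip val2017.zip && unzip annotations_trainval2017.zip
mkdir -p person_detection_results  # place COCO_val2017_detections_AP_H_56_person.json here
```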
For the COCO-WholeBody dataset, images can be downloaded from COCO download; 2017 Train/Val is needed for COCO keypoint training and validation. Download the COCO-WholeBody annotations for Train / Validation (Google Drive), and the person detection result of COCO val2017 from OneDrive. Download and extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── coco
        ├── annotations
        │   ├── coco_wholebody_train_v1.0.json
        │   └── coco_wholebody_val_v1.0.json
        ├── person_detection_results
        │   └── COCO_val2017_detections_AP_H_56_person.json
        ├── train2017
        │   ├── 000000000009.jpg
        │   ├── 000000000025.jpg
        │   ├── 000000000030.jpg
        │   └── ...
        └── val2017
            ├── 000000000139.jpg
            ├── 000000000285.jpg
            ├── 000000000632.jpg
            └── ...
Please also install the latest version of Extended COCO API (version>=1.5) to support COCO-WholeBody evaluation:
pip install xtcocotools
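A quick sanity check of the installed version (the exact output format may vary between pip versions):

```shell
pip show xtcocotools  # the reported Version should be 1.5 or higher
```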
For MPII data, please download from the MPII Human Pose Dataset. We have converted the original annotation files into json format; please download them from mpii_annotations. Extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── mpii
        ├── annotations
        │   ├── mpii_gt_val.mat
        │   ├── mpii_test.json
        │   ├── mpii_train.json
        │   ├── mpii_trainval.json
        │   └── mpii_val.json
        └── images
            ├── 000001163.jpg
            └── 000003072.jpg
During training and inference, the prediction results will be saved in '.mat' format by default. We also provide a tool to convert this '.mat' file to the more readable '.json' format.
python tools/mat2json ${PRED_MAT_FILE} ${GT_JSON_FILE} ${OUTPUT_PRED_JSON_FILE}
For example,
python tools/mat2json work_dirs/res50_mpii_256x256/pred.mat data/mpii/annotations/mpii_val.json pred.json
For MPII-TRB data, please download the images from the MPII Human Pose Dataset and the annotation files from mpii_trb_annotations. Extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── mpii
        ├── annotations
        │   ├── mpii_trb_train.json
        │   └── mpii_trb_val.json
        └── images
            ├── 000001163.jpg
            └── 000003072.jpg
For AIC data, please download from AI Challenger 2017; 2017 Train/Val is needed for keypoint training and validation. Please download the annotation files from aic_annotations. Download and extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── aic
        ├── annotations
        │   ├── aic_train.json
        │   └── aic_val.json
        ├── ai_challenger_keypoint_train_20170902
        │   └── keypoint_train_images_20170902
        │       ├── 0000252aea98840a550dac9a78c476ecb9f47ffa.jpg
        │       ├── 000050f770985ac9653198495ef9b5c82435d49c.jpg
        │       └── ...
        └── ai_challenger_keypoint_validation_20170911
            └── keypoint_validation_images_20170911
                ├── 0002605c53fb92109a3f2de4fc3ce06425c3b61f.jpg
                ├── 0003b55a2c991223e6d8b4b820045bd49507bf6d.jpg
                └── ...
For CrowdPose data, please download from CrowdPose. Please download the annotation files from crowdpose_annotations. For top-down approaches, we follow CrowdPose and use the pre-trained weights of YOLOv3 to generate the detected human bounding boxes. For model training, we follow HigherHRNet and train models on the CrowdPose train/val dataset, then evaluate them on the CrowdPose test dataset. Download and extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── crowdpose
        ├── annotations
        │   ├── mmpose_crowdpose_train.json
        │   ├── mmpose_crowdpose_val.json
        │   ├── mmpose_crowdpose_trainval.json
        │   ├── mmpose_crowdpose_test.json
        │   └── det_for_crowd_test_0.1_0.5.json
        └── images
            ├── 100000.jpg
            ├── 100001.jpg
            ├── 100002.jpg
            └── ...
For PoseTrack18 data, please download from PoseTrack18. Please download the annotation files from posetrack18_annotations. We have merged the video-wise separated official annotation files into two json files (posetrack18_train.json and posetrack18_val.json). We also generate the mask files to speed up training. For top-down approaches, we use the MMDetection pre-trained Cascade R-CNN (X-101-64x4d-FPN) to generate the detected human bounding boxes. Please download and extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── posetrack18
        ├── annotations
        │   ├── posetrack18_train.json
        │   ├── posetrack18_val.json
        │   ├── posetrack18_val_human_detections.json
        │   ├── train
        │   │   ├── 000001_bonn_train.json
        │   │   ├── 000002_bonn_train.json
        │   │   └── ...
        │   ├── val
        │   │   ├── 000342_mpii_test.json
        │   │   ├── 000522_mpii_test.json
        │   │   └── ...
        │   └── test
        │       ├── 000001_mpiinew_test.json
        │       ├── 000002_mpiinew_test.json
        │       └── ...
        ├── images
        │   ├── train
        │   │   ├── 000001_bonn_train
        │   │   │   ├── 000000.jpg
        │   │   │   ├── 000001.jpg
        │   │   │   └── ...
        │   │   └── ...
        │   ├── val
        │   │   ├── 000342_mpii_test
        │   │   │   ├── 000000.jpg
        │   │   │   ├── 000001.jpg
        │   │   │   └── ...
        │   │   └── ...
        │   └── test
        │       ├── 000001_mpiinew_test
        │       │   ├── 000000.jpg
        │       │   ├── 000001.jpg
        │       │   └── ...
        │       └── ...
        └── mask
            ├── train
            │   ├── 000002_bonn_train
            │   │   ├── 000000.jpg
            │   │   ├── 000001.jpg
            │   │   └── ...
            │   └── ...
            └── val
                ├── 000522_mpii_test
                │   ├── 000000.jpg
                │   ├── 000001.jpg
                │   └── ...
                └── ...
The official evaluation tool for PoseTrack should be installed from GitHub.
pip install git+https://github.com/svenkreiss/poseval.git
For OCHuman data, please download the images and annotations from OCHuman. Move them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── ochuman
        ├── annotations
        │   ├── ochuman_coco_format_val_range_0.00_1.00.json
        │   └── ochuman_coco_format_test_range_0.00_1.00.json
        └── images
            ├── 000001.jpg
            ├── 000002.jpg
            ├── 000003.jpg
            └── ...
For sub-JHMDB data, please download the images from JHMDB and the annotation files from jhmdb_annotations. Move them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── jhmdb
        ├── annotations
        │   ├── Sub1_train.json
        │   ├── Sub1_test.json
        │   ├── Sub2_train.json
        │   ├── Sub2_test.json
        │   ├── Sub3_train.json
        │   └── Sub3_test.json
        └── Rename_Images
            ├── brush_hair
            │   └── April_09_brush_hair_u_nm_np1_ba_goo_0
            │       ├── 00001.png
            │       └── 00002.png
            ├── catch
            └── ...
For OneHand10K data, please download from the OneHand10K Dataset. Please download the annotation files from onehand10k_annotations. Extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── onehand10k
        ├── annotations
        │   ├── onehand10k_train.json
        │   └── onehand10k_test.json
        ├── Train
        │   └── source
        │       ├── 0.jpg
        │       ├── 1.jpg
        │       └── ...
        └── Test
            └── source
                ├── 0.jpg
                └── 1.jpg
For FreiHAND data, please download from the FreiHand Dataset. Since the official dataset does not provide a validation set, we randomly split the training data into train/val/test with a ratio of 8:1:1. Please download the annotation files from freihand_annotations. Extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── freihand
        ├── annotations
        │   ├── freihand_train.json
        │   ├── freihand_val.json
        │   └── freihand_test.json
        └── training
            ├── rgb
            │   ├── 00000000.jpg
            │   ├── 00000001.jpg
            │   └── ...
            └── mask
                ├── 00000000.jpg
                ├── 00000001.jpg
                └── ...
For CMU Panoptic HandDB, please download from CMU Panoptic HandDB. Following Simon et al., the panoptic images (hand143_panopticdb) and the MPII & NZSL training set (manual_train) are used for training, while the MPII & NZSL test set (manual_test) is used for testing. Please download the annotation files from panoptic_annotations. Extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── panoptic
        ├── annotations
        │   ├── panoptic_train.json
        │   └── panoptic_test.json
        ├── hand143_panopticdb
        │   └── imgs
        │       ├── 00000000.jpg
        │       ├── 00000001.jpg
        │       └── ...
        └── hand_labels
            ├── manual_train
            │   ├── 000015774_01_l.jpg
            │   ├── 000015774_01_r.jpg
            │   └── ...
            └── manual_test
                ├── 000648952_02_l.jpg
                ├── 000835470_01_l.jpg
                └── ...
For InterHand2.6M data, please download from InterHand2.6M. Please download the annotation files from annotations. Extract them under $MMPOSE/data, and make them look like this:
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── interhand2.6m
        ├── annotations
        │   ├── all
        │   ├── human_annot
        │   ├── machine_annot
        │   ├── skeleton.txt
        │   └── subject.txt
        └── images
            ├── train
            │   └── Capture0 ~ Capture26
            ├── val
            │   └── Capture0
            └── test
                └── Capture0 ~ Capture7