# 🤸♂️💥🚗 Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective [video]
```shell
conda create -n PVCP_env python=3.7
conda activate PVCP_env

# Please install PyTorch according to your CUDA version.
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
```
Some of our code and dependencies were adapted from MotionBERT.

We also provide a dedicated tool for SMPL annotation: SMPL_Tools.
Download the PVCP dataset (≈43 GB). Directory structure:
```
PVCP
├── annotation
│   ├── dataset_2dpose.json
│   ├── dataset_mesh (coming soon).json
│   ├── mb_input_det_pose.json
│   ├── train_test_seq_id_list.json
│   ├── mesh_det_pvcp_train_release (coming soon).pkl
│   └── mesh_det_pvcp_train_gt2d_test_det2d (coming soon).pkl
├── frame
│   └── image2frame.py
├── image
│   ├── S000_1280x720_F000000_T000000.png
│   ├── S000_1280x720_F000001_T000001.png
│   ├── S000_1280x720_F000002_T000002.png
│   ├── ...
│   └── S208_1584x660_F000207_T042510.png
├── video
│   ├── S000_1280x720.mp4
│   ├── S001_1280x720.mp4
│   ├── S002_1280x720.mp4
│   ├── ...
│   └── S208_1584x660.mp4
└── vis_2dkpt_ann.mp4
```
For the `frame` folder, run `image2frame.py`. The resulting folder structure is as follows:
```
frame
├── frame_000000.png
├── frame_000001.png
├── frame_000002.png
├── ...
└── frame_042510.png
```
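If you need to regenerate the flat `frame` folder yourself, the conversion can be sketched as follows. This is our own minimal sketch, not the repo's `image2frame.py`: it assumes the trailing `T` index in each image filename (e.g. `S000_1280x720_F000002_T000002.png`) is the global frame id used for the flat `frame_XXXXXX.png` names.

```python
import os
import re
import shutil

def frame_name(image_name):
    """Map 'S000_1280x720_F000002_T000002.png' -> 'frame_000002.png'
    using the global T index at the end of the filename."""
    m = re.match(r"S\d+_\d+x\d+_F\d+_T(\d+)\.png$", image_name)
    if m is None:
        raise ValueError(f"unexpected filename: {image_name}")
    return f"frame_{m.group(1)}.png"

def convert(image_dir="PVCP/image", frame_dir="PVCP/frame"):
    """Copy every image into a flat frame folder under its frame name."""
    os.makedirs(frame_dir, exist_ok=True)
    for name in sorted(os.listdir(image_dir)):
        if name.endswith(".png"):
            shutil.copy(os.path.join(image_dir, name),
                        os.path.join(frame_dir, frame_name(name)))
```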
- We are working on more refined gesture labeling.
- We will add more types of annotation information.
- ...
```
PVCP
├── checkpoint
├── configs
│   ├── mesh
│   └── pretrain
├── data
│   ├── mesh
│   └── pvcp
├── lib
│   ├── data
│   ├── model
│   └── utils
├── params
├── tools
├── LICENSE
├── README_MotionBERT.md
├── requirements.txt
├── train_mesh_pvcp.py
└── infer_wild_mesh_list.py
```
- Download the other datasets here and put them in `data/mesh/`. We use Human3.6M, COCO, and PW3D for training and testing. Descriptions of the joint regressors can be found in SPIN.
- Download the SMPL model (`basicModel_neutral_lbs_10_207_0_v1.0.0.pkl`) from SMPLify, put it in `data/mesh/`, and rename it `SMPL_NEUTRAL.pkl`.
- Download the PVCP dataset and put it in `data/pvcp/`. Move `mesh_det_pvcp_train_release.pkl` and `mesh_det_pvcp_train_gt2d_test_det2d.pkl` to `data/mesh/`.
- You can also skip the above steps and download our data (including the PVCP dataset) and checkpoint folders directly. The final `data` directory structure is as follows:

```
data
├── mesh
│   ├── J_regressor_extra.npy
│   ├── J_regressor_h36m_correct.npy
│   ├── mesh_det_coco.pkl
│   ├── mesh_det_h36m.pkl
│   ├── mesh_det_pvcp_train_gt2d_test_det2d.pkl
│   ├── mesh_det_pvcp_train_release.pkl
│   ├── mesh_det_pw3d.pkl
│   ├── mesh_hybrik.zip
│   ├── smpl_mean_params.npz
│   └── SMPL_NEUTRAL.pkl
└── pvcp
    ├── annotation
    │   ├── dataset_2dpose.json
    │   ├── dataset_mesh (coming soon).json
    │   ├── mb_input_det_pose.json
    │   ├── train_test_seq_id_list.json
    │   ├── mesh_det_pvcp_train_release (coming soon).pkl
    │   └── mesh_det_pvcp_train_gt2d_test_det2d (coming soon).pkl
    ├── frame
    ├── image
    └── video
```
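Since several files must be downloaded, moved, and renamed by hand, a quick sanity check of the `data/mesh/` layout can save a failed training run. This is a small sketch of our own (not part of the repo), using the file list from the tree above:

```python
import os

# Expected files under data/mesh/ (taken from the directory listing above).
MESH_FILES = [
    "J_regressor_extra.npy",
    "J_regressor_h36m_correct.npy",
    "mesh_det_coco.pkl",
    "mesh_det_h36m.pkl",
    "mesh_det_pvcp_train_gt2d_test_det2d.pkl",
    "mesh_det_pvcp_train_release.pkl",
    "mesh_det_pw3d.pkl",
    "mesh_hybrik.zip",
    "smpl_mean_params.npz",
    "SMPL_NEUTRAL.pkl",
]

def missing_files(root="data/mesh"):
    """Return the expected files that are not present under `root`."""
    return [f for f in MESH_FILES if not os.path.isfile(os.path.join(root, f))]

if __name__ == "__main__":
    missing = missing_files()
    print("all files present" if not missing else f"missing: {missing}")
```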
Finetune from a pretrained model with PVCP
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python train_mesh_pvcp.py \
    --config configs/mesh/MB_ft_pvcp.yaml \
    --pretrained checkpoint/pretrain/MB_release \
    --checkpoint checkpoint/mesh/ft_pvcp_iter3_class0.1_gt_release
```
Evaluate the fine-tuned model:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python train_mesh_pvcp.py \
    --config configs/mesh/MB_ft_pvcp.yaml \
    --evaluate checkpoint/mesh/ft_pvcp_iter3_class0.1_gt_release/best_epoch.bin
```
Run inference on in-the-wild videos:

```shell
python infer_wild_mesh_list.py --out_path output/
```
```bibtex
@inproceedings{wang2024pedestriancentric,
  title={Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective},
  author={MeiJun Wang and Yu Meng and Zhongwei Qiu and Chao Zheng and Yan Xu and Xiaorui Peng and Jian Gao},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=ldvfaYzG35}
}
```