- Download the finetuned Stacked Hourglass detections and our preprocessed H3.6M data here and unzip it to `data/motion3d`.

  Note that the preprocessed data is intended only to make reproducing our results easier. If you want to use the dataset itself, please register on the Human3.6M website and download the dataset in its original format. Please refer to LCN for how we prepare the H3.6M data.
- Slice the motion clips (len=243, stride=81); a minimal sketch of the slicing idea follows this step.

  ```bash
  python tools/convert_h36m.py
  ```
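For reference, this is the slicing idea in miniature, assuming a `(T, J, C)` pose array. `slice_clips` is illustrative, not the converter's actual function:

```python
import numpy as np

def slice_clips(poses, clip_len=243, stride=81):
    """Slice a (T, J, C) sequence into overlapping (N, clip_len, J, C) clips.

    With stride < clip_len, consecutive clips overlap by clip_len - stride frames.
    """
    starts = range(0, len(poses) - clip_len + 1, stride)
    return np.stack([poses[s:s + clip_len] for s in starts])

# e.g. a 1000-frame sequence of 17 joints in 3D -> (10, 243, 17, 3)
print(slice_clips(np.zeros((1000, 17, 3))).shape)
```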
Train from scratch:

```bash
python train.py \
    --config configs/pose3d/MB_train_h36m.yaml \
    --checkpoint checkpoint/pose3d/MB_train_h36m
```
Finetune from pretrained MotionBERT:

```bash
python train.py \
    --config configs/pose3d/MB_ft_h36m.yaml \
    --pretrained checkpoint/pretrain/MB_release \
    --checkpoint checkpoint/pose3d/FT_MB_release_MB_ft_h36m
```
Evaluate:

```bash
python train.py \
    --config configs/pose3d/MB_train_h36m.yaml \
    --evaluate checkpoint/pose3d/MB_train_h36m/best_epoch.bin
```
- Process using `Vicon-Read/caculateSkeleton.py`.
- Slice the motion clips (len=243, stride=81):

  ```powershell
  # 3D Pose
  python tools/convert_VEHSR3.py `
      --dt_root 'W:\VEHS\VEHS data collection round 3\processed' `
      --dt_file 'VEHS_3D_downsample5_keep1.pkl' `
      --root_path 'data/motion3d/MB3D_VEHS_R3/3DPose'

  # 6D Pose
  python tools/convert_VEHSR3.py `
      --dt_root 'W:\VEHS\VEHS data collection round 3\processed' `
      --dt_file 'VEHS_6D_downsample5_keep1.pkl' `
      --root_path 'data/motion3d/MB3D_VEHS_R3/6DPose'
  ```
- Copy the pkl files to `data/motion3d/MB3D_VEHS_R3/3DPose` and `data/motion3d/MB3D_VEHS_R3/6DPose`:

  ```powershell
  copy-item -path "W:\VEHS\VEHS data collection round 3\processed\VEHS_3D_downsample5_keep1.pkl" -destination "data/motion3d/MB3D_VEHS_R3/3DPose"
  copy-item -path "W:\VEHS\VEHS data collection round 3\processed\VEHS_6D_downsample5_keep1.pkl" -destination "data/motion3d/MB3D_VEHS_R3/6DPose"
  ```
Finetune from pretrained MotionBERT:

```powershell
# 3D Pose
python train.py `
    --config configs/pose3d/MB_ft_VEHSR3_3DPose.yaml `
    --pretrained checkpoint/pose3d/FT_MB_release_MB_ft_h36m `
    --checkpoint checkpoint/pose3d/3DPose_VEHSR3 `
    --selection best_epoch.bin `
    --resume checkpoint/pose3d/3DPose_VEHSR3/epoch_7.bin

# 6D Pose
python train.py `
    --config configs/pose3d/MB_ft_VEHSR3_6DPose.yaml `
    --pretrained checkpoint/pose3d/FT_MB_release_MB_ft_h36m `
    --checkpoint checkpoint/pose3d/6DPose_VEHSR3 `
    --selection best_epoch.bin
    # --resume checkpoint/pose3d/6DPose_VEHSR3/epoch_1.bin
```
Visualize the training process:

```powershell
tensorboard --logdir checkpoint/pose3d/6DPose_VEHSR3/logs
```
Evaluate:

```bash
python train.py --config configs/pose3d/MB_train_VEHSR3.yaml --evaluate checkpoint/pose3d/MB_train_VEHSR3_3DPose/best_epoch.bin
python train.py --config configs/pose3d/MB_train_h36m.yaml --evaluate checkpoint/pose3d/MB_train_h36m/best_epoch.bin
```
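Evaluation compares predicted joints against ground truth per joint; a minimal sketch of the standard MPJPE (mean per-joint position error) metric, where the function name is illustrative and not the repo's API:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error for (T, J, 3) root-relative poses."""
    # Euclidean distance per joint, averaged over joints and frames
    return np.linalg.norm(pred - gt, axis=-1).mean()
```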
- If `gt_2d` is set in the config, the 2D input uses the first two dimensions of the 3D pose here.
- MotionBERT's predictions are in camera coordinates in px. Multiply by the 2.5D factor to get meters and compare at test time; see the sketch after this list.
- Pelvis-centered:
  - `joint_2d` is the detection result in px
  - `joint3d_image` = [gt_joint2d, depth] in px
  - `joint_2.5d_image` = `joint3d_image` * `2.5d_factor` (normally 4.xx)
  - `joint_3d_camera` = `joint_2.5d_image`, but with the camera center as origin
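A minimal sketch of these relations, assuming `(T, J, 3)` arrays; the variable names and factor value are illustrative, and `pelvis_cam` (the pelvis position in camera coordinates) is a hypothetical input:

```python
import numpy as np

T, J = 243, 17
joint3d_image = np.random.randn(T, J, 3)  # [gt 2D in px, depth in px], pelvis-centered
pelvis_cam = np.random.randn(T, 1, 3)     # hypothetical: pelvis position in camera coords

# gt_2d input: the first two dimensions of the 3D pose
gt_2d = joint3d_image[..., :2]            # (T, J, 2)

# scale the pixel-space pose to metric units with the 2.5D factor
factor_25d = 4.2                          # illustrative value ("normally 4.xx")
joint_25d_image = joint3d_image * factor_25d

# same coordinates, but with the camera center (not the pelvis) as origin
joint_3d_camera = joint_25d_image + pelvis_cam
```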