Hitchhiking Rotations

A. René Geist, Jonas Frey, Mikel Zhobro, Anna Levina, and Georg Martius

Overview | Installation | Experiments | Development

Overview

Our work discusses recent trends in neural network regression with 3D rotations.

While the choice of loss function is important for learning with rotations, we illustrate that the choice of rotation representation (e.g., Euler angles, exponential coordinates, axis-angle, quaternions) is crucial.
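As a toy illustration of why low-dimensional parametrizations cause trouble (a minimal sketch of ours, not code from this repository): two rotations about the z-axis that are nearly identical in $\mathrm{SO}(3)$ can have yaw angles that are far apart, so a network regressing the angle must approximate a jump in the target function.

import math, torch

def yaw_matrix(theta: float) -> torch.Tensor:
    # rotation by theta radians about the z-axis
    c, s = math.cos(theta), math.sin(theta)
    return torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R1, R2 = yaw_matrix(3.10), yaw_matrix(-3.10)     # two nearby rotations
print(torch.linalg.matrix_norm(R1 - R2).item())  # ~0.12: close in SO(3)
print(abs(3.10 - (-3.10)))                       # 6.2: far apart in angle space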

Our recommendations for neural network regression with 3D rotations:

  • Changing the loss does not fix discontinuities: representations with three or four parameters introduce discontinuities into the target function when rotations are in the output. The resulting learning problems are not fixed by distance picking or by computing distances in $\mathrm{SO}(3)$.

  • For rotation estimation (rotations in model output) use $\mathbb{R}^9+\mathrm{SVD}$ or $\mathbb{R}^6+\mathrm{GSO}$ (see the sketch after this list). If the regression targets are only small rotations, using quaternions with a halfspace-map is a good option.

  • For feature prediction (rotations in model input) use rotation matrices. If under memory constraints, quaternions with a halfspace-map and data-augmentation are viable.
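The following is a minimal PyTorch sketch of the three mappings named above (an illustration with our own function names, not the repository's implementation; the actual helpers live in ./hitchhiking_rotations):

import torch
import torch.nn.functional as F

def svd_to_rotmat(x: torch.Tensor) -> torch.Tensor:
    # R^9 + SVD: project a batched 9D output onto the closest rotation matrix
    u, _, vt = torch.linalg.svd(x.reshape(-1, 3, 3))
    # flip the last column of U where needed so that det(R) = +1
    sign = torch.det(u @ vt).reshape(-1, 1, 1)
    u = torch.cat([u[..., :2], u[..., 2:] * sign], dim=-1)
    return u @ vt

def gso_to_rotmat(x: torch.Tensor) -> torch.Tensor:
    # R^6 + GSO: Gram-Schmidt orthonormalization of two 3D vectors
    a1, a2 = x[..., :3], x[..., 3:6]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)  # columns b1, b2, b3

def halfspace_map(q: torch.Tensor) -> torch.Tensor:
    # quaternion halfspace map: pick the representative of {q, -q} with
    # non-negative scalar part (assuming (w, x, y, z) ordering)
    return torch.where(q[..., :1] < 0, -q, q)

A network head with 9 (or 6) outputs can then be wrapped as R = svd_to_rotmat(net(x)) (or R = gso_to_rotmat(net(x))) before computing the loss.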

Note

To support these recommendations, we conducted several experiments and reproduced the results of previous works. To reproduce the paper's results, set up the environment as detailed in Installation and follow the instructions in Experiments.

Installation

We recommend installing the package into a virtual environment. Git LFS is used to fetch the datasets and our checkpoints/models.

git clone git@github.com:martius-lab/hitchhiking-rotations.git
pip3 install -e ./
pip3 install torch torchvision torchaudio

Experiments

All experiments are implemented in PyTorch; a few scripts used to create figures use JAX. Most experiments use scripts/train.py, which runs different neural network regression tasks via Hydra depending on the command-line arguments passed (see below).

The repo is organized as follows:

  • ./hitchhiking_rotations contains rotation representation helper functions, config files, data generators, loss functions, data loaders, and models.
  • ./assets/datasets contains the datasets used in the experiments. By default, the data inside the folders is used to train models. If you want to generate new data using the scripts in hitchhiking_rotations/datasets, just delete the files in this folder.
  • ./assets/results contains trained models, plots, and learning results that have been stored using logger.py.
  • ./visu contains scripts to visualize the results of the experiments and reproduce figures.

Data

To reproduce the paper's experiments, download the data and save it in the assets/datasets folder.

Note

The data is available here: link

Experiment 1, 2.1, and 2.2

Experiment                        Type                  <EXPERIMENT-NAME>
1: Rotation from point clouds     Rotation estimation   "pcd_to_pose"
2.1: Cube rotation from images    Rotation estimation   "cube_image_to_pose"
2.2: Cube rotation to images      Feature prediction    "pose_to_cube_image"
python scripts/train.py --experiment <EXPERIMENT-NAME>
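For instance, to run the cube-image rotation estimation task of Experiment 2.1:

python scripts/train.py --experiment "cube_image_to_pose"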

Experiment 3: 6D object pose estimation

Experiment 3 has its own repository.

Experiment 4: SO(3) as input to Fourier series

for nb in {1..5}; do for seed in {1..20}; do python scripts/train.py --seed $seed --experiment "pose_to_fourier_$nb"; done; done

Plots

To reproduce the paper's figures, run the following commands:

Figures                              Console command
Experiment 1                         python visu/figure_pointcloud.py
Experiment 2.1                       python visu/figure_exp2a.py
Experiment 2.2                       python visu/figure_exp2b.py
Experiment 3                         python visu/figure_posenet.py
Experiment 4                         python visu/figure_fourier.py
Figure 6 & 20: Lipschitz constants   python visu/lipschitz_constants.py
Figure 10: Training GSO & SVD        python visu/gso_vs_svd_animation.py
Figure 11: Gradient ratios of SVD    python visu/gso_vs_svd.py
Figure 18: Loss gradients            python visu/loss_gradients.py
Figure 19: MoCap data analysis       python visu/figure_mocap.py

Development

Code Formatting

pip3 install black==23.10
cd hitchhiking_rotations && black --line-length 120 ./

# Using precommit
pip3 install pre-commit
cd hitchhiking_rotations && python3 -m pre_commit install
cd hitchhiking_rotations && python3 -m pre_commit run

Add License Headers

pip3 install addheader
# If you are using zsh; otherwise remove the backslash before *.py
addheader hitchhiking_rotations -t .header.txt -p \*.py --sep-len 79 --comment='#' --sep=' '
