VRGripper Environment Models

Contains the code for training models in the VRGripper environment from "Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards," including the models used in the Watch, Try, Learn (WTL) gripping experiments.

Authors

Allan Zhou¹, Eric Jang¹, Daniel Kappler², Alex Herzog², Mohi Khansari², Paul Wohlhart², Yunfei Bai², Mrinal Kalakrishnan², Sergey Levine¹,³, Chelsea Finn¹

¹Google Brain, ²X, ³UC Berkeley

Training the WTL gripping experiment models

WTL experiment models are located in vrgripper_env_wtl_models.py. Data is not included in this repository; you will need to provide your own training/eval datasets. Training is configured by the following gin configs:

  • configs/run_train_wtl_statespace_trial.gin: Train a trial policy on state-space observations.
  • configs/run_train_wtl_statespace_retrial.gin: Train a retrial policy on state-space observations.
  • configs/run_train_wtl_vision_trial.gin: Train a trial policy on image observations.
  • configs/run_train_wtl_vision_retrial.gin: Train a retrial policy on image observations.
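The four configs above follow a regular naming scheme, varying only the observation type (statespace/vision) and the policy stage (trial/retrial). As a sketch, a small hypothetical helper (not part of this repository) can build the correct config path from those two choices:

```python
import os

# Directory containing the gin configs listed above.
CONFIG_DIR = "configs"


def wtl_config_path(stage, obs):
    """Return the gin config path for a WTL training run.

    Args:
      stage: "trial" or "retrial" (which policy to train).
      obs: "statespace" or "vision" (observation type).
    """
    if stage not in ("trial", "retrial"):
        raise ValueError("stage must be 'trial' or 'retrial'")
    if obs not in ("statespace", "vision"):
        raise ValueError("obs must be 'statespace' or 'vision'")
    # Matches the naming pattern run_train_wtl_<obs>_<stage>.gin.
    return os.path.join(CONFIG_DIR, "run_train_wtl_%s_%s.gin" % (obs, stage))
```

For example, `wtl_config_path("trial", "vision")` yields `configs/run_train_wtl_vision_trial.gin`, the config for training a trial policy on image observations.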