Configure the virtualenv, e.g. `mkvirtualenv gdon`, and install the dependencies with `pip install -r requirements.txt`.
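A minimal sketch of that setup, assuming virtualenvwrapper provides `mkvirtualenv` and you run it from the repository root:

```bash
# Create and activate the virtualenv (mkvirtualenv activates it on creation),
# then install the Python dependencies.
mkvirtualenv gdon
pip install -r requirements.txt
```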
The data is already available on the Entropy server under `/scidatasm/dense_object_nets`.
Alternatively, you can download new data with:
```
python config/download_pdc_data.py config/dense_correspondence/dataset/composite/caterpillar_upright.yaml <full_path_to_data_location>
```
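For example, with a purely hypothetical target directory in place of `<full_path_to_data_location>`:

```bash
# The target directory below is only illustrative; point it wherever you want the dataset to live.
python config/download_pdc_data.py \
    config/dense_correspondence/dataset/composite/caterpillar_upright.yaml \
    /data/$USER/dense_object_nets
```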
Build the docker image on your local machine:
```
cd pytorch-dense-correspondence
git submodule update --init --recursive
cd docker
./docker_build.py
```
Sign up for Docker Hub: https://hub.docker.com/signup. This will be needed to download docker images on the Entropy cluster.
Log in on your machine with `docker login` using the created credentials.
Then tag and push your docker image to Docker Hub:
```
docker tag <docker-id-eg-f69d0749ca1e> <hub-user>/<repo-name>:<tag>
docker push <hub-user>/<repo-name>:<tag>
```
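For example, with hypothetical values for the image ID, Docker Hub user, repository name, and tag:

```bash
# All names below are placeholders; substitute your own image ID and Docker Hub account.
docker login
docker tag f69d0749ca1e johndoe/gdon:latest
docker push johndoe/gdon:latest
```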
Log in to the Entropy cluster: `<user_name>@entropy.mimuw.edu.pl`.
Download the docker image using Singularity:
```
singularity pull /results/$USER/gdon_latest.sif docker://<hub-user>/<repo-name>
```
Comment: Pulling compresses all docker layers into a single SIF (`.sif`) file. This is a heavy file (~4GB). However, we also want to access the SIF file on worker nodes, so we need to save it to a directory that is synced with the worker nodes. These are `/results` (5GB limit for students) and `/scidatasm` (synced every 10 minutes).
For now, the scripts expect the SIF file under `/results/$USER/gdon_latest.sif`.
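For example, reusing the hypothetical Docker Hub coordinates from above:

```bash
# Pull to the exact path the scripts expect, then check the resulting file size (~4GB).
singularity pull /results/$USER/gdon_latest.sif docker://johndoe/gdon:latest
du -h /results/$USER/gdon_latest.sif
```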
Upload the code to the Entropy cluster under `/results/$USER/`.
You can do this by setting up SSH Agent Forwarding and cloning this repository from GitHub; see this for more information.
We store the code under the `/results` dir because the code also needs to be available for the worker nodes.
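A sketch of that flow, assuming your GitHub key is already loaded into a local ssh-agent (the clone URL is a placeholder; use this repository's actual URL):

```bash
# On your local machine: connect with agent forwarding enabled (-A).
ssh -A <user_name>@entropy.mimuw.edu.pl

# On the Entropy login node: clone into /results/$USER so worker nodes can see the code.
cd /results/$USER
git clone git@github.com:<org>/general-dense-object-nets.git
```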
Comment: A convenient way is to develop on your local machine (with IDE, git access, etc.) and deploy small incremental changes to the Entropy server.
When using PyCharm, it is very convenient to configure automatic deployment of your changes to the Entropy server.
You can do this under Tools -> Deployment -> Configuration.
Select `SFTP` and `OpenSSH config and authentication agent`.
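Optionally, a minimal `~/.ssh/config` entry (the `entropy` host alias is only an example) lets both plain `ssh` and PyCharm's "OpenSSH config and authentication agent" option pick up agent forwarding automatically:

```bash
# Append a host entry for the cluster; adjust the alias and user name as needed.
cat >> ~/.ssh/config <<'EOF'
Host entropy
    HostName entropy.mimuw.edu.pl
    User <user_name>
    ForwardAgent yes
EOF
```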
After cloning the repo, run on the server:
```
cd general-dense-object-nets
git submodule update --init --recursive
```
You then need to add the `.env` file in `config` with the Neptune setup. See the Logging section below.
Now you are ready to submit your job using `bash run_batch.sh` from the code directory.
You can see the status of your jobs using `squeue` and the logs under `/results/$USER/train_gdon_log.txt`.
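Put together, a typical batch-submission session on the login node might look like this (using the paths described above):

```bash
# Submit the training job from the code directory.
cd /results/$USER/general-dense-object-nets
bash run_batch.sh

# Check job status and follow the training log.
squeue -u $USER
tail -f /results/$USER/train_gdon_log.txt
```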
Alternatively, you can run an interactive job using the `run.sh` script.
You can access Jupyter and Tensorboard running on the Entropy cluster by setting up a tunnel, e.g.:
```
ssh -N -L 8888:localhost:8888 <user>@entropy.mimuw.edu.pl
```
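If you also want TensorBoard, and assuming it listens on its default port 6006 on the cluster, you can forward both ports in one command:

```bash
# Forward Jupyter (8888) and TensorBoard (6006, assumed default) to your local machine.
ssh -N -L 8888:localhost:8888 -L 6006:localhost:6006 <user>@entropy.mimuw.edu.pl
```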
For additional information refer to the document with the project description: https://docs.google.com/document/d/1Cq5LK8KdpZXHa9k9BCUp3NHovZnRnwo60e0jzbM_y18/edit?usp=sharing
Login node:
```
srun --partition=common --qos=8gpu3d --gres=gpu:1 --cpus-per-task 8 --nodelist=asusgpu4 --pty bash
```
Compute node:
```
bash /results/$USER/general-dense-object-nets/singularity_exec.sh -i /results/$USER/gdon_latest.sif
```
In the training config file, add some metadata for AP Loss and its sampling strategy. Example:
```yaml
loss_function:
  name: 'aploss'
  nq: 25
  num_samples: 150
  sampler:
    name: 'random' # choice: {'don', 'ring', 'random'}
```
Some sampling strategies require additional params (e.g. the `ring` strategy):
```yaml
loss_function:
  name: 'aploss'
  nq: 25
  num_samples: 150
  sampler:
    name: 'ring' # choice: {'don', 'ring', 'random'}
    inner_radius: 20
    outter_radius: 30
```
Currently we support logging to Neptune. Sign up here: https://neptune.ai/.
Create `config/.env` with the following and paste your key there:
```
export NEPTUNE_API_TOKEN="YOUR KEY"
```
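One way to create the file from the shell (the token value is a placeholder) and keep it readable only by you:

```bash
# Write the token to config/.env and restrict permissions so only you can read it.
cat > config/.env <<'EOF'
export NEPTUNE_API_TOKEN="YOUR KEY"
EOF
chmod 600 config/.env
```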
In the training config file, add some metadata for logging. Example:
```yaml
logging:
  backend: 'neptune'
  username: 'tgasior'
  project: 'general-dense-object-nets'
  experiment: 'shoes'
  description: 'This is an example description'
  tags: # list as many tags as you want; they are meant to help you search/filter experiments
    - 'general-dense-object-nets'
    - 'tomek'
    - 'aploss'
```
- September 4, 2018: Tutorial and data now available! We have a tutorial available here, which walks step-by-step through getting this repo running.
- June 26, 2019: We have updated the repo to PyTorch 1.1 and CUDA 10. For the code used for the experiments in the paper, see here.
In this project we learn Dense Object Nets, i.e. dense descriptor networks for previously unseen, potentially deformable objects, and potentially classes of objects:
We also demonstrate using Dense Object Nets for robotic manipulation tasks:
This is the reference implementation for our paper:
"Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation"
Pete Florence*, Lucas Manuelli*, Russ Tedrake
Abstract: What is the right object representation for manipulation? We would like robots to visually perceive scenes and learn an understanding of the objects in them that (i) is task-agnostic and can be used as a building block for a variety of manipulation tasks, (ii) is generally applicable to both rigid and non-rigid objects, (iii) takes advantage of the strong priors provided by 3D vision, and (iv) is entirely learned from self-supervision. This is hard to achieve with previous methods: much recent work in grasping does not extend to grasping specific objects or other tasks, whereas task-specific learning may require many trials to generalize well across object configurations or other tasks. In this paper we present Dense Object Nets, which build on recent developments in self-supervised dense descriptor learning, as a consistent object representation for visual understanding and manipulation. We demonstrate they can be trained quickly (approximately 20 minutes) for a wide variety of previously unseen and potentially non-rigid objects. We additionally present novel contributions to enable multi-object descriptor learning, and show that by modifying our training procedure, we can either acquire descriptors which generalize across classes of objects, or descriptors that are distinct for each object instance. Finally, we demonstrate the novel application of learned dense descriptors to robotic manipulation. We demonstrate grasping of specific points on an object across potentially deformed object configurations, and demonstrate using class general descriptors to transfer specific grasps across objects in a class.
If you find this code useful in your work, please consider citing:
```
@article{florencemanuelli2018dense,
  title={Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation},
  author={Florence, Peter and Manuelli, Lucas and Tedrake, Russ},
  journal={Conference on Robot Learning},
  year={2018}
}
```
To prevent the repo from growing in size, we recommend always using "restart and clear outputs" before committing any Jupyter notebooks. If you'd like to save what your notebook looks like, you can always "download as .html", which is a great way to snapshot the state of a notebook and share it.