
DAD SLAM: Semantic Neural Mapping for Neural Field SLAM

Semantic Neural Mapping incorporates a semantic loss term into NeRF training to improve interpolation and reduce the amount of training data needed. More details can be found in our write-up.
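As a rough illustration (not the repo's exact training code), the idea is to add a weighted semantic term on top of the usual photometric and depth losses. The function and tensor names below are placeholders; sem_scale mirrors the --sem_scale flag described under Running DAD Map below.

import torch
import torch.nn.functional as F

def total_loss(rgb_pred, rgb_gt, depth_pred, depth_gt, sem_logits, sem_gt, sem_scale=1.0):
    rgb_loss = F.l1_loss(rgb_pred, rgb_gt)          # photometric term
    depth_loss = F.l1_loss(depth_pred, depth_gt)    # geometric term
    sem_loss = F.cross_entropy(sem_logits, sem_gt)  # per-ray semantic class term
    return rgb_loss + depth_loss + sem_scale * sem_loss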

Note that this was a final project for TTIC 31170: Robot Learning & Estimation at the University of Chicago's Toyota Technological Institute. The paper was written in NeurIPS style but was not submitted to any conference.

This project is built on top of vMAP. The original repo can be found here, and we have included their citations and heading content below.


vMAP: Vectorised Object Mapping for Neural Field SLAM

Xin Kong · Shikun Liu · Marwan Taher · Andrew Davison

vMAP builds an object-level map from a real-time RGB-D input stream. Each object is represented by a separate MLP neural field model, all optimised in parallel via vectorised training.
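For intuition, the snippet below sketches the vectorised-training idea using the torch.func ensembling pattern (the repo itself builds on functorch): several small per-object MLPs are evaluated in a single batched call. The model sizes and names are illustrative only, not vMAP's actual architecture.

import copy
import torch
from torch.func import stack_module_state, functional_call

K = 8  # e.g. one small MLP per object
models = [torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
    for _ in range(K)]

params, buffers = stack_module_state(models)  # stack weights along a new leading dim
base = copy.deepcopy(models[0]).to('meta')    # stateless "skeleton" module

def forward_one(p, b, x):
    return functional_call(base, (p, b), (x,))

x = torch.randn(K, 1024, 3)                      # one ray batch per object model
y = torch.vmap(forward_one)(params, buffers, x)  # all K MLPs run in one vectorised call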


We provide the implementation of the following neural-field SLAM frameworks:

  • vMAP [Official Implementation]
  • iMAP [Simplified and Improved Re-Implementation, with depth-guided sampling]
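The depth-guided sampling mentioned above can be sketched as follows (an assumed illustration, not the repo's exact sampler): most samples along each ray are drawn in a narrow band around the measured depth, with a few uniform samples covering free space.

import torch

def depth_guided_samples(depth, n_surface=16, n_uniform=8, band=0.1, far=8.0):
    # depth: (R,) measured depth per ray
    R = depth.shape[0]
    surface = depth[:, None] + band * torch.randn(R, n_surface)  # Gaussian band at the surface
    uniform = far * torch.rand(R, n_uniform)                     # coarse free-space coverage
    z, _ = torch.sort(torch.cat([surface, uniform], dim=1), dim=1)
    return z.clamp(min=1e-3)                                     # sorted sample depths per ray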

Install

First, create a conda virtual environment with the required dependencies:

conda env create -f environment.yml

Dataset

Please download the following datasets to reproduce our results.

  • Replica Demo - Replica Room 0 only for faster experimentation.
  • Replica - All Replica sequences.
  • ScanNet - Official ScanNet sequences.

Each dataset contains a sequence of RGB-D images, their corresponding camera poses, and object instance labels. To extract data from ScanNet .sens files, run

conda activate py2
python2 reader.py --filename ~/data/ScanNet/scannet/scans/scene0024_00/scene0024_00.sens --output_path ~/data/ScanNet/objnerf/ --export_depth_images --export_color_images --export_poses --export_intrinsics

Config

Then update the .json config files in configs/ with your dataset paths and other training hyper-parameters:

"dataset": {
        "path": "path/to/ims/folder/",
    }

Running DAD Map / vMAP / iMAP

The following commands run DAD Map / vMAP / iMAP in a single-thread setting.

DAD Map

python ./train.py --config ./configs/Replica/config_replica_room0_vMAP.json --save_ckpt True --semantic_loss True --sem_scale [SEMANTIC LOSS TERM WEIGHT AS FLOAT, default = 1]

vMAP

python ./train.py --config ./configs/Replica/config_replica_room0_vMAP.json --save_ckpt True

iMAP

python ./train.py --config ./configs/Replica/config_replica_room0_iMAP.json --save_ckpt True

Evaluation

First, run the line below to construct and save the object and scene meshes estimated during training.

python ./mesh_build.py --config [PATH TO CONFIG FILE]
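For reference, the general recipe behind such a script is to query the trained field on a dense 3D grid and run marching cubes on the result. The sketch below illustrates this under assumed names (field returning a scalar occupancy/density per point, and an assumed level threshold); mesh_build.py's actual details may differ.

import torch
import trimesh
from skimage import measure

def extract_mesh(field, bounds_min, bounds_max, res=256, level=0.5):
    # Build a res^3 grid of query points spanning the scene bounds
    xs = [torch.linspace(lo, hi, res) for lo, hi in zip(bounds_min, bounds_max)]
    grid = torch.stack(torch.meshgrid(*xs, indexing='ij'), dim=-1).reshape(-1, 3)
    with torch.no_grad():
        occ = field(grid).reshape(res, res, res).cpu().numpy()  # occupancy volume
    verts, faces, _, _ = measure.marching_cubes(occ, level=level)
    return trimesh.Trimesh(vertices=verts, faces=faces)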

To evaluate the quality of the reconstructed scenes, we provide two different methods:

3D Scene-level Evaluation

Following the original iMAP, we compare with GT scene meshes using Accuracy, Completion, and Completion Ratio.

python ./metric/eval_3D_scene.py
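For reference, these metrics are commonly computed over point clouds sampled from the two meshes, roughly as below (an assumed sketch following the iMAP definitions; the repo's script may differ in sampling density and thresholds).

import numpy as np
from scipy.spatial import cKDTree

def scene_metrics(pred_pts, gt_pts, thresh=0.05):
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)  # accuracy: pred -> GT
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)  # completion: GT -> pred
    return {
        "accuracy": d_pred_to_gt.mean(),
        "completion": d_gt_to_pred.mean(),
        "completion_ratio": (d_gt_to_pred < thresh).mean(),
    }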

3D Object-level Evaluation

We also provide object-level evaluation, computing the same metrics but averaging across all objects in a scene.

python ./metric/eval_3D_obj.py
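As a usage note, the object-level variant can be thought of as reusing the scene_metrics() sketch above per object and averaging (again an assumption about the script's behaviour; the per-object point-cloud lists below are hypothetical).

import numpy as np

# pred_obj_clouds / gt_obj_clouds: hypothetical lists of per-object point clouds
per_obj = [scene_metrics(p, g) for p, g in zip(pred_obj_clouds, gt_obj_clouds)]
averaged = {k: float(np.mean([r[k] for r in per_obj])) for k in per_obj[0]}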

Results

We provide raw results, including 3D meshes, 2D novel view renderings, and evaluated metrics of vMAP and iMAP*, for easier comparison.

Acknowledgement

We would like to thank the following open-source repositories that we have built upon in the implementation of this work: NICE-SLAM and functorch.

Citation

If you found this code/work useful in your own research, please consider citing the following:

@article{kong2023vmap,
  title={vMAP: Vectorised Object Mapping for Neural Field SLAM},
  author={Kong, Xin and Liu, Shikun and Taher, Marwan and Davison, Andrew J},
  journal={arXiv preprint arXiv:2302.01838},
  year={2023}
}
@inproceedings{sucar2021imap,
  title={iMAP: Implicit mapping and positioning in real-time},
  author={Sucar, Edgar and Liu, Shikun and Ortiz, Joseph and Davison, Andrew J},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={6229--6238},
  year={2021}
}
