Visual Odometry Pipeline

Provides a basic visual odometry pipeline (feature extraction, feature matching, motion estimation) with several configurable options.

Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project

This project aims to build a simple visual odometry pipeline that can be used as a baseline for further research. The current documentation focuses specifically on feature-based stereo visual odometry.

A Short Introduction to Visual Odometry

What is Visual Odometry?

Visual odometry is the estimation of a robot's pose from the sequence of images it captures as it moves through the environment.

The basic VO pipeline

This project follows a basic visual odometry pipeline, which consists of the following steps:

  • Feature Extraction: Reliable and repeatable features need to be extracted from the image. The current version of the code supports SIFT, SURF, ORB, and R2D2.

  • Feature Matching: The extracted features are matched across frames using the mutual nearest neighbour algorithm.

  • Motion Estimation: Motion is estimated from 3D points and their 2D correspondences using OpenCV's solvePnPRansac function. For our experiments on a real robot, a ZED Mini camera was used, which directly provides the 3D points; for the KITTI dataset, we used Monodepth2 to obtain the corresponding depth maps. A sketch of these three stages follows this list.
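
A minimal sketch of the three stages described above, using OpenCV with ORB features for brevity; the function names and overall structure here are illustrative assumptions, not the repository's actual API:

import cv2
import numpy as np

def extract_features(gray):
    # Feature extraction: detect ORB keypoints and compute descriptors.
    orb = cv2.ORB_create(nfeatures=2000)
    return orb.detectAndCompute(gray, None)

def mutual_nn_match(desc1, desc2):
    # Feature matching: crossCheck=True makes BFMatcher keep a match
    # (i, j) only when j is the nearest neighbour of i and vice versa,
    # i.e. mutual nearest neighbours.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return matcher.match(desc1, desc2)

def estimate_motion(pts3d, pts2d, K):
    # Motion estimation: 3D points from the previous frame (via depth)
    # and their matched 2D pixels in the current frame go into PnP + RANSAC.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers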

Getting Started

Installation

Clone the repository

git clone https://github.com/Varghese-Kuruvilla/Visual-Odometry-pipeline.git

The packages required for getting started can be installed from the requirements file.

pip3 install -r requirements.txt

Usage

Running the VO code

  • The current version of the code requires the RGB images and the corresponding depth images to be stored in a single folder in the following manner:

    • 000000_depth.npy
    • 000000.png
    • 000001_depth.npy
    • 000001.png and so on.
  • The configuration is specified through the file vo_params.yaml. A minimal example is shown below, and a sketch of how it might be parsed follows this list:

vo_method: "rgbd" # Choose from monocular or rgbd. The monocular method is a work in progress; rgbd is stable
feature_extractor: "r2d2" # Choose from sift, orb, r2d2

# For offline mode
# The code for realtime VO using a depth camera will be added shortly
# Folder containing the rgb images: these should be of the form *.png

image_path: ""

# Camera intrinsic matrix of the form [fx, 0, cx, 0, fy, cy, 0, 0, 1]
# For KITTI

camera_intrinsic_matrix:
  - 721.53
  - 0.0
  - 609.55
  - 0.0
  - 721.53
  - 172.85
  - 0.0
  - 0.0
  - 1.0

output_filename: ../global_poses # Saved as a .npy file
visualize_results: True # Visualize the extracted features and the matches between images

## Parameters for plotting and evaluating the ATE and RPE
# GT should be in the KITTI ground-truth format

gt_txt_file_path: "../plot_utils/data/03.txt"
# The poses file is automatically generated on running vo_runner.py
poses_file_path: "../plot_utils/data/global_poses.npy"
  • The VO pipeline can be run with the vo_runner.py script:
python3 vo_runner.py
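
A minimal sketch (not the repository's actual loader; the helper names are assumptions for illustration) of how vo_params.yaml and the folder layout above might be consumed:

import glob
import os

import numpy as np
import yaml  # PyYAML

def load_config(path="vo_params.yaml"):
    with open(path) as f:
        params = yaml.safe_load(f)
    # The nine intrinsics are listed row-major; reshape into a 3x3 K matrix.
    K = np.array(params["camera_intrinsic_matrix"], dtype=np.float64).reshape(3, 3)
    return params, K

def paired_frames(image_dir):
    # Yield (rgb_path, depth_array) pairs following the
    # 000000.png / 000000_depth.npy naming convention.
    for rgb in sorted(glob.glob(os.path.join(image_dir, "*.png"))):
        stem, _ = os.path.splitext(rgb)
        depth_file = stem + "_depth.npy"
        if os.path.exists(depth_file):
            yield rgb, np.load(depth_file)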

Plotting the trajectories and computing the absolute trajectory error (ATE) and relative pose error (RPE)

  • Run the script plot_traj.py (in plot_utils) to plot the ground truth and the trajectory of the vehicle as estimated by visual odometry:
python3 plot_traj.py
  • Run the script prepare_data.py (in plot_utils) to prepare the data for estimating the ATE and the RPE:
python3 prepare_data.py
  • Run the script kittievalodom.py to estimate both the ATE and the RPE; a sketch of these metrics follows this list:
python3 kittievalodom.py
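
A minimal, translation-only sketch of the two metrics, assuming gt_xyz and est_xyz are (N, 3) NumPy arrays of already-aligned camera positions; the repository's kittievalodom.py may compute them differently:

import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    # Absolute trajectory error: RMS of the per-frame translational error.
    err = gt_xyz - est_xyz
    return np.sqrt((err ** 2).sum(axis=1).mean())

def rpe_trans_rmse(gt_xyz, est_xyz, delta=1):
    # Relative pose error (translational part): compare frame-to-frame
    # displacements over a fixed offset delta instead of absolute positions.
    gt_rel = gt_xyz[delta:] - gt_xyz[:-delta]
    est_rel = est_xyz[delta:] - est_xyz[:-delta]
    err = gt_rel - est_rel
    return np.sqrt((err ** 2).sum(axis=1).mean())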


Roadmap

  • Add offline stereo visual odometry
  • Add scripts for plotting trajectories
  • Add scripts for error computation
  • Add realtime stereo visual odometry
  • Add monocular visual odometry

Contributing

Any contributions to this project are encouraged and greatly appreciated. If you have a suggestion that would improve the project, please fork the repository and create a pull request. You can also simply open an issue. Don't forget to give the project a star. Thanks!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See the LICENSE file for more information.

Contact

Project Link: https://github.com/Varghese-Kuruvilla/Visual-Odometry-pipeline


Acknowledgments

