This tutorial demonstrates how to perform depth-camera-based reconstruction using a RealSense camera, NVIDIA Isaac ROS Visual SLAM, and nvblox.
Note: This tutorial requires a compatible RealSense camera from the list available here.
This example is tested and compatible with RealSense camera firmware version 5.13.0.50, which is available here.
Note: We experienced issues with the latest RealSense firmware (version 5.14 at the time of publishing). Newer firmware may start working at some point, but our recommendation is to install exactly version 5.13.0.50.
We have found ROS 2 message delivery to be unreliable under high load without some small modifications to the QoS profile (especially on weaker machines). Before running this example, run:
sudo sysctl -w net.core.rmem_max=8388608 net.core.rmem_default=8388608
However, this only sets these parameters until reboot. To set them permanently, run:
echo -e "net.core.rmem_max=8388608\nnet.core.rmem_default=8388608\n" | sudo tee /etc/sysctl.d/60-cyclonedds.conf
More details on DDS tuning can be found here.
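A quick way to confirm that the settings took effect (for example, after a reboot) is to query the kernel parameters directly; both values should report 8388608 as set above:
sysctl net.core.rmem_max net.core.rmem_default   # expected: both values = 8388608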
- Complete steps 1 and 2 described in the quickstart guide to set up your development environment and clone the required repositories.
- Stop git tracking the COLCON_IGNORE file in the realsense_splitter package and remove it:
cd ${ISAAC_ROS_WS}/src/isaac_ros_nvblox/nvblox_examples/realsense_splitter && \
    git update-index --assume-unchanged COLCON_IGNORE && \
    rm COLCON_IGNORE
Note: The COLCON_IGNORE file was added to remove the dependency on realsense-ros for users that do not want to run the RealSense examples.
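If you later decide not to run the RealSense examples after all, this change can be reverted. The sketch below assumes you want git to track the file again and restore it from the repository:
cd ${ISAAC_ROS_WS}/src/isaac_ros_nvblox/nvblox_examples/realsense_splitter && \
    git update-index --no-assume-unchanged COLCON_IGNORE && \
    git checkout -- COLCON_IGNORE   # restore the deleted file from git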
- Complete the RealSense setup tutorial to set up librealsense outside of the isaac_ros_common Docker container, clone realsense_ros, and configure the container for use with RealSense.
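As a quick sanity check that librealsense ended up on the host (assuming it was installed through apt, which is the usual path for this setup), you can query the package manager:
dpkg -l | grep -i realsense   # librealsense packages should appear here if the install succeeded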
- Download the code for Isaac ROS Visual SLAM and Isaac ROS NITROS:
cd ${ISAAC_ROS_WS}/src && \
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam.git && \
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros.git
- Launch the Docker container using the run_dev.sh script (the ISAAC_ROS_WS environment variable will take care of the correct path depending on your SD card or SSD setup, as mentioned here):
Note: This step requires access to the internet to build and launch the Docker container properly!
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
    ./scripts/run_dev.sh ${ISAAC_ROS_WS}
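If the container fails to start, a common culprit is an unset or mistyped ISAAC_ROS_WS. A quick check before re-running the script:
echo $ISAAC_ROS_WS   # should print the absolute path of your workspace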
- Inside the container, install package-specific dependencies via rosdep:
cd /workspaces/isaac_ros-dev/ && \
    rosdep install -i -r --from-paths src --rosdistro humble -y --skip-keys "libopencv-dev libopencv-contrib-dev libopencv-imgproc-dev python-opencv python3-opencv nvblox"
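If rosdep complains that its sources list is missing or its cache is out of date, refreshing the local rosdep database first usually resolves it:
rosdep update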
- Build and source the workspace:
cd /workspaces/isaac_ros-dev && \
    colcon build --symlink-install && \
    source install/setup.bash
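If building the entire workspace takes too long, one option is to restrict the build to the example's bring-up package and its dependencies. This is a sketch under the assumption that nvblox_examples_bringup (the package launched below) pulls in everything this example needs:
cd /workspaces/isaac_ros-dev && \
    colcon build --symlink-install --packages-up-to nvblox_examples_bringup && \
    source install/setup.bash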
- Complete the sections above.
- Connect the RealSense device to your machine.
- At this point, verify that the RealSense camera is properly connected and streaming by running realsense-viewer, as mentioned here:
realsense-viewer
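If the camera does not show up in realsense-viewer, rs-enumerate-devices (installed alongside the librealsense tools) lists the detected devices together with their firmware versions, which is handy for confirming the 5.13.0.50 requirement mentioned above:
rs-enumerate-devices   # lists connected RealSense devices, serial numbers, and firmware versions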
- If successful, exit realsense-viewer and run the launch file to spin up the example:
ros2 launch nvblox_examples_bringup realsense_example.launch.py
Here are a few seconds of the result from running the example:
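To confirm that data is actually flowing once the example is running, you can inspect the ROS graph from a second terminal attached to the container. The exact topic names depend on the launch configuration, so treat the grep pattern below as an assumption:
ros2 topic list | grep -iE 'nvblox|visual_slam|camera'   # nvblox, visual SLAM, and camera topics should appear once the example is up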
If you want to run the RealSense example on recorded data, refer to the RealSense recording tutorial.
See our troubleshooting page here.