This document describes recommended and advanced workflows for building and using the Holoscan SDK from source. This is generally not the simplest way to use the SDK, so make sure to review the project README before getting started.
⚠️ Disclaimer: we only recommend building the SDK from source if you are a developer of the SDK, or need to build the SDK with debug symbols or other options not used as part of the published packages.
- If you want to write your own operator or application, you can use the SDK as a dependency (and contribute to HoloHub).
- If you need to make other modifications to the SDK, file a feature or bug request.
- Refer to the Holoscan SDK User Guide installation instructions for guidance on installing Holoscan SDK from published packages.
- Prerequisites for each supported platform are documented in the user guide.
- To build and run the SDK in a containerized environment (recommended), you'll need:
  - the NVIDIA Container Toolkit v1.12.2+
  - Docker, including the buildx plugin (`docker-buildx-plugin`)
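To confirm these prerequisites are in place, a quick sanity check along the following lines can help (these are standard Docker and NVIDIA Container Toolkit CLIs, not part of this repository's `./run` script):

```sh
# Sanity-check the container prerequisites (standard CLIs, not part of ./run)
docker buildx version           # confirms the docker-buildx-plugin is installed
nvidia-ctk --version            # confirms the NVIDIA Container Toolkit is installed
docker info | grep -i runtimes  # the listed runtimes should include "nvidia"
```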
Call `./run build` within the repository to build the build container and the CMake project.

- If you encounter errors during the CMake build, you can execute `./run clear_cache` to remove the cache/build/install folders.
- Execute `./run build --help` for more information.
- Execute `./run build --dryrun` to see the commands that will be executed.
- That command can also be broken up into more granular commands:

```sh
./run check_system_deps  # ensure the system is properly configured for building
./run build_image        # create the build Docker container
./run build              # run the CMake configuration, build, and install steps
```
Call the `./run launch` command to start and enter the build container.

- You can run from the `install` or `build` tree by passing the working directory as an argument (ex: `./run launch install`).
- Execute `./run launch --help` for more information.
- Execute `./run launch --dryrun` to see the commands that will be executed.
- Execute `./run launch --run-cmd "..."` to execute a bash command directly in the container.
Run the examples inside the container by following the respective commands listed in each example directory's README file.
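As an illustration only (the exact paths and commands are listed in each example's README, so treat the invocation below as a sketch rather than the documented command):

```sh
# Enter the container from the install tree, then run one of the examples.
# The path below is illustrative; use the command from that example's README.
./run launch install
python3 ./examples/hello_world/python/hello_world.py
```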
While the Dockerfile to build the SDK does not currently support true cross-compilation, you can compile the Holoscan SDK for the developer kits (arm64) from an x86_64 host using an emulation environment:

- Install qemu (see the sketch below for one way to register it with Docker)
- Clear your build cache: `./run clear_cache`
- Rebuild for `linux/arm64` using `--arch|-a` or `HOLOSCAN_BUILD_ARCH`:

```sh
./run build --arch arm64
# or
HOLOSCAN_BUILD_ARCH=arm64 ./run build
```
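One common way to set up the emulation is to register qemu's binfmt handlers with Docker through a third-party helper image (a sketch of one approach, not the only one; installing your distribution's `qemu-user-static` package is an alternative):

```sh
# Register qemu binfmt handlers so arm64 containers can run on an x86_64 host
# (third-party helper image; this setup method is an assumption, not a project requirement)
docker run --privileged --rm tonistiigi/binfmt --install arm64
```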
You can then copy the `install` folder generated by CMake to a developer kit with a configured environment, or into a container, to use for running and developing applications.
The `run` script mentioned above is helpful to understand how Docker and CMake are configured and run, as the commands are printed when running it or when using `--dryrun`. We recommend looking at those commands if you want to use Docker and CMake manually, and reading the comments inside the script for details about each parameter (specifically the `build()` and `launch()` methods).
⚠️ Disclaimer: this method of building the SDK is not actively tested or maintained. Instructions below might go out of date.
To build the Holoscan SDK in a local environment, the following versions of dev dependencies are needed (or tested). The last column refers to the stage (`FROM`) in the Dockerfile where the commands to build/install each dependency can be found.

| Dependency | Min version | Needed by | Dockerfile stage |
|---|---|---|---|
| CUDA | 12.6 | Core SDK | `base` |
| gRPC | 1.54.2 | Core SDK | `grpc-builder` |
| UCX | 1.17.0 | Core SDK | `base` |
| GXF | 4.1.1 | Core SDK | `gxf-downloader` |
| MOFED | 24.07 | ConnectX | `mofed-installer` |
| TensorRT | 10.3 | Inference operator | `base` |
| NVPL | 24.03 | LibTorch | `build` |
| ONNX Runtime | 1.18.1 | Inference operator | `onnxruntime-downloader` |
| LibTorch | 2.5.0 | Inference operator (torch plugin) | `torch-downloader-[x86_64\|arm64]` |
| TorchVision | 0.20.0 | Inference operator (torch plugin) | `torchvision-downloader-[x86_64\|arm64]` |
| Vulkan SDK | 1.3.216 | Holoviz operator | `vulkansdk-builder` |
| Vulkan loader and validation layers | 1.3.204 | Holoviz operator | `dev` |
| spirv-tools | 2022.1 | Holoviz operator | `dev` |
| V4L2 | 1.22.1 | V4L2 operator | `dev` |
| CMake | 3.24.0 | Build process | `build-tools` |
| Patchelf | N/A | Build process | `build-tools` |
Note: refer to the Dockerfile for other dependencies which are not needed to build, but might be needed for:
- runtime (openblas/mkl for torch, egl for headless rendering, cloudpickle for distributed python apps, cupy for some examples...)
- testing (valgrind, pytest, xvfb...)
- utilities (v4l-utils, ...)
For CMake to find these dependencies, install them in default system paths, or pass `CMAKE_PREFIX_PATH`, `CMAKE_LIBRARY_PATH`, and/or `CMAKE_INCLUDE_PATH` during configuration.
```sh
# Configure
cmake -S $source_dir -B $build_dir \
  -G Ninja \
  -D CMAKE_BUILD_TYPE=Release \
  -D CUDAToolkit_ROOT:PATH="/usr/local/cuda"

# Build
cmake --build $build_dir -j

# Install
cmake --install $build_dir --prefix $install_dir
```
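If some dependencies were installed outside the default system paths, the configure step can point CMake at them; the paths below are purely illustrative:

```sh
# Illustrative only: add non-default install locations so CMake can find dependencies
cmake -S $source_dir -B $build_dir \
  -G Ninja \
  -D CMAKE_BUILD_TYPE=Release \
  -D CUDAToolkit_ROOT:PATH="/usr/local/cuda" \
  -D CMAKE_PREFIX_PATH="/opt/grpc;/opt/onnxruntime"
```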
The commands to run the examples are then the same as in the dockerized environment, and can be found in the respective source directory READMEs.
There are multiple containers associated with Holoscan:
- The build container generated by the top-level Dockerfile is designed to pull dependencies to build and test the SDK itself. The image does not contain the SDK itself: the source tree is mounted during `docker run` to run the CMake build or the tests.
- The development container, available at NGC | Holoscan Container, includes all the development tools and libraries needed to build Holoscan applications.
- This image is ~13 GB when uncompressed. However, once a Holoscan application is created, it does not need all those same development tools just to run an application.
- To address this, a runtime container can now be generated with the runtime_docker/Dockerfile which contains only the runtime dependencies of the Holoscan SDK.
- This Dockerfile is based on the CUDA-base image, which begins with Ubuntu:22.04 and installs the CUDA runtime and Compat package.
- This image is ~8.7 GB on x86_64, and can be further reduced based on use cases (see below).
⚠️ Disclaimer: Currently iGPU is not supported by the runtime container
The `run` script contains the `build_run_image` command to build the runtime Holoscan SDK image:

```sh
./run build_run_image
```

Once this image is built, it can be run just like the Holoscan development container on NGC. Simply follow the 'Running the container' instructions starting at step #3 at NGC | Holoscan Container, but replace `${NGC_CONTAINER_IMAGE_PATH}` with `holoscan-sdk-run-<arch>[-<gpu>]` in step #4 (the image name is printed at the end of the above command).
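For reference, a run command along these lines is typical (a sketch only: the authoritative flags are in the NGC container instructions, the mount path is illustrative, and the image name must match the one printed by the build command):

```sh
# Minimal sketch assuming an x86_64 dGPU image; adjust the image name and flags
# to match the output of ./run build_run_image and your application's needs
docker run -it --rm --net host \
  --gpus all \
  -v /path/to/my_app:/workspace/my_app \
  holoscan-sdk-run-x86_64
```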
If you have a specific application you wish to deploy, you can further reduce this runtime image size in two ways:

- Targeting different stages of the runtime Dockerfile:
  - add `--cpp` to the command above to not pull in Python dependencies.
  - add `--cpp-no-mkl` to the command above to not pull in MKL (an x86_64-only LibTorch dependency) in addition to the above.
- Modifying the Dockerfile:
  The runtime Dockerfile is thoroughly documented to indicate which dependency is used by which component of the Holoscan SDK. If you do not use some of these components (ex: Torch inference backend, ONNX Runtime inference backend, TensorRT inference backend, Python/CuPy, format_converter operator, etc.), comment out the appropriate lines in the Dockerfile and run the build command above.
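For instance, a C++-only runtime image without the Python dependencies could be built with the `--cpp` flag described above:

```sh
# Smaller runtime image targeting the C++-only stage of the runtime Dockerfile
./run build_run_image --cpp
```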
Some utilities are available in the `scripts` folder; others closer to the build process are listed below:
Existing tests use GTest for C++ and pytest for Python, and can be found under `tests` and `python/tests` respectively. The Holoscan SDK uses CTest as a framework to build and execute these tests.

Run the tests using the following command:

```sh
./run test
```

Note: Run `./run test --help` to see additional options.
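If you prefer to invoke CTest directly against the CMake build tree inside the build container (an alternative to `./run test`, not the documented workflow), something along these lines should work:

```sh
# Run a subset of the registered tests straight from the CMake build tree
# (--test-dir requires CMake >= 3.20; the -R regex below is illustrative)
ctest --test-dir $build_dir -R "holoviz" --output-on-failure
```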
Run the following command to run various linting tools on the repository:

```sh
./run lint  # optional: specify directories
```

Note: Run `./run lint --help` to see the list of tools that are used. If a lint command fails due to a missing module or executable on your system, you can install it using `python3 -m pip install <tool>`.
The source of the user guide hosted at https://docs.nvidia.com/holoscan/sdk-user-guide is located in `docs`. It can be built with the following commands:

- PDF: `./run build_pdf`
- HTML: `./run build_html` (auto-reload: `./run live_html`)

Run `./run help` for more commands related to the user guide documentation.
Visual Studio Code can be used to develop the Holoscan SDK. The `.devcontainer` folder holds the configuration for setting up a development container with all the necessary tools and libraries installed.

The `./run` script contains `vscode` and `vscode_remote` commands for launching Visual Studio Code in a container or from a remote machine, respectively.

- To launch Visual Studio Code in a dev container, use `./run vscode` (`-j <# of workers>` or `--parallel <# of workers>` can be used to specify the number of parallel jobs to run during the build process). For more information, refer to the instructions from `./run vscode -h`.
- To attach to an existing dev container from a remote machine, use `./run vscode_remote`. For more information, refer to the instructions from `./run vscode_remote -h`.
Once Visual Studio Code is launched, the development container will be built, the recommended extensions will be installed automatically, and CMake will be configured.
For manual configuration of CMake, open the command palette (`Ctrl + Shift + P`) and run the `CMake: Configure` command.

The source code in the development container can be built by either pressing `Ctrl + Shift + B` or executing `Tasks: Run Build Task` from the command palette (`Ctrl + Shift + P`).

To debug the source code in the development container, open the `Run and Debug` view (`Ctrl + Shift + D`), select a debug configuration from the dropdown list, and press `F5` to initiate debugging.