This extension provides a compact Python binding, built on the open standard OpenXR for augmented reality (AR) and virtual reality (VR), for creating extended reality applications that take advantage of NVIDIA Omniverse rendering capabilities. In addition to updating views (e.g., head-mounted display), it enables subscribing to input events (e.g., controller buttons and triggers) and executing output actions (e.g., haptic vibration) through a simple and efficient API for accessing conformant devices such as HTC Vive, Oculus, and others.
Target applications: any NVIDIA Omniverse app with the `omni.syntheticdata` extension installed (e.g., Isaac Sim, Code, etc.)
- Tested on Ubuntu 18.04/20.04 with SteamVR beta 1.24.3, Omniverse Code 2022.1.3, and Isaac Sim 2022.1.1
Supported OS: Linux
Changelog: CHANGELOG.md
Table of Contents:

- Extension setup
- Diagrams
- Sample code
- GUI launcher
- Extension API
  - Acquiring extension interface
  - API
    - init
    - is_session_running
    - create_instance
    - get_system
    - create_session
    - poll_events
    - poll_actions
    - render_views
    - subscribe_action_event
    - apply_haptic_feedback
    - stop_haptic_feedback
    - setup_mono_view
    - setup_stereo_view
    - get_recommended_resolutions
    - set_reference_system_pose
    - set_stereo_rectification
    - set_meters_per_unit
    - set_frame_transformations
    - teleport_prim
    - subscribe_render_event
    - set_frames
  - Available enumerations
  - Available constants
Extension setup:

1. Add the extension using the Extension Manager or by following the steps in Extension Search Paths
   - Git url (git+https) as extension search path:
     `git+https://github.com/Toni-SM/semu.xr.openxr.git?branch=main&dir=exts`
   - Compressed (.zip) file for import
2. Enable the extension using the Extension Manager or by following the steps in Extension Enabling/Disabling
3. Import the extension into any python code and use it:
   `from semu.xr.openxr import _openxr`
4. Or use the GUI launcher to directly display the current stage in the HMD
Diagrams:

- High-level overview of extension usage, including the order of function calls, callbacks, and the action and rendering loop
- Typical OpenXR application showing the grouping of the standard functions under the compact binding provided by the extension (adapted from the openxr-10-reference-guide.pdf)
Sample code:

The following sample code shows a typical workflow that configures and renders on a stereo headset the view generated by an Omniverse application. It configures and subscribes two input actions to the left controller to 1) mirror the pose of the controller on a simulated sphere and 2) change the dimensions of the sphere based on the position of the trigger. In addition, an output action, a haptic vibration, is configured and executed when the controller trigger reaches its maximum position.

A short video, after the code, shows a test of the OpenXR application from the Script Editor using an HTC Vive Pro.
```python
import omni
from pxr import UsdGeom
from semu.xr.openxr import _openxr

# get the stage unit
stage = omni.usd.get_context().get_stage()
meters_per_unit = UsdGeom.GetStageMetersPerUnit(stage)

# create a sphere (1 centimeter radius) to mirror the controller's pose
sphere_prim = stage.DefinePrim("/sphere", "Sphere")
sphere_prim.GetAttribute("radius").Set(0.01 / meters_per_unit)

# acquire the extension interface
xr = _openxr.acquire_openxr_interface()

# set up the OpenXR application using the default parameters
xr.init()
xr.create_instance()
xr.get_system()

# action callback
def on_action_event(path, value):
    # process the controller's trigger
    if path == "/user/hand/left/input/trigger/value":
        # modify the sphere's radius (from 1 to 10 centimeters) according to the controller's trigger position
        sphere_prim.GetAttribute("radius").Set((value * 9 + 1) * 0.01 / meters_per_unit)
        # apply a haptic vibration when the controller's trigger is fully depressed
        if value == 1:
            xr.apply_haptic_feedback("/user/hand/left/output/haptic",
                                     {"duration": _openxr.XR_MIN_HAPTIC_DURATION})
    # mirror the controller's pose on the sphere (cartesian position and rotation as quaternion)
    elif path == "/user/hand/left/input/grip/pose":
        xr.teleport_prim(sphere_prim, value[0], value[1])

# subscribe to the controller actions (haptic actions don't require callbacks)
xr.subscribe_action_event("/user/hand/left/input/grip/pose", callback=on_action_event,
                          reference_space=_openxr.XR_REFERENCE_SPACE_TYPE_LOCAL)
xr.subscribe_action_event("/user/hand/left/input/trigger/value", callback=on_action_event)
xr.subscribe_action_event("/user/hand/left/output/haptic")

# create the session and define the interaction profiles
xr.create_session()

# set up the cameras and viewports and prepare rendering using the internal callback
xr.set_meters_per_unit(meters_per_unit)
xr.setup_stereo_view()
xr.set_frame_transformations(flip=0)
xr.set_stereo_rectification(y=0.05)

# execute the action and rendering loop on each simulation step
def on_simulation_step(step):
    if xr.poll_events() and xr.is_session_running():
        xr.poll_actions()
        xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_LOCAL)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)
```
snippet.mp4
GUI launcher:

The extension also provides a graphical user interface that helps to launch a partially configurable OpenXR application from a window. This interface is located in the Add-ons > OpenXR UI menu.

The first four options (Graphics API, Form factor, Blend mode, View configuration type) cannot be modified once the OpenXR application is running. They are used to create and configure the OpenXR instance, system, and session.

The other options (under the central separator) can be modified while the application is running. They allow, for example, modifying the pose of the reference system or applying transformations to the images to be rendered.
Extension API:

- Acquire the OpenXR interface

  `_openxr.acquire_openxr_interface() -> semu::xr::openxr::OpenXR`

- Release the OpenXR interface

  `_openxr.release_openxr_interface(xr: semu::xr::openxr::OpenXR) -> None`
The following functions are provided on the OpenXR interface:
- Init the OpenXR application by loading the necessary libraries

  `init(graphics: str = "OpenGL", use_ctypes: bool = False) -> bool`

  Parameters:
  - graphics: `str`
    OpenXR graphics API supported by the runtime (OpenGL, OpenGLES, Vulkan, D3D11, D3D12). Note: at the moment only OpenGL is available
  - use_ctypes: `bool`, optional
    If true, use ctypes as the C/C++ interface instead of pybind11 (default)

  Returns:
  - `bool`
    `True` if the initialization was successful, otherwise `False`
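  A minimal sketch of the setup sequence, checking each boolean return value (the application name is a hypothetical value; `create_instance` and `get_system` are documented below):

  ```python
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()

  # each setup call returns a bool; abort early if any step fails
  if not xr.init(graphics="OpenGL", use_ctypes=False):
      raise RuntimeError("OpenXR initialization failed")
  if not xr.create_instance(application_name="My XR App"):
      raise RuntimeError("OpenXR instance creation failed")
  if not xr.get_system():
      raise RuntimeError("OpenXR system query failed")
  ```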
- Get the OpenXR session's running status

  `is_session_running() -> bool`

  Returns:
  - `bool`
    `True` if the OpenXR session is running, `False` otherwise
- Create an OpenXR instance to allow communication with an OpenXR runtime

  `create_instance(application_name: str = "Omniverse (XR)", engine_name: str = "", api_layers: list = [], extensions: list = []) -> bool`

  Parameters:
  - application_name: `str`, optional
    Name of the OpenXR application (default: Omniverse (XR))
  - engine_name: `str`, optional
    Name of the engine (if any) used to create the application (empty by default)
  - api_layers: `list` of `str`, optional
    API layers to be inserted between the OpenXR application and the runtime implementation
  - extensions: `list` of `str`, optional
    Extensions to be loaded. Note: the graphics API selected during initialization (init) is automatically included in the extensions to be loaded. At the moment only the graphics extensions are configured

  Returns:
  - `bool`
    `True` if the instance has been created successfully, otherwise `False`
- Obtain the system represented by a collection of related devices at runtime

  `get_system(form_factor: int = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY, blend_mode: int = XR_ENVIRONMENT_BLEND_MODE_OPAQUE, view_configuration_type: int = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO) -> bool`

  Parameters:
  - form_factor: {`XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY`, `XR_FORM_FACTOR_HANDHELD_DISPLAY`}, optional
    Desired form factor from the `XrFormFactor` enum (default: `XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY`)
  - blend_mode: {`XR_ENVIRONMENT_BLEND_MODE_OPAQUE`, `XR_ENVIRONMENT_BLEND_MODE_ADDITIVE`, `XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND`}, optional
    Desired environment blend mode from the `XrEnvironmentBlendMode` enum (default: `XR_ENVIRONMENT_BLEND_MODE_OPAQUE`)
  - view_configuration_type: {`XR_VIEW_CONFIGURATION_TYPE_PRIMARY_MONO`, `XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO`}, optional
    Primary view configuration type from the `XrViewConfigurationType` enum (default: `XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO`)

  Returns:
  - `bool`
    `True` if the system has been obtained successfully, otherwise `False`
- Create an OpenXR session that represents an application's intention to display XR content

  `create_session() -> bool`

  Returns:
  - `bool`
    `True` if the session has been created successfully, otherwise `False`
- Event polling and processing

  `poll_events() -> bool`

  Returns:
  - `bool`
    `False` if the running session needs to end (due to the user closing or switching the application, etc.), otherwise `True`
- Action polling

  `poll_actions() -> bool`

  Returns:
  - `bool`
    `True` if there is no error during polling, otherwise `False`
- Present rendered images to the user's views according to the selected reference space

  `render_views(reference_space: int = XR_REFERENCE_SPACE_TYPE_LOCAL) -> bool`

  Parameters:
  - reference_space: {`XR_REFERENCE_SPACE_TYPE_VIEW`, `XR_REFERENCE_SPACE_TYPE_LOCAL`, `XR_REFERENCE_SPACE_TYPE_STAGE`}, optional
    Desired reference space type from the `XrReferenceSpaceType` enum used to render the images (default: `XR_REFERENCE_SPACE_TYPE_LOCAL`)

  Returns:
  - `bool`
    `True` if there is no error during rendering, otherwise `False`
- Create an action given a path and subscribe a callback function to the update event of this action

  `subscribe_action_event(path: str, callback: Union[Callable[[str, object], None], None] = None, action_type: Union[int, None] = None, reference_space: Union[int, None] = XR_REFERENCE_SPACE_TYPE_LOCAL) -> bool`
  If `action_type` is `None`, the action type will be automatically defined by parsing the last segment of the path according to the following policy:

  | Action type (`XrActionType`) | Last segment of the path |
  |---|---|
  | `XR_ACTION_TYPE_BOOLEAN_INPUT` | /click, /touch |
  | `XR_ACTION_TYPE_FLOAT_INPUT` | /value, /force |
  | `XR_ACTION_TYPE_VECTOR2F_INPUT` | /x, /y |
  | `XR_ACTION_TYPE_POSE_INPUT` | /pose |
  | `XR_ACTION_TYPE_VIBRATION_OUTPUT` | /haptic, /haptic_left, /haptic_right, /haptic_left_trigger, /haptic_right_trigger |

  The callback function (a callable object) should have only the following 2 parameters:
  - path: `str`
    The complete path (user path and subpath) of the action that invokes the callback
  - value: `bool`, `float`, `tuple(float, float)` or `tuple(pxr.Gf.Vec3d, pxr.Gf.Quatd)`
    The current state of the action according to its type
  | Action type (`XrActionType`) | python type |
  |---|---|
  | `XR_ACTION_TYPE_BOOLEAN_INPUT` | `bool` |
  | `XR_ACTION_TYPE_FLOAT_INPUT` | `float` |
  | `XR_ACTION_TYPE_VECTOR2F_INPUT` | (x, y) `tuple(float, float)` |
  | `XR_ACTION_TYPE_POSE_INPUT` | (position (in stage unit), rotation as quaternion) `tuple(pxr.Gf.Vec3d, pxr.Gf.Quatd)` |

  `XR_ACTION_TYPE_VIBRATION_OUTPUT` actions will not invoke their callback function; in this case the callback must be `None`. `XR_ACTION_TYPE_POSE_INPUT` also specifies, through the definition of the `reference_space` parameter, the reference space used to retrieve the pose.

  The collection of available paths corresponds to the following interaction profiles:
- Khronos Simple Controller
- Google Daydream Controller
- HTC Vive Controller
- HTC Vive Pro
- Microsoft Mixed Reality Motion Controller
- Microsoft Xbox Controller
- Oculus Go Controller
- Oculus Touch Controller
- Valve Index Controller
  Parameters:
  - path: `str`
    Complete path (user path and subpath) referring to the action
  - callback: callable object (2 parameters) or `None` for `XR_ACTION_TYPE_VIBRATION_OUTPUT`
    Callback invoked when the state of the action changes
  - action_type: {`XR_ACTION_TYPE_BOOLEAN_INPUT`, `XR_ACTION_TYPE_FLOAT_INPUT`, `XR_ACTION_TYPE_VECTOR2F_INPUT`, `XR_ACTION_TYPE_POSE_INPUT`, `XR_ACTION_TYPE_VIBRATION_OUTPUT`} or `None`, optional
    Action type from the `XrActionType` enum (default: `None`)
  - reference_space: {`XR_REFERENCE_SPACE_TYPE_VIEW`, `XR_REFERENCE_SPACE_TYPE_LOCAL`, `XR_REFERENCE_SPACE_TYPE_STAGE`}, optional
    Desired reference space type from the `XrReferenceSpaceType` enum used to retrieve the pose (default: `XR_REFERENCE_SPACE_TYPE_LOCAL`)
  Returns:
  - `bool`
    `True` if there is no error during the action creation, otherwise `False`
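  A sketch of the inference policy above: the following subscribes a boolean and a float action whose types are derived from the last path segment. The paths are taken from the Valve Index controller profile; the rest of the application setup is assumed to follow the sample code:

  ```python
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... init(), create_instance() and get_system() are assumed to have succeeded

  def on_trigger_click(path, value):
      # /click resolves to XR_ACTION_TYPE_BOOLEAN_INPUT, so value is a bool
      if value:
          print(f"{path} pressed")

  def on_squeeze_force(path, value):
      # /force resolves to XR_ACTION_TYPE_FLOAT_INPUT, so value is a float in [0, 1]
      print(f"grip force: {value:.2f}")

  xr.subscribe_action_event("/user/hand/right/input/trigger/click", callback=on_trigger_click)
  xr.subscribe_action_event("/user/hand/left/input/squeeze/force", callback=on_squeeze_force)

  # as in the sample code, subscriptions are made before the session is created
  xr.create_session()
  ```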
- Apply a haptic feedback to a device defined by a path (user path and subpath)

  `apply_haptic_feedback(path: str, haptic_feedback: dict) -> bool`

  Parameters:
  - path: `str`
    Complete path (user path and subpath) referring to the action
  - haptic_feedback: `dict`
    A python dictionary containing the field names and values of an `XrHapticBaseHeader`-based structure. Note: at the moment the only supported haptic type is the unextended OpenXR `XrHapticVibration`

  Returns:
  - `bool`
    `True` if there is no error during the application of the haptic feedback, otherwise `False`
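  A sketch of a half-second vibration. Only the `duration` key appears in the sample code above; the `frequency` and `amplitude` keys are an assumption based on the fields of the OpenXR `XrHapticVibration` structure:

  ```python
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... application setup and a subscribed haptic action are assumed (see the sample code)

  xr.apply_haptic_feedback("/user/hand/left/output/haptic",
                           {"duration": 500000000,  # in nanoseconds (0.5 s)
                            "frequency": _openxr.XR_FREQUENCY_UNSPECIFIED,  # let the runtime choose
                            "amplitude": 0.5})      # vibration strength in [0.0, 1.0] (assumed range)
  ```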
- Stop a haptic feedback applied to a device defined by a path (user path and subpath)

  `stop_haptic_feedback(path: str) -> bool`

  Parameters:
  - path: `str`
    Complete path (user path and subpath) referring to the action

  Returns:
  - `bool`
    `True` if there is no error while stopping the haptic feedback, otherwise `False`
- Set up the Omniverse viewport and camera for monoscopic rendering

  `setup_mono_view(camera: Union[str, pxr.Sdf.Path, pxr.Usd.Prim] = "/OpenXR/Cameras/camera", camera_properties: dict = {"focalLength": 10}) -> None`

  This method obtains the viewport window for the given camera. If the viewport window does not exist, a new one is created and the camera is set as active. If the given camera does not exist, a new camera is created with the same path and set to the recommended resolution of the display device

  Parameters:
  - camera: `str`, `pxr.Sdf.Path` or `pxr.Usd.Prim`, optional
    Omniverse camera prim or path (default: /OpenXR/Cameras/camera)
  - camera_properties: `dict`, optional
    Dictionary containing the camera properties supported by the Omniverse kit to be set (default: `{"focalLength": 10}`)
- Set up the Omniverse viewports and cameras for stereoscopic rendering

  `setup_stereo_view(left_camera: Union[str, pxr.Sdf.Path, pxr.Usd.Prim] = "/OpenXR/Cameras/left_camera", right_camera: Union[str, pxr.Sdf.Path, pxr.Usd.Prim, None] = "/OpenXR/Cameras/right_camera", camera_properties: dict = {"focalLength": 10}) -> None`

  This method obtains the viewport window for each camera. If a viewport window does not exist, a new one is created and the camera is set as active. If the given cameras do not exist, new cameras are created with the same paths and set to the recommended resolution of the display device

  Parameters:
  - left_camera: `str`, `pxr.Sdf.Path` or `pxr.Usd.Prim`, optional
    Omniverse left camera prim or path (default: /OpenXR/Cameras/left_camera)
  - right_camera: `str`, `pxr.Sdf.Path` or `pxr.Usd.Prim`, optional
    Omniverse right camera prim or path (default: /OpenXR/Cameras/right_camera)
  - camera_properties: `dict`, optional
    Dictionary containing the camera properties supported by the Omniverse kit to be set (default: `{"focalLength": 10}`)
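  A sketch using custom camera paths and properties. The paths are hypothetical, and whether `clippingRange` is accepted depends on the camera properties supported by the Omniverse kit, so treat it as an assumption:

  ```python
  import omni
  from pxr import UsdGeom
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... init(), create_instance(), get_system() and create_session() are assumed

  stage = omni.usd.get_context().get_stage()
  meters_per_unit = UsdGeom.GetStageMetersPerUnit(stage)

  xr.set_meters_per_unit(meters_per_unit)
  xr.setup_stereo_view(left_camera="/World/XR/left_camera",
                       right_camera="/World/XR/right_camera",
                       camera_properties={"focalLength": 10,
                                          "clippingRange": (0.01, 1000)})
  ```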
- Get the recommended resolution of the display device

  `get_recommended_resolutions() -> tuple`

  Returns:
  - `tuple`
    Tuple containing the recommended resolutions (width, height) of each device view. If the tuple length is 2, index 0 represents the left eye and index 1 represents the right eye
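  A sketch that reads the per-view resolutions, assuming a stereo device (so the tuple has two (width, height) entries):

  ```python
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... init(), create_instance() and get_system() are assumed to have succeeded

  resolutions = xr.get_recommended_resolutions()
  if len(resolutions) == 2:
      (left_width, left_height), (right_width, right_height) = resolutions
      print(f"left eye: {left_width}x{left_height}, right eye: {right_width}x{right_height}")
  ```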
- Set the pose of the origin of the reference system

  `set_reference_system_pose(position: Union[pxr.Gf.Vec3d, None] = None, rotation: Union[pxr.Gf.Vec3d, None] = None) -> None`

  Parameters:
  - position: `pxr.Gf.Vec3d` or `None`, optional
    Cartesian position (in stage unit) (default: `None`)
  - rotation: `pxr.Gf.Vec3d` or `None`, optional
    Rotation (in degrees) on each axis (default: `None`)
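  A sketch that raises the origin of the reference system by 1 meter (expressed in stage units) and rotates it 90 degrees around the Z-axis; the axis choice and stage unit are illustrative:

  ```python
  from pxr import Gf
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... application setup is assumed (see the sample code)
  meters_per_unit = 0.01  # e.g., a stage expressed in centimeters

  xr.set_reference_system_pose(position=Gf.Vec3d(0, 0, 1.0 / meters_per_unit),
                               rotation=Gf.Vec3d(0, 0, 90))
  ```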
- Set the angles (in radians) of the rotation axes for stereoscopic view rectification

  `set_stereo_rectification(x: float = 0, y: float = 0, z: float = 0) -> None`

  Parameters:
  - x: `float`, optional
    Angle (in radians) of the X-axis (default: 0)
  - y: `float`, optional
    Angle (in radians) of the Y-axis (default: 0)
  - z: `float`, optional
    Angle (in radians) of the Z-axis (default: 0)
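  Since the angles are expected in radians, converting from degrees keeps the intent readable; a minimal sketch:

  ```python
  import math
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... application setup is assumed (see the sample code)

  # rotate the rectification ~2.86 degrees around the Y-axis
  # (roughly the y=0.05 radians used in the sample code)
  xr.set_stereo_rectification(y=math.radians(2.86))
  ```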
- Specify the meters per unit to be applied to transformations (default: 1.0)

  `set_meters_per_unit(meters_per_unit: float) -> None`

  Parameters:
  - meters_per_unit: `float`
    Meters per unit. E.g.: 1 meter is 1.0, 1 centimeter is 0.01
- Specify the transformations to be applied to the rendered images

  `set_frame_transformations(fit: bool = False, flip: Union[int, tuple, None] = None) -> None`

  Parameters:
  - fit: `bool`, optional
    Adjust each rendered image to the recommended resolution of the display device by cropping and scaling the image from its center (default: `False`). The OpenCV `resize` method with `INTER_LINEAR` interpolation will be used to scale the image to the recommended resolution
  - flip: `int`, `tuple` or `None`, optional
    Flip each image around the vertical (0), horizontal (1), or both axes (0, 1) (default: `None`)
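  A sketch combining both transformations, fitting the images to the recommended resolution and flipping them:

  ```python
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... viewports configured via setup_mono_view() or setup_stereo_view() are assumed

  # crop/scale each image to the recommended resolution and flip it around the vertical axis
  xr.set_frame_transformations(fit=True, flip=0)
  # to flip around both axes instead, pass a tuple
  xr.set_frame_transformations(fit=True, flip=(0, 1))
  ```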
- Teleport the prim according to the given transformation (position and rotation)

  `teleport_prim(prim: pxr.Usd.Prim, position: pxr.Gf.Vec3d, rotation: pxr.Gf.Quatd, reference_position: Union[pxr.Gf.Vec3d, None] = None, reference_rotation: Union[pxr.Gf.Vec3d, None] = None) -> None`

  Parameters:
  - prim: `pxr.Usd.Prim`
    Target prim
  - position: `pxr.Gf.Vec3d`
    Cartesian position (in stage unit) used to transform the prim
  - rotation: `pxr.Gf.Quatd`
    Rotation (as quaternion) used to transform the prim
  - reference_position: `pxr.Gf.Vec3d` or `None`, optional
    Cartesian position (in stage unit) used as the reference system (default: `None`)
  - reference_rotation: `pxr.Gf.Vec3d` or `None`, optional
    Rotation (in degrees) on each axis used as the reference system (default: `None`)
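  A sketch that places a prim half a meter above the origin of a reference system rotated 180 degrees around the Y-axis; the prim path and values are hypothetical:

  ```python
  import omni
  from pxr import Gf, UsdGeom
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... application setup is assumed (see the sample code)

  stage = omni.usd.get_context().get_stage()
  meters_per_unit = UsdGeom.GetStageMetersPerUnit(stage)
  prim = stage.GetPrimAtPath("/sphere")

  xr.teleport_prim(prim,
                   Gf.Vec3d(0, 0, 0.5 / meters_per_unit),  # position in stage units
                   Gf.Quatd(1, 0, 0, 0),                   # identity rotation (w, x, y, z)
                   reference_position=Gf.Vec3d(0, 0, 0),
                   reference_rotation=Gf.Vec3d(0, 180, 0))
  ```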
- Subscribe a callback function to the render event

  `subscribe_render_event(callback=None) -> None`

  The callback function (a callable object) should have only the following 3 parameters:
  - num_views: `int`
    The number of views to render: mono (1), stereo (2)
  - views: `tuple` of `XrView` structures
    An `XrView` structure contains the view pose and projection state necessary to render an image. The length of the tuple corresponds to the number of views (if the tuple length is 2, index 0 represents the left eye and index 1 represents the right eye)
  - configuration_views: `tuple` of `XrViewConfigurationView` structures
    An `XrViewConfigurationView` structure specifies properties related to the rendering of a view (e.g., the optimal width and height to be used when rendering the view). The length of the tuple corresponds to the number of views (if the tuple length is 2, index 0 represents the left eye and index 1 represents the right eye)

  The callback function must call the `set_frames` function to pass to the selected graphics API the image or images to be rendered

  If the callback is `None`, an internal callback will be used to render the views. This internal callback updates the pose of the cameras according to the specified reference system, gets the images from the previously configured viewports and invokes the `set_frames` function to render the views

  Parameters:
  - callback: callable object (3 parameters) or `None`, optional
    Callback invoked on each render event (default: `None`)
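  A sketch of a custom render callback that submits solid-color test frames instead of viewport images. The `recommendedImageRectWidth`/`recommendedImageRectHeight` field names are an assumption based on the OpenXR `XrViewConfigurationView` structure; the binding may expose them differently:

  ```python
  import numpy as np
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... application setup is assumed (see the sample code)

  def on_render_event(num_views, views, configuration_views):
      frames = []
      for i in range(num_views):
          # use the optimal resolution reported for each view (assumed field names)
          width = configuration_views[i].recommendedImageRectWidth
          height = configuration_views[i].recommendedImageRectHeight
          # a solid blue RGB test frame at the optimal resolution of the view
          frames.append(np.full((height, width, 3), (0, 0, 255), dtype=np.uint8))
      # the callback is responsible for calling set_frames with the images to render
      if num_views == 2:
          xr.set_frames(configuration_views, frames[0], frames[1])
      else:
          xr.set_frames(configuration_views, frames[0])

  xr.subscribe_render_event(callback=on_render_event)
  ```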
- Pass to the selected graphics API the images to be rendered in the views

  `set_frames(configuration_views: list, left: numpy.ndarray, right: numpy.ndarray = None) -> bool`

  In the case of stereoscopic devices, the parameters `left` and `right` represent the left eye and right eye respectively. To pass an image to the graphics API of monoscopic devices, only the parameter `left` should be used (the parameter `right` must be `None`)

  This function will apply to each image the transformations defined by the `set_frame_transformations` function, if they were specified

  Parameters:
  - configuration_views: `tuple` of `XrViewConfigurationView` structures
    An `XrViewConfigurationView` structure specifies properties related to the rendering of a view (e.g., the optimal width and height to be used when rendering the view)
  - left: `numpy.ndarray`
    RGB or RGBA image (`numpy.uint8`)
  - right: `numpy.ndarray` or `None`
    RGB or RGBA image (`numpy.uint8`)

  Returns:
  - `bool`
    `True` if there is no error while passing the images to the selected graphics API, otherwise `False`
Available enumerations:

- Form factors supported by OpenXR runtimes (`XrFormFactor`)
  - `XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY` = 1
  - `XR_FORM_FACTOR_HANDHELD_DISPLAY` = 2
- Environment blend mode (`XrEnvironmentBlendMode`)
  - `XR_ENVIRONMENT_BLEND_MODE_OPAQUE` = 1
  - `XR_ENVIRONMENT_BLEND_MODE_ADDITIVE` = 2
  - `XR_ENVIRONMENT_BLEND_MODE_ALPHA_BLEND` = 3
- Primary view configuration type (`XrViewConfigurationType`)
  - `XR_VIEW_CONFIGURATION_TYPE_PRIMARY_MONO` = 1
  - `XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO` = 2
- Reference spaces (`XrReferenceSpaceType`)
  - `XR_REFERENCE_SPACE_TYPE_VIEW` = 1
  - `XR_REFERENCE_SPACE_TYPE_LOCAL` = 2
  - `XR_REFERENCE_SPACE_TYPE_STAGE` = 3
- Action type (`XrActionType`)
  - `XR_ACTION_TYPE_BOOLEAN_INPUT` = 1
  - `XR_ACTION_TYPE_FLOAT_INPUT` = 2
  - `XR_ACTION_TYPE_VECTOR2F_INPUT` = 3
  - `XR_ACTION_TYPE_POSE_INPUT` = 4
  - `XR_ACTION_TYPE_VIBRATION_OUTPUT` = 100
Available constants:

- Graphics API extension names
  - `XR_KHR_OPENGL_ENABLE_EXTENSION_NAME` = "XR_KHR_opengl_enable"
  - `XR_KHR_OPENGL_ES_ENABLE_EXTENSION_NAME` = "XR_KHR_opengl_es_enable"
  - `XR_KHR_VULKAN_ENABLE_EXTENSION_NAME` = "XR_KHR_vulkan_enable"
  - `XR_KHR_D3D11_ENABLE_EXTENSION_NAME` = "XR_KHR_D3D11_enable"
  - `XR_KHR_D3D12_ENABLE_EXTENSION_NAME` = "XR_KHR_D3D12_enable"
- Useful constants for applying haptic feedback
  - `XR_NO_DURATION` = 0
  - `XR_INFINITE_DURATION` = 0x7fffffffffffffff
  - `XR_MIN_HAPTIC_DURATION` = -1
  - `XR_FREQUENCY_UNSPECIFIED` = 0
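  A sketch showing how these constants fit together: start a vibration with no scheduled end, then stop it explicitly. Application setup and the subscribed haptic action are assumed, as in the sample code:

  ```python
  from semu.xr.openxr import _openxr

  xr = _openxr.acquire_openxr_interface()
  # ... init, instance, system, subscriptions and session are assumed (see the sample code)

  # start a vibration that only ends when explicitly stopped
  xr.apply_haptic_feedback("/user/hand/left/output/haptic",
                           {"duration": _openxr.XR_INFINITE_DURATION})
  # ... later, e.g. from another action callback
  xr.stop_haptic_feedback("/user/hand/left/output/haptic")
  ```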