
Remote Rendering with PBRT v4


This page explains how to configure ISET3d (V4) for remote execution. The reason for remote execution is to provide access to faster hardware (e.g., Nvidia GPUs) on the remote machine. We refer to the user's machine as Local and the remote machine as Remote.

Overview

Basic rendering with ISET3d begins by reading a base PBRT scene file into a Matlab recipe. We edit the recipe and then write the new PBRT scene into ISET3d/local (using piWrite() or piWRS()). When running on the user's machine, called Local, a Docker container mounts the local data and executes PBRT on the local directory.

Since the release of PBRT V4, we frequently render remotely on machines with installed Nvidia GPUs. To render remotely, say on a machine called Remote, ISET3d's dockerWrapper class rsync's (copies) the files on Local to the user's matched account on Remote. We then invoke a Docker image on Remote that runs PBRT (the choice of image depends on the GPU architecture). Finally, we invoke rsync again to return the rendered file (an EXR) to Local. We control the remote rendering - including rsync and the Docker containers - through the parameters in an ISET3d class, @dockerWrapper.

The remote rendering method requires that several permissions be established.

  • The user must have an account on Remote.
  • The user needs key-validated ssh access from Local to Remote (see how-to below). That means you can ssh in without being asked for a password; if you have to type a password, key-based authentication isn't working, and Docker won't work either.
  • The user must be part of the docker group on Remote.
  • The @dockerWrapper method parameters must specify the user information and PBRT Docker images on Remote.
  • The user needs a Docker render context (e.g., 'remote-mux') on Local (see how-to below to create one).

Finally, Dave Cardinal has recently implemented a version of remote rendering that assumes all of the key rendering files (meshes, skymaps, spds, lenses) are already stored on Remote; only the specific scene files are generated locally. This speeds rendering for complex scenes because the auxiliary files do not need to be sync'd over the network. Many of our scenes contain a few thousand assets, so rendering with 'remote resources' is efficient.
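A minimal sketch of how to turn this on, using the 'docker' Matlab preference group described below; you can either save it as a default or pass it per render (both forms appear later on this page):

setpref('docker','remoteResources',true);       % Make remote resources the default
% Or, for a single render:
% scene = piWRS(thisR,'remote resources',true);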

Rendering remotely

At Vistalab, we render remotely on either 'mux' or 'orange'. They share the same resource files that they mount from our data store. In this section, we assume that you have accounts on one or both of those systems. The account is set up to have a directory tree that looks like this:

/home/<your_username_here>/iset/iset3d-v4/local if you are a researcher, or

/home/student/<your_username_here>/iset/iset3d-v4/local if you are a student in a class.

NOTE: If that folder tree was not created for you, you may need to ssh in to Remote and create it yourself with mkdir.
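For example, this single command from Local should create the tree (assuming your account is on 'mux'; students should include the student/ prefix in the path):

ssh <your_username_here>@mux.stanford.edu "mkdir -p iset/iset3d-v4/local"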

The parameters that specify how your renders work are stored in your Matlab prefs under 'docker'. We explain those next.

Rendering parameters

Individuals store default parameters for remote rendering in a Matlab preference variable named 'docker'. The saved parameters answer the questions about where and how to render. An example of the rendering parameters for one user is:

>> getpref('docker')

ans = 

  struct with fields:

       gpuRendering: 1
        localRender: 0
    remoteResources: 1
      renderContext: 'remote-mux'
      remoteMachine: 'mux.stanford.edu'
        remoteImage: 'digitalprodev/pbrt-v4-gpu-ampere-mux'
           whichGPU: 0
          verbosity: 1
     defaultContext: 'default'
         localImage: ''
         remoteUser: '<your_username_here>'
         remoteRoot: '/home/<your_username_here>' or '/home/student/<your_username_here>'
    localVolumePath: <only if needed>
          localRoot: ''
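If you are configuring a machine for the first time, you can create these preferences with Matlab's setpref. A sketch using the same example values as above (substitute your own account, and use the student/ prefix in remoteRoot if you are in a class):

% Store the remote rendering defaults in the 'docker' preference group
setpref('docker','gpuRendering',true);
setpref('docker','localRender',false);
setpref('docker','remoteResources',true);
setpref('docker','renderContext','remote-mux');
setpref('docker','remoteMachine','mux.stanford.edu');
setpref('docker','remoteImage','digitalprodev/pbrt-v4-gpu-ampere-mux');
setpref('docker','whichGPU',0);
setpref('docker','remoteUser','<your_username_here>');
setpref('docker','remoteRoot','/home/<your_username_here>');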

Then, you can create and render a sample recipe like this:

thisR = piRecipeCreate('Chess Set');           % One of our default recipes
scene = piWRS(thisR,'remote resources',true);  % Write, Render, and Show the scene

The parameters include the name of the machine, the user's account, and the user's home directory. Which GPU is used, which Docker container, and other parameters can also be set.

The rendering function piRender(), called by piWRS(), creates a @dockerWrapper that is initialized by reading these preferences. The dockerWrapper class builds the strings used to run the appropriate Docker image on Remote, and on a specific GPU.

At Vistalab we configure the preferences for 'mux' or 'orange' with the dockerWrapper.preset function. If you alter the dockerWrapper variables, you can also save them with the dockerWrapper.prefsave function. For example:

thisD = dockerWrapper;                 % Create an instance of a Docker wrapper
thisD.preset('remotemux');             % Configure the parameters to render on 'mux'
piWRS(thisR,'docker wrapper',thisD);   % Renders on 'mux'
thisD.prefsave;                        % Save the 'mux' configuration as Matlab prefs
piWRS(thisR);                          % Renders based on the saved prefs (now 'mux')

Communicating with the remote computer: Setting up the ssh key

On your local computer, you need to be able to issue the command

ssh <username>@<remote-machine-name-here>

and get logged in without being asked for a password. ISET3d uses ssh to connect to the server, both to rsync the data files and to run Docker with PBRT.
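You can test this from within Matlab as well; if key-based access is working, a command like this should return status 0 without prompting for a password (the machine name here is an example):

[status, result] = system('ssh <your_username_here>@mux.stanford.edu echo ok');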

Many people who connect using ssh already have a key pair on their local computer; in that case you can skip key generation. If not, generate a key as follows.

In this example we assume you do not yet have a key on your local computer, and we invoke ssh-keygen. Important: we do not want to overwrite a key that might already be there. Also important: if you are asked for a passphrase, just hit Enter and DO NOT enter one. A passphrase would defeat the purpose of password-free login.

wandell % ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (<user home directory>/.ssh/id_rsa): 

If you get this message, just say no to over-writing:
<user home directory>/.ssh/id_rsa already exists.
Overwrite (y/n)? n

Then we copy the public key to the remote machine we use:

wandell % ssh-copy-id <username>@<remote-machine-name-here>

Note: If you re-invoke ssh-keygen on your computer, you will need to re-do the ssh-copy-id for your remote machine.

Windows Note: There isn't a simple ssh-copy-id on Windows, so something like this might be needed:

cat ~/.ssh/id_rsa.pub | ssh user@123.45.67.89 "cat >> ~/.ssh/authorized_keys"

where user is your username (sometimes "root", or whatever you may have set up), and replace 123.45.67.89 with your machine / host / VPS's IP address.

If the directory .ssh is not yet created on the host machine, use this small variation:

cat ~/.ssh/id_rsa.pub | ssh user@123.45.67.89 "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

Requirements on the remote machine

  • You will need an account (with a password) on the remote machine. Ask Doug/Dave/Brian for the machines we use at Vistalab.
  • You will need to have a directory tree set up for ISET3d in your account on the remote machine
    • The path to that directory is stored in Matlab prefs
    • The directories need to be writable and owned by the user
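As a quick sanity check that your prefs point at the right tree, you can compose the remote directory from the stored values (this just builds the string; it does not test the remote side):

remoteLocal = [getpref('docker','remoteRoot') '/iset/iset3d-v4/local']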

Using Docker for rendering

Docker Context

A Docker context defines the connection parameters and authentication credentials for a Docker host. It is a way to manage multiple Docker environments from a single terminal session. By creating and switching between contexts, we can work with different Docker environments (at Vistalab, for example, on mux and orange).

The @dockerWrapper method creates a default rendering context for each remote machine. If you get a context failure when rendering, you can create the appropriate context with these commands:

thisD = dockerWrapper;
thisD.getRenderContext('mux');  % or orange if you have access.
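If you prefer to create the context by hand, the equivalent Docker CLI looks like this (the context name and host here are examples; they should match your renderContext and remoteMachine prefs):

docker context create remote-mux --docker "host=ssh://<your_username_here>@mux.stanford.edu"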

Resetting the dockerWrapper

Sometimes Docker communications get tangled and you need to reset. You can use this class-level command to clean up the current remote Docker context.

dockerWrapper.reset();

Docker on Windows note

If you don't have wsl installed already, Docker installs a "mini" version of Linux that doesn't include the commands needed by ISET3d. In that case you'll need to manually install a version of Ubuntu (trivial, from the Windows Store) and set the default wsl distro to Ubuntu.
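Something like the following, run from an administrator terminal, is usually enough; these are standard wsl commands, not ISET3d-specific:

wsl --install -d Ubuntu
wsl --set-default Ubuntu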

Useful commands

To get a list of running containers:

docker [--context <context-name>] ps

To see what's up on your Nvidia GPUs:

nvidia-smi

To keep track of things on a remote server:

ssh <remote-user>@<remote-server> nvidia-smi -l

Once you've created a dockerWrapper object, you can use the gpuStatus() method to check the status of the remote GPU:

% For Example:
thisD = dockerWrapper;
[status, result] = thisD.gpuStatus();

Docker images for pbrt with GPU support:

We have built only a few Docker images with PBRT GPU support. These are:

  • camerasimulation/pbrt-v4-gpu-t4 (for Zhenyi's cloud machine)
  • digitalprodev/pbrt-v4-gpu-ampere-bg (for the UPenn machine)
  • digitalprodev/pbrt-v4-gpu-mux (for the Vistalab machine)

Usage Notes:

While more than one client can render on a GPU at the same time, Nvidia GPUs do not page out memory for containers that are not currently running. So the total size (in GPU RAM) of the scenes being rendered cannot exceed the GPU's RAM. The nvidia-smi command shows what is currently loaded and how much GPU RAM it consumes.
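If you only want the memory numbers, nvidia-smi's query form is convenient (standard nvidia-smi options, not ISET3d-specific):

nvidia-smi --query-gpu=memory.used,memory.total --format=csv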
