Update docker container docs

hamishwillee authored and LorenzMeier committed Dec 30, 2017
1 parent c63f820 commit a9149b5
Changed file: en/test_and_ci/docker.md (142 additions, 35 deletions)

# PX4 Docker Containers

Docker containers are provided for the complete [PX4 development toolchain](../setup/dev_env.md#supported-targets), including NuttX- and Linux-based hardware, [Gazebo Simulation](../simulation/gazebo.md) and [ROS](../simulation/ros_interface.md).

This topic shows how to use the [available docker containers](#px4_containers) to access the build environment on a local Linux computer.

> **Note** Dockerfiles and README can be found on [Github here](https://github.com/PX4/containers/tree/master/docker/px4-dev). They are built automatically on [Docker Hub](https://hub.docker.com/u/px4io/).

## Prerequisites

> **Note** PX4 containers are currently only supported on Linux (if you don't have Linux you can run the container [inside a virtual machine](#virtual_machine)). Do not use `boot2docker` with the default Linux image because it contains no X-Server.

[Install Docker](https://docs.docker.com/installation/) for your Linux computer, preferably using one of the Docker-maintained package repositories to get the latest stable version. You can use either the *Enterprise Edition* or (free) *Community Edition*.

For local installation of non-production setups on *Ubuntu*, the quickest and easiest way to install Docker is to use the [convenience script](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-using-the-convenience-script) as shown below (alternative installation methods are found on the same page):

```sh
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```

The default installation requires that you invoke *Docker* as the root user (i.e. using `sudo`). If you would like to [use Docker as a non-root user](https://docs.docker.com/engine/installation/linux/linux-postinstall/#manage-docker-as-a-non-root-user), you can optionally add the user to the "docker" group and then log out/in:
```sh
# Create docker group (may not be required)
sudo groupadd docker
# Add your user to the docker group.
sudo usermod -aG docker $USER
# Log in/out again before using docker!
```


## Container Hierarchy {#px4_containers}

The available containers are listed below (from [Github](https://github.com/PX4/containers/blob/master/docker/px4-dev/README.md#container-hierarchy)):

Container | Description
---|---
px4-dev-base | Base setup common to all containers
&emsp;px4-dev-nuttx | NuttX toolchain
&emsp;&emsp;px4-dev-simulation | NuttX toolchain + simulation (jMAVSim, Gazebo)
&emsp;&emsp;&emsp;px4-dev-ros | NuttX toolchain, simulation + ROS (incl. MAVROS)
&emsp;px4-dev-raspi | Raspberry Pi toolchain
&emsp;px4-dev-snapdragon | Qualcomm Snapdragon Flight toolchain
&emsp;px4-dev-clang | Clang tools
&emsp;&emsp;px4-dev-nuttx-clang | Clang and NuttX tools


The most recent version can be accessed using the `latest` tag: `px4io/px4-dev-ros:latest` (available tags are listed for each container on *hub.docker.com*. For example, the *px4-dev-ros* tags can be found [here](https://hub.docker.com/r/px4io/px4-dev-ros/tags/)).

> **Tip** Typically you should use a recent container, but not necessarily the latest (as this changes too often).
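
For example, to fetch a specific dated tag of the *px4-dev-ros* container (the tag below matches the one used in the examples later in this topic; check *hub.docker.com* for the tags currently available):

```sh
# Skip gracefully if Docker is not installed
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
# Pull a dated tag rather than "latest"
sudo docker pull px4io/px4-dev-ros:2017-10-23
# List the locally available copies of this image
sudo docker images px4io/px4-dev-ros
```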

## Use the Docker Container

The following instructions show how to build PX4 source code on the host computer using a toolchain running in a docker container. The information assumes that you have already downloaded the PX4 source code to **src/Firmware**, as shown:

```sh
mkdir src
cd src
git clone https://github.com/PX4/Firmware.git
cd Firmware
```

### Helper Script (docker_run.sh)

The easiest way to use the containers is via the [docker_run.sh](https://github.com/PX4/Firmware/blob/master/Tools/docker_run.sh) helper script. This script takes a PX4 build command as an argument (e.g. `make tests`). It starts up docker with a recent version (hard coded) of the appropriate container and sensible environment settings.

For example, to build SITL you would call (from within the **/Firmware** directory):

```sh
sudo ./Tools/docker_run.sh 'make posix_sitl_default'
```
Or to start a bash session using the NuttX toolchain:
```sh
sudo ./Tools/docker_run.sh 'bash'
```

> **Tip** The script is easy to use because you don't need to know much about *Docker* or think about which container to use. However, it is not particularly robust! The manual approach discussed in the [section below](#manual_start) is more flexible and should be used if you have any problems with the script.

### Calling Docker Manually {#manual_start}

The syntax of a typical command is shown below. This runs a Docker container that has support for X forwarding (which makes the simulation GUI available from inside the container). It maps the directory `<host_src>` from your computer to `<container_src>` inside the container and forwards the UDP port needed to connect *QGroundControl*. With the `--privileged` option the container automatically has access to the devices on your host (e.g. a joystick and GPU). If you connect/disconnect a device you have to restart the container.

```sh
# enable access to xhost from the container
xhost +

# Run docker
docker run -it --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
-v <host_src>:<container_src>:rw \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-e DISPLAY=:0 \
-p 14556:14556/udp \
--name=<local_container_name> <container>:<tag> <build_command>
```
Where:
* `<host_src>`: The host computer directory to be mapped to `<container_src>` in the container. This should normally be the **Firmware** directory.
* `<container_src>`: The location of the shared (source) directory when inside the container.
* `<local_container_name>`: A name for the docker container being created. This can later be used if we need to reference the container again.
* `<container>:<tag>`: The container with version tag to start - e.g.: `px4io/px4-dev-ros:2017-10-23`.
* `<build_command>`: The command to invoke on the new container. E.g. `bash` is used to open a bash shell in the container.

The concrete example below shows how to open a bash shell and share the directory **~/src/Firmware** on the host computer.
```sh
# enable access to xhost from the container
xhost +

# Run docker and open bash shell
sudo docker run -it --privileged \
--env=LOCAL_USER_ID="$(id -u)" \
-v ~/src/Firmware:/src/firmware/:rw \
-v /tmp/.X11-unix:/tmp/.X11-unix:ro \
-e DISPLAY=:0 \
-p 14556:14556/udp \
--name=mycontainer px4io/px4-dev-ros:2017-10-23 bash
```

If everything went well you should be in a new bash shell. Verify that everything works by running, for example, SITL:

```sh
cd src/firmware  # this is <container_src>
make posix_sitl_default gazebo
```

### Re-enter the Container

The `docker run` command can only be used to create a new container. To get back into this container (which will retain your changes) simply do:

```sh
# start the container
sudo docker start container_name
# open a new bash shell in this container
sudo docker exec -it container_name bash
```

If you need multiple shells connected to the container, just open a new shell and execute that last command again.

### Clearing the Container

Sometimes you may need to clear a container altogether. You can do so using its name:
```sh
$ sudo docker rm mycontainer
```
If you can't remember the name, then you can list inactive container ids and then delete them, as shown below:
```sh
$ sudo docker ps -a -q
45eeb98f1dd9
$ sudo docker rm 45eeb98f1dd9
```
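
If you want to clean up every stopped container in one step, the two commands above can be combined. This sketch is a convenience, not part of the official workflow; it deletes any changes stored in those containers, so use it with care:

```sh
# Skip gracefully if Docker is not installed
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
# Collect the ids of all stopped containers, then remove them (if any)
ids=$(sudo docker ps -a -q)
if [ -n "$ids" ]; then
    sudo docker rm $ids
fi
```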

### QGroundControl

When running a simulation instance, e.g. SITL, inside the docker container and controlling it via *QGroundControl* from the host, the communication link has to be set up manually. The autoconnect feature of *QGroundControl* does not work here.

In *QGroundControl*, navigate to [Settings](https://docs.qgroundcontrol.com/en/SettingsView/SettingsView.html) and select *Comm Links*. Create a new link that uses the UDP protocol. The port depends on the [configuration](https://github.com/PX4/Firmware/tree/master/posix-configs/SITL) used, e.g. port 14557 for the SITL iris config. The IP address is that of your docker container, usually 172.17.0.1/16 when using the default network.
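
If you are unsure of the container's address you can query it directly. This assumes the container was started with the name `mycontainer`, as in the earlier example:

```sh
# Skip gracefully if Docker is not installed
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
# Print the IP address that docker assigned to the named container
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' mycontainer
```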

### Troubleshooting

#### Permission Errors

The container creates files as needed with a default user (typically "root"). This can lead to permission errors where the user on the host computer is not able to access files created by the container.

The example above uses the line `--env=LOCAL_USER_ID="$(id -u)"` to create a user in the container with the same UID as the user on the host. This ensures that all files created within the container will be accessible on the host.
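
`LOCAL_USER_ID` is simply your numeric user id on the host. If an earlier container run has already left root-owned files behind, one way to reclaim them from the host is a recursive `chown`; this is a general fix, not from the original docs, and the path matches the example above (adjust it to your own shared directory):

```sh
# The value passed as LOCAL_USER_ID is simply this:
id -u
# Reclaim root-owned files created by an earlier container run
if [ -d ~/src/Firmware ]; then
    sudo chown -R "$(id -u):$(id -g)" ~/src/Firmware
fi
```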


#### Graphics Driver Issues

Running Gazebo may fail with an error message similar to the following:

```sh
libGL error: failed to load driver: swrast
```

In that case the native graphics driver for your host system must be installed. Download the right driver and install it inside the container. For Nvidia drivers the following command should be used (otherwise the installer will see the loaded modules from the host and refuse to proceed):

```sh
./NVIDIA-DRIVER.run -a -N --ui=none --no-kernel-module
```

More information on this can be found [here](http://gernotklingler.com/blog/howto-get-hardware-accelerated-opengl-support-docker/).
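
To check which driver is actually in use inside the container, *glxinfo* can be helpful. This is a general diagnostic sketch, not from the original docs; it assumes the `mesa-utils` package is installed in the container and an X display is available:

```sh
# Skip gracefully if glxinfo is not installed
command -v glxinfo >/dev/null 2>&1 || { echo "glxinfo not installed"; exit 0; }
# Report the active OpenGL renderer (software rendering shows up as llvmpipe/swrast)
glxinfo | grep "OpenGL renderer"
```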


## Virtual Machine Support {#virtual_machine}

Any recent Linux distribution should work.

Use at least 4GB memory for the virtual machine.

If compilation fails with errors like this:

```sh
The bug is not reproducible, so it is likely a hardware or OS problem.
c++: internal compiler error: Killed (program cc1plus)
```
Try disabling parallel builds.
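
For example, with *make* you can limit the job count so only one compile job runs at a time. This is a general *make* option rather than a PX4-specific one; depending on how the inner build system is invoked you may need to limit its jobs as well:

```sh
# From within the Firmware directory: run at most one compile job at a time
if [ -f Makefile ]; then
    make posix_sitl_default -j1
fi
```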

Edit `/etc/defaults/docker` and add this line:

```sh
DOCKER_OPTS="${DOCKER_OPTS} -H unix:///var/run/docker.sock -H 0.0.0.0:2375"
```

