Docker task host for linux.
Each task is evaluated in a restricted Docker container. Docker provides several properties that make this work well: since images are copy-on-write, any number of task hosts can run from the same images, and their overall resource usage can be managed.
We manipulate the Docker hosts through the Docker remote API.

See the doc site for how to use the worker from an existing worker type; the docs here are for hacking on the worker itself.
- Node (same version as the rest of Taskcluster)
- Docker
```
# from the root of this repo; also see --help
node bin/worker.js <config>
```
The defaults contain all configuration options for the docker worker; in particular, these are important:

- `rootUrl` - the rootUrl of the Taskcluster instance to run against
- `taskcluster` - the credentials needed to authenticate all pull jobs from Taskcluster
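As a rough illustration of those two options, the sketch below writes a minimal configuration file and checks it. The file name, YAML layout, and credential values here are hypothetical placeholders -- consult the defaults file in the repo for the real schema:

```shell
# Hypothetical sketch only: the real option layout lives in the repo's
# defaults file. Placeholder credentials, not real ones.
cat > worker-config.yml <<'EOF'
rootUrl: https://community-tc.services.mozilla.com
taskcluster:
  clientId: example-client-id       # placeholder
  accessToken: example-access-token # placeholder
EOF
grep rootUrl worker-config.yml
```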
- src/main.js - primary entrypoint for worker
- src - source of internal worker apis
- src/task_listener.js - primary entrypoint of task execution
- src/task.js - handler for individual tasks
- src/features/ - individual features for worker
docker-worker runs in an Ubuntu environment with various packages and kernel modules installed.
Within the root of the repo is a Vagrantfile and vagrant.sh script that simplify creating a local environment that mimics the one used in production. This environment allows one to not only run the worker tests but also to run images used in Taskcluster in an environment similar to production without needing to configure special things on the host.
The v4l2loopback and snd-aloop kernel modules are installed to allow loopback audio/video devices to be available within tasks that require them. For information on how to configure these modules like production, consult the vagrant script used for creating a local environment.
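A quick way to check whether these modules are currently available is to read `/proc/modules` (a sketch that works on any Linux host; actually loading the modules requires the provisioning done by vagrant.sh):

```shell
# Check whether the loopback modules tasks rely on are currently loaded.
# Note the loaded module name for snd-aloop appears as snd_aloop.
for mod in v4l2loopback snd_aloop; do
  if grep -qw "$mod" /proc/modules; then
    echo "$mod: loaded"
  else
    echo "$mod: not loaded (provision via vagrant.sh, then reboot)"
  fi
done
```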
There are a few components that must be configured for the tests to work properly (e.g. Docker, kernel modules, and other packages). A Vagrant environment is available to make this easy. Alternatively, it is possible to run tests outside of Vagrant, but this requires a bit more effort.
- Install VirtualBox
- Install Vagrant
- Install vagrant-reload by running `vagrant plugin install vagrant-reload`
- Within the root of the repo, run `vagrant up`, then `vagrant ssh` to enter the virtual machine
If you can't use Vagrant (e.g. you are using Hyper-V and can't use VirtualBox), it is possible to configure a bare virtual machine in a very similar manner to what Vagrant would produce.
- Create a new virtual machine.
- Download an Ubuntu 14.04 server ISO.
- Boot the VM from the ISO.
- Click through the Ubuntu installer dialogs.
- For the primary username, use `vagrant`.
- All other settings can pretty much be the defaults. You'll just press ENTER a bunch of times during the install wizard. Although you'll probably want to install `OpenSSH server` on the *Software selection* screen so you can SSH into your VM.
- On first boot, run `sudo visudo` and modify the end of the `%sudo` line so it contains `NOPASSWD:ALL` instead of just `ALL`. This allows you to `sudo` without typing a password.
- `apt-get install git`
- `git clone https://github.com/taskcluster/taskcluster ~/taskcluster`
- `sudo ln -s /home/vagrant/taskcluster/workers/docker-worker /vagrant`
- `sudo ln -s /home/vagrant/taskcluster/workers/docker-worker /worker`
- `cd taskcluster/workers/docker-worker`
- `./vagrant.sh` -- this will provision the VM by installing a bunch of packages and dependencies.
- `sudo reboot` -- this is necessary to activate the updated kernel.
- `sudo depmod`
Many tests require the `TASKCLUSTER_ROOT_URL`, `TASKCLUSTER_ACCESS_TOKEN`, and `TASKCLUSTER_CLIENT_ID` environment variables. These variables define credentials used to connect to external services.

To obtain Taskcluster client credentials, run `eval $(cat scopes.txt | xargs taskcluster signin)`. This will open a web browser and you'll be prompted to log into Taskcluster. This command requires the `taskcluster` CLI, which can be downloaded for your OS/architecture (named `taskcluster-<OS>-<ARCH>`) from https://github.com/taskcluster/taskcluster/releases. Be sure to rename the download to `taskcluster` (linux/darwin) or `taskcluster.exe` (windows).
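The `eval $(...)` wrapper works because the signin command prints `export` statements on stdout, which `eval` then applies to the current shell. A minimal sketch of that mechanism, using a stand-in function rather than the real CLI:

```shell
# Stand-in for `taskcluster signin`: after you authenticate in the
# browser, the real CLI prints export lines like these (placeholder
# values shown here).
fake_signin() {
  echo "export TASKCLUSTER_CLIENT_ID='example-client-id'"
  echo "export TASKCLUSTER_ACCESS_TOKEN='example-access-token'"
}

# Without eval the export lines are merely printed; with eval they
# take effect in the current shell.
eval "$(fake_signin)"
echo "client id is now: $TASKCLUSTER_CLIENT_ID"
```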
If using Vagrant, setting these environment variables in the shell used to run `vagrant ssh` will cause the variables to be inherited inside the Vagrant VM. If not using Vagrant, you should add `export VAR=value` lines to `/home/vagrant/.bash_profile`.
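For example, with placeholder values, and using a temporary file here instead of the real `/home/vagrant/.bash_profile`:

```shell
# Append export lines (placeholder values) to a profile file, then
# source it the way a login shell would source .bash_profile.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
export TASKCLUSTER_ROOT_URL=https://community-tc.services.mozilla.com
export TASKCLUSTER_CLIENT_ID=example-client-id
export TASKCLUSTER_ACCESS_TOKEN=example-access-token
EOF
. "$profile"
echo "TASKCLUSTER_ROOT_URL=$TASKCLUSTER_ROOT_URL"
```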
From the virtual machine, you'll need to install some application-level dependencies:
- `cd /vagrant`
- `./build.sh` -- builds some Docker images
- `yarn install --frozen-lockfile` -- installs Node modules
Like most Node projects, `yarn test` will run the docker-worker tests. In the default case, this will end up skipping most tests. Most of the time, this is OK: if your change is covered by the tests that are not skipped, then it is fine to submit a PR without running the remainder of the tests.

Most tests are skipped because they require Docker. If you have Docker installed, set `DOCKER_TESTS=1` to run these tests: `DOCKER_TESTS=1 yarn test`. Note that the tests will be merciless with your Docker environment -- do not enable this if you have images or containers that you cannot afford to lose!
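The gating is an ordinary environment-variable check; a sketch of the pattern (not the actual test-harness code):

```shell
# Sketch of the DOCKER_TESTS gate: Docker-dependent tests run only
# when the variable is explicitly set to 1.
if [ "${DOCKER_TESTS:-0}" = "1" ]; then
  echo "running Docker-dependent tests"
else
  echo "skipping Docker-dependent tests (set DOCKER_TESTS=1 to enable)"
fi
```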
Other tests are disabled because they require Taskcluster credentials for the https://community-tc.services.mozilla.com/ deployment. These credentials can be acquired, if you have permission, by running `TASKCLUSTER_ROOT_URL=https://community-tc.services.mozilla.com taskcluster signin --scope assume:project:taskcluster:docker-worker-tester --name d-w`. This will set some environment variables that will be detected by the test suite.
Under most circumstances, one only wants to run a single test suite. For individual test files, run `./node_modules/mocha/bin/mocha --bail test/<file>`. To run individual tests within a test file, add `--grep <phrase>` to the above command, where `<phrase>` matches the name of the test you want.
***Note: Sometimes things don't go as planned and tests will hang until they time out. To get more insight into what went wrong, set `DEBUG=*` when running the tests to get more detailed output.***
- Time synchronization: if you're running Docker in a VM, your VM's clock may drift. This often results in stale warnings on the queue.