Deploy and Secure a Remote Docker Engine
This guide assumes that the engine is going to be installed on a Debian Jessie machine, which uses systemd to start the docker daemon.
Docker Machine is GREAT for starting up single, secure remote Docker hosts on AWS, Azure, Digital Ocean and many others, if you're going to be the only person using them, from a single computer. If this is your case - and you don't want to learn something cool - don't hurt yourself. Be gone and use Docker Machine instead.
For those of you still here: Docker Machine creates those instances on the provider (AWS/Azure, etc.), running the engine setup via SSH using an auto-generated RSA private/public key pair (that's how/why they provide the docker-machine ssh [machine-name] command), and configures the Docker Engine with auto-generated TLS certificates. All of these are stored somewhere on your machine. These configs are troublesome to transfer/restore (see related issues). Plus, they are created on a one-by-one basis, which makes using the same keys/certs for machines in the same "group" rather impossible.
I've found it's much easier to spin up one of these machines "by hand", create a VM image, and then replicate it N times as needed.
The base installation is described completely in the Docker documentation. I prefer to install it on a Debian distribution, which is what this guide assumes.
The goals are:
- Authenticate the server with TLS, so we can be certain that we're communicating with the correct server, and not with a malicious impersonator.
- Authenticate the client with TLS, so only clients using the client certificates can use the engine.
Follow the instructions on how to generate the server & client certificates on the Docker website. We'll generate the certificates exactly as described there, but deviate a little bit on how to configure them.
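As a quick reference, the server-side portion of that process looks roughly like the sketch below, abridged from Docker's documentation. $HOST stands for your server's DNS name, the IP addresses in subjectAltName are examples you must replace with your own, and note that Docker's docs name the signed certificate server-cert.pem, while this guide calls it server.pem:
# Create a CA key and a self-signed CA certificate (you'll be prompted for a passphrase)
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# Create the server key and a signing request for your server's DNS name
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
# Sign the server certificate, allowing connections by DNS name and by IP (replace these!)
echo subjectAltName = DNS:$HOST,IP:10.10.10.20,IP:127.0.0.1 >> extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -extfile extfile.cnf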
Next, we'll need to place the ca.pem, server.pem and server-key.pem files into the remote server's /etc/docker folder. If you generated the certificates on your laptop, you might need to copy those to the remote server via scp.
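For example - assuming a user named admin with sudo rights on a server reachable at your-remote-server (both hypothetical names) - the copy could look like this:
# Copy the certificates over, then move them into place with root ownership
scp ca.pem server.pem server-key.pem admin@your-remote-server:~
ssh admin@your-remote-server "sudo mkdir -p /etc/docker \
  && sudo mv ~/ca.pem ~/server.pem ~/server-key.pem /etc/docker/ \
  && sudo chown root:root /etc/docker/*.pem \
  && sudo chmod 0400 /etc/docker/server-key.pem"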
We're assuming that the server is reachable on TCP port 2376. You'll need to configure your server's firewall / network security to allow communication over this port.
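How you do this depends on your provider, but if your Debian box happens to run ufw locally, for instance, allowing the port is a one-liner:
# Allow incoming Docker TLS connections (consider restricting the source address)
sudo ufw allow 2376/tcp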
Since Docker version 17.03.0, we can use a platform-independent configuration file. We'll need to make some minor workarounds to be able to configure the sockets (unix & tcp) in this file.
Originally, we'd create a new file at /etc/systemd/system/docker.service that would override settings in the /lib/systemd/system/docker.service file... but now we'll just edit /lib/systemd/system/docker.service directly, removing the -H fd:// fragment from the ExecStart line in the [Service] section:
# At /lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
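If you'd rather not touch the packaged unit file (your edits can be overwritten on package upgrades), the override approach mentioned above still works. A minimal sketch would be a drop-in at /etc/systemd/system/docker.service.d/override.conf that clears ExecStart and re-declares it without -H fd://:
# At /etc/systemd/system/docker.service.d/override.conf
[Service]
# An empty ExecStart= is required first, to clear the value inherited from the packaged unit
ExecStart=
ExecStart=/usr/bin/dockerd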
Create and/or edit the /etc/docker/daemon.json file, ensuring that:
- The Docker daemon can be accessed locally at the unix:///var/run/docker.sock file.
- The Docker daemon can also be accessed from outside at the tcp://0.0.0.0:2376 port binding.
- The Docker daemon has some custom labels assigned to it. (Not necessary, but useful)
- The Docker daemon has TLS enabled.
- The paths for the certificate authority, server certificate and server key are all configured.
- The Docker daemon verifies a client certificate for each incoming connection.
You can also see all the available options in the Docker documentation.
{
"hosts": [
"unix:///var/run/docker.sock",
"tcp://0.0.0.0:2376"
],
"labels": [
"is-our-remote-engine=true",
"provider=azure"
],
"tls": true,
"tlscacert": "/etc/docker/ca.pem",
"tlscert": "/etc/docker/server.pem",
"tlskey": "/etc/docker/server-key.pem",
"tlsverify": true
}
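A stray comma in this file will keep the daemon from starting, so it can be worth checking the JSON syntax before restarting - for example with Python, which is typically available on Debian:
# Prints the parsed JSON on success, or a syntax error with a line number on failure
python -m json.tool /etc/docker/daemon.json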
Finally, we'll need to reload the systemd configuration and restart the docker daemon service:
sudo systemctl daemon-reload && sudo systemctl restart docker
You might want to check the status of the service:
sudo systemctl status -l docker.service
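If the service failed to start - typically because of a typo in daemon.json, or a leftover -H fd:// in the unit file conflicting with the hosts entry - the daemon's logs usually say why:
# Show the most recent log entries for the docker unit
sudo journalctl -u docker.service --no-pager -n 50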
If everything is green, your Docker engine is securely available and ready to use from your laptop... we'll just need to point your docker client to the remote engine.
Pointing your docker client to the remote engine involves setting some environment variables:
- DOCKER_HOST: The remote engine URI.
- DOCKER_CERT_PATH: The path where the ca.pem, cert.pem and key.pem are located on your machine.
- DOCKER_TLS_VERIFY: Enables the TLS client verification - which is required by our remote Docker engine.
You might want to save this configuration in a file - possibly named env.sh
somewhere in your machine:
export DOCKER_HOST=tcp://[your-remote-server-address]:2376
export DOCKER_CERT_PATH=/Somewhere/on/your/machine
export DOCKER_TLS_VERIFY=1
Then, you can source these variables like this:
source env.sh
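Alternatively - if you only need the remote engine occasionally - the same settings can be passed as global flags on each docker invocation instead of environment variables:
# Equivalent to the exported variables above, for a single command
docker --host tcp://[your-remote-server-address]:2376 --tlsverify \
  --tlscacert /Somewhere/on/your/machine/ca.pem \
  --tlscert /Somewhere/on/your/machine/cert.pem \
  --tlskey /Somewhere/on/your/machine/key.pem \
  info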
Now you should be ready to issue commands via docker and docker-compose. Test your connectivity to the remote engine:
docker info
If you followed this guide completely, you should be able to see the is-our-remote-engine=true label:
...
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Labels:
 is-our-remote-engine=true
 provider=azure
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
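To check just the labels without scrolling through the full output, reasonably recent docker clients also let you filter docker info with a Go template (a small convenience sketch; the Labels field comes from docker info's own output structure):
# Print only the daemon labels
docker info --format '{{.Labels}}'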