Deploy a Docker Swarm

Why Swarm over Rancher?

Rancher was our preferred cluster administration tool in 2015 and early 2016. Since mid-2016, Swarm ships by default with the Docker Engine (as of version 1.12). It provides scheduling, a routing mesh (by port), health checks, and more. It is far easier to set up and has minimal requirements compared to a more complex Rancher setup. Besides, we're shedding the weight of "multiple-vendor complexity" for this task.

However, if our client really needs to deploy to their own infrastructure (servers, networks, etc.) AND requires commercial support, we MUST recommend Docker Data Center, which includes the Universal Control Plane, an application that manages the cluster and adds several enterprise features. DISCLAIMER: We're the first Docker partner here in México.

1: Enabling a Docker Swarm Cluster

1.1: Deploy and Secure the Manager nodes

Although you can launch services and stacks from any of the swarm manager nodes, it is usually more convenient to connect your local Docker client - where you keep all of your Compose files and other configs - to one of the remote swarm managers.

For production environments, it is highly recommended to deploy at least 3 manager nodes: managers keep the cluster state in a Raft quorum, so a 3-manager swarm tolerates the loss of one manager without the cluster going down.

You will find the instructions on how to Deploy & Secure a Remote Docker Engine here: Deploy and Secure a Remote Docker Engine.

1.2: Create the Swarm:

Fairly simple: Run this on the first manager:

docker swarm init
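If the manager has more than one network interface, the other nodes may not be able to reach the address the swarm advertises by default. As a sketch, assuming the manager's private address is 10.13.1.4 (the address used in the join commands below), you can make it explicit:

docker swarm init --advertise-addr 10.13.1.4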

If you have deployed more managers, be sure to get a manager join token from the first manager:

docker swarm join-token manager

Copy / Take note of the command. Then, on each of the remaining managers, run the swarm join command:

docker swarm join \
    --token SWMTKN-1-1a985nwXXXXyXXXXmestin8okfny6hggopxc5e1vp6znej0ipo-7b3got0htyk6x0ot0h83u5eqp \
    10.13.1.4:2377

1.3: Deploy the Worker Nodes

Next, we need to deploy the remote engines that will actually run the user processes/containers. In contrast, it's recommended to keep the manager nodes from running user processes/containers, as a low-memory condition on a manager could result in cluster data corruption. You can enforce this by draining the managers, as shown below.
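A minimal sketch of how to drain a manager, assuming a manager node named demo-manager-01 (the name used in the node listing in section 1.5); run it with your Docker client pointed at a manager:

docker node update --availability drain demo-manager-01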

You may also follow the guide on how to Deploy and Secure a Remote Docker Engine, but ignore the "secure the engine" part, as these engines should not be accessible from the outside.

1.4: Add the Worker Nodes to the Swarm

The final step is to join each worker node to the swarm. You can get the worker join token from any of the swarm managers:

# NOTE: We're getting a 'worker' token instead of 'manager' token:
docker swarm join-token worker

Now, on each of the worker nodes, run the command you obtained on the manager:

docker swarm join \
    --token SWMTKN-1-1a98XXXXXXX4umestin8okfny6hggopxc5eXXXXXXXX0ipo-5kaf0mvxjm4uuh100sqevzsm0 \
    10.13.1.4:2377
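As an optional sanity check, you can confirm on the worker itself that it joined; docker info reports "Swarm: active" on a node that is part of a swarm:

docker info | grep -i swarm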

1.5: Check the swarm nodes:

You'll need to point your Docker client to any of the cluster managers (or the manager load balancer). See details at the end of Deploy and Secure a Remote Docker Engine on how to do this.
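As a sketch, assuming the manager exposes its secured API on port 2376 and your TLS client certificates live under ~/.docker (both depend on how you followed that guide), pointing the client looks like this:

export DOCKER_HOST=tcp://10.13.1.4:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker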

Once you're pointing to the cluster manager:

docker node ls

You should see a list similar to this:

ID                           HOSTNAME         STATUS  AVAILABILITY  MANAGER STATUS
n5py8atqdi15xps0qrbm9m5bw    demo-worker-01   Ready   Active        
r0pso3flixeez1o78riimiodg *  demo-manager-01  Ready   Active        Leader
xhjjqq7zhz32cy1q0olcopij4    demo-data-01     Ready   Active        
yssymyqcam5t6czmzx11nakl3    demo-public-01   Ready   Active        

Be sure you're seeing all the nodes you deployed.

2: Deploy your apps to the Swarm

Still pointing your Docker client to the Swarm Manager, run:

docker stack deploy --compose-file YOUR_DOCKER_COMPOSE_FILE.yml STACK_NAME
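For reference, here is a minimal sketch of a Compose file that works with docker stack deploy. The service name, image, port, and replica count are placeholders, not part of this guide:

# docker-compose.yml (sketch)
version: "3"
services:
  web:
    image: nginx:alpine      # placeholder image
    ports:
      - "80:80"              # published through the swarm routing mesh
    deploy:
      replicas: 2            # number of tasks the swarm schedules

After deploying, you can check the stack's services from the same manager with docker stack services STACK_NAME and docker stack ps STACK_NAME.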