Adding Cluster Nodes
At this point, we have a cluster with a single manager node – a cluster, albeit meagre in substance.
Before we augment the cluster with additional nodes, we need to attend to an AWS configuration detail. In the absence of our specifying an existing security group, Docker Machine will have created an AWS security group called docker-machine. It is created with ingress configured on ports 22/tcp (for SSH) and 2376/tcp (for remote Docker client and server communication). This is insufficient for a Swarm cluster, and we need to allow ingress on ports 2377/tcp (cluster management communication), 7946/tcp and 7946/udp (intra-node communication), and 4789/tcp and 4789/udp (overlay network traffic).
We can make this configuration change through the AWS console, or via the AWS command line interface. If you are operating on a local Docker host, you can use a Docker container to perform this action (parsing of the JSON returned by the AWS CLI is performed by the jq command line processor):
$ eval $(docker-machine env --unset)
$ alias aws='docker run -it --rm --env AWS_ACCESS_KEY_ID=<AWS Access Key ID> --env AWS_SECRET_ACCESS_KEY=<AWS Secret Access Key> --env AWS_DEFAULT_REGION=<AWS Default Region> nbrown/aws-cli "$@"'
$ AWS_SGID=$(aws ec2 describe-security-groups --filter "Name=group-name,Values=docker-machine" | jq -sr '.[].SecurityGroups[].GroupId')
$ aws ec2 authorize-security-group-ingress --group-id $AWS_SGID --protocol tcp --port 2377 --source-group $AWS_SGID
$ aws ec2 authorize-security-group-ingress --group-id $AWS_SGID --protocol tcp --port 7946 --source-group $AWS_SGID
$ aws ec2 authorize-security-group-ingress --group-id $AWS_SGID --protocol udp --port 7946 --source-group $AWS_SGID
$ aws ec2 authorize-security-group-ingress --group-id $AWS_SGID --protocol tcp --port 4789 --source-group $AWS_SGID
$ aws ec2 authorize-security-group-ingress --group-id $AWS_SGID --protocol udp --port 4789 --source-group $AWS_SGID
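If we want to confirm the new ingress rules have taken effect, we can list the security group's permissions. The following is a quick check, assuming the aws alias and the AWS_SGID variable defined above are still in scope:
$ aws ec2 describe-security-groups --group-ids $AWS_SGID | jq -sr '.[].SecurityGroups[].IpPermissions[] | "\(.IpProtocol) \(.FromPort)-\(.ToPort)"'
The output should now include entries for ports 2377, 7946 and 4789, alongside the original rules for 22 and 2376.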
Now that the security group is configured correctly, let's add some worker nodes by creating three new Docker hosts and joining them to the cluster. First, we can use Docker Machine to create three new Docker hosts on AWS:
$ for i in {2..4}; do
>     docker-machine create --driver amazonec2 --amazonec2-region eu-west-2 node-0$i
> done
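The amazonec2 driver accepts further options if the defaults don't suit; for example, an instance type or an existing security group can be specified at creation time. The flags below are real driver options, but the values and the node name are purely illustrative:
$ docker-machine create --driver amazonec2 \
>     --amazonec2-region eu-west-2 \
>     --amazonec2-instance-type t2.medium \
>     --amazonec2-security-group docker-machine \
>     node-05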
Once this has completed, we should have four Docker hosts running on AWS, which we can check using the following command:
$ docker-machine ls
NAME      ACTIVE   DRIVER      STATE     URL                        SWARM   DOCKER        ERRORS
node-01   -        amazonec2   Running   tcp://52.56.227.180:2376           v17.03.1-ce
node-02   -        amazonec2   Running   tcp://52.56.236.106:2376           v17.03.1-ce
node-03   -        amazonec2   Running   tcp://52.56.239.85:2376            v17.03.1-ce
node-04   -        amazonec2   Running   tcp://52.56.234.7:2376             v17.03.1-ce
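The manager's public IP address, which appears in the URL column above, can also be retrieved on its own with docker-machine ip. Note that this is the host's public address; within the AWS VPC, the nodes may well communicate over their private addresses instead:
$ docker-machine ip node-01
52.56.227.180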
When we created the cluster using the docker swarm init command, the output from the command provided instructions for joining a worker to the cluster. The joining command, which needs to be executed on the Docker host joining the cluster, is docker swarm join. It requires a token to be passed as an argument, as well as a TCP socket, which is where a cluster manager is serving the Swarm Mode API.
When the cluster is created, two tokens are created, one for managers and one for workers. If we don't have access to the original output from the docker swarm init command, we can query a manager node for the relevant token. To retrieve the worker token, along with details of how to join the cluster, execute the following commands:
$ eval $(docker-machine env node-01)
$ docker system info --format '{{.Name}}'
node-01
The first command sets some environment variables necessary for the local Docker client to find the remote Docker host, and to configure TLS-encrypted communication, whilst the second queries the remote Docker daemon for the host's name. All that is required to establish a Swarm cluster is built into the Docker daemon running on the remote Docker host. Unlike other orchestration tools, Docker's native orchestration capability requires no additional software components.
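With the client now pointed at node-01, the worker token and the matching join command can be retrieved with docker swarm join-token worker. The output below is a sketch of what to expect; the token and manager IP are placeholders, not real values:
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-<worker-token> \
    <manager-ip>:2377
Running the printed command on each of node-02 through node-04 (for instance, via docker-machine ssh) would enrol them in the cluster as workers.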
Source: Bootstrapping a Docker Swarm Mode Cluster – Semaphore