Patrick Scott [GDD] Clusters (Part 2 of 2) 02 December 2018

In the last lesson we ran our first cluster. 
It was made up of VirtualBox VMs, and it didn’t really do anything at all, except exist. Yay us!

Today I’ll show you how to deploy whole stacks to Docker Swarm.
Let’s jump right in.
Let’s start by spinning up our small swarm mode cluster again. Start the command, and then continue on below - you can let the cluster start up in the background.
```
cd scripts
git pull origin master
eval $(docker-machine env node-1)
```
I want you to think of your cluster as a single, separate entity.
Docker is configured to run in “swarm mode”, which gives it access to special commands for operating a cluster; or, more succinctly, a Docker Swarm. It has its own IP address we’ll need to connect to.

When we were using Docker For Mac earlier, we would just run commands without much thought to where they were running. We didn’t need to.
Docker For Mac makes running on Mac transparent and simple. When we are running on our Mac, we can easily choose a container as an environment, mount our directory in, and start coding.
When we are running a cluster, however, our code is not on the remote server, so we can’t just mount it in. We need to get it there first.
Also, the docker-compose file we’ve been running for development mode is not appropriate for a production build running in our cluster, since it expects our code to be available to mount in.
We need a new config for this purpose.
In Docker Swarm, these configs are known as “stacks”, and we can deploy them using the command:
```
docker stack deploy
```
For this reason, as a convention, I generally create (at least) two stack files in each project.
`docker-compose.yml` is the default file name that docker-compose expects, so I use it to store my development stack configuration, since that’s generally what you want when you open up your IDE. So you’re basically a pro already!
For production, I create another file: `stack.yml`.
You can name it whatever you want, but as you’ll see in the continuous deployment setup portion, it’s convenient for scripting purposes to just use the same one every time.
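To make that concrete, here’s roughly how the two files get used (the stack name `myapp` is just a placeholder):

```
# development: docker-compose picks up docker-compose.yml by default
docker-compose up -d

# production: deploy stack.yml to the swarm under the name "myapp"
docker stack deploy -c stack.yml myapp
```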
There are a few things that need to be configured differently when running our cluster in “swarm mode”.
1. *Networks* - More care should be taken with network planning and provisioning, externally to the stack.
2. *Volumes* - We need to provide a plugin that interacts with the infrastructure provider to provision volumes that meet our specifications.
3. *No access to your machine* - This means you can’t do things such as mount in your source code like you would for development, or even save an image locally on your machine. Images will need to be pre-built and stored to a repository accessible by your cluster.
4. *Resource Config* - To run a container most effectively, you need to give the orchestrator hints of how much Memory and CPU should be reserved for it.
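For example, those resource hints go under each service’s `deploy` key in the stack file. A minimal sketch; the service name, image, and numbers here are made up for illustration:

```yaml
services:
  api:
    image: example/api:1.0.0      # hypothetical image
    deploy:
      resources:
        reservations:             # the scheduler sets aside this much per task
          cpus: "0.25"
          memory: 128M
        limits:                   # the container is capped at this much
          cpus: "0.50"
          memory: 256M
```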
With that in mind, let’s talk about networks.
The Docker tool has a command for networks - let’s run `docker network` to see the available network commands.
```
➜ docker network

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
```
In production we’ll probably want to run a separate `proxy` stack.

Because we’ll want other stacks to access the `proxy` network in order to access the actual proxy, we will elect to create a network named `proxy` externally to the stack.
```
➜ docker network create --driver=overlay proxy
```
NOTE: These commands only work in Swarm mode, which should be up and running by now from the script we ran at the beginning of the lesson.
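If you want to double-check that the network exists, you can list overlay networks (exact output will vary with your cluster):

```
➜ docker network ls --filter driver=overlay
```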
Let’s define a new file `proxy.yml` that will be our production proxy configuration.

While we are at it - I’m also gonna show you a new proxy, called Docker Flow Proxy (DFP).
It’s similar to NGINX, but has additional features built specifically for Docker Swarm.
DFP was made by my DevOps mentor, Viktor Farcic, who spent many hours answering my many questions. He’s awesome.
Let's check it out:
```yaml
version: "3.4"

networks:
  proxy:
    external: true

services:
  proxy:
    image: vfarcic/docker-flow-proxy:${TAG:-18.04.06-12}
    ports:
      - 80:80
    networks:
      - proxy
    environment:
      - LISTENER_ADDRESS=swarm-listener
    deploy:
      placement:
        constraints:
          - node.role == worker

  swarm-listener:
    image: vfarcic/docker-flow-swarm-listener:${DFPSL:-18.04.12-7}
    networks:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DF_NOTIFY_CREATE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/reconfigure
      - DF_NOTIFY_REMOVE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/remove
    deploy:
      placement:
        constraints:
          - node.role == manager
```
Ok, a few new things to unpack here. First, the network: it’s similar to before, but this time we simply specify `external: true`, which tells Docker to use the `proxy` network we created earlier instead of creating a new one for the stack.

Also you may have noticed we didn’t just run one service. We ran two.
What is this new swarm-listener service?
What gives DFP most of its power is actually its companion service, Docker Flow Swarm Listener (DFSL). It listens to your cluster, and when something of interest to DFP occurs, it notifies DFP.
In fact, DFP itself is only a thin wrapper around HAProxy, a battle-tested, production-ready open source load balancer (a free competitor to NGINX) used by some of the world’s largest internet companies.
This allows you to do some really awesome stuff.
Specifically, for DFP, it allows you to define the proxy configuration for each service alongside that particular service’s config. I’ll show you that tomorrow; we have enough to unpack for today.
In the DFSL config, we’re also making use of the Docker Swarm API for the first time, by running DFSL specifically on a manager node.
On the manager node there is a socket file (everything in Unix is a file, kinda like everything in JS is an object) that receives Swarm events. In order to access those events, we need to mount that file into our container.
That’s what the volume mount `/var/run/docker.sock:/var/run/docker.sock` is doing.
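As a quick illustration of what that mount buys you (not something DFSL requires you to run yourself), the Docker engine API can be queried directly over that socket, e.g. from a manager node:

```
# ask the engine for cluster info over the unix socket
curl --unix-socket /var/run/docker.sock http://localhost/info
```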
In both services, there are references to each other through the service name DNS resolution of Docker Swarm (we saw this earlier in docker-compose as well), and a new configuration section `deploy`.
`deploy` allows you to configure options that are only relevant in a deployment scenario. We generally want to create a constraint for most services to run on worker nodes. The main exception is a service that requires the Docker Swarm `.sock` file; that one needs to run on a manager node, as workers do not have access to it.
So finally, we should be able to deploy this file. Let’s do it!
```
# make sure env is configured before running!
eval $(docker-machine env node-1)
```
And deploy:
```
docker stack deploy -c proxy.yml proxy
Creating service proxy_proxy
Creating service proxy_swarm-listener
```
To check the status of our stack, we can use the `docker stack ps` command:
```
docker stack ps proxy
ID            NAME                    IMAGE                                          NODE    DESIRED STATE  CURRENT STATE
pu4ha5vqs3me  proxy_swarm-listener.1  vfarcic/docker-flow-swarm-listener:18.04.12-7  node-1  Running        Running less than a second ago
mohwynjqiax7  proxy_proxy.1           vfarcic/docker-flow-proxy:18.04.06-12          node-2  Running        Running less than a second ago
```
Let’s check out our proxy - we can use the IP address of any manager node to access the swarm. Because port 80 is the default port, we don’t need to specify a port.
```
➜ open "http://$(docker-machine ip node-1)"
```
You should see a simple 503 error, which just means that nothing has been configured yet: there are no routes for the proxy to direct you to. The proxy itself is responding, though, so congratulations!
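If you prefer checking from the terminal, something like this should print `503` while nothing is configured (assuming the swarm from this lesson is still running):

```
curl -s -o /dev/null -w "%{http_code}\n" "http://$(docker-machine ip node-1)"
```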

We’ll have to configure our service with information for the proxy, but we’ll leave that for later.
For now, let’s undo what we’ve created, and put our swarm to sleep.
```
➜ docker stack rm proxy
Removing service proxy_proxy
Removing service proxy_swarm-listener
```

```
WARNING: This will stop all running docker-machines!
Below are your running machines:
node-1   *   virtualbox   Running   tcp://   v18.03.1-ce
node-2   -   virtualbox   Running   tcp://   v18.03.1-ce
Are you sure you wish to continue? [y/n] y
Stopping "node-2"...
Stopping "node-1"...
Machine "node-2" was stopped.
Machine "node-1" was stopped.
```
Until next time.
Patrick “Stack Deployer” Scott