[GDD] Orchestrating Containers (Part 3 of 5) 26 November 2018

Hello, hello.
 
 
Welcome back :)
 
 
Today’s lesson is short, but powerful.
 
 
You’re going to learn how to run dependencies, like databases, and how to access them from your application.
 
 
Different orchestrators may handle this part differently. Don’t worry about that for now.
 
 
Let’s continue to focus on local development with our local orchestrator.
 
 
Say you wanted to run MongoDB as part of your application’s “stack”.
 
To do so, all you need to do is find the image name on Docker Hub and create a new entry in the `services` section of the `docker-compose.yml` file.
 
 
Here’s a full example of what that looks like, with our service running in development mode:
 
 
version: '3.4'

networks:
  # explicitly named network, using all the default settings
  mongo:

services:
  my-container:
    image: node:10
    volumes:
      - .:/usr/src/svc
    working_dir: /usr/src/svc
    ports:
      - "3000:3000"
    command: bash -c "npm i && npm run dev"
    environment:
      # the hostname must match the service name defined below
      - MONGO_URL=mongodb://mymongodb:27017/inventory
    networks:
      - mongo

  mymongodb:
    image: mongo
    networks:
      - mongo
 
To enable communication between the two services, they need to be on the same network.
 
By adding the `networks` key to each service’s config, we specify that they are on the same network and can therefore communicate with each other.
 
In docker-compose, when you run a `stack` file, a default network is created. It’s better to be explicit, though, so I’ve also defined a network named “mongo” without changing any of its default configuration.
 
Lastly, we added an `environment` section to my-container’s definition. In it, we passed in the hostname of the MongoDB service, which matches the service name defined below. Note that both are `mymongodb`.
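 
On the application side, all your code has to do is read that variable. Here’s a minimal sketch of what that might look like, assuming you’ve added the official `mongodb` driver (3.x) to your package.json:
 
// app.js: a minimal sketch, assuming the `mongodb` driver (npm i mongodb)
const { MongoClient } = require('mongodb');

// MONGO_URL comes from the `environment` section of docker-compose.yml
const url = process.env.MONGO_URL;

MongoClient.connect(url, { useNewUrlParser: true }, (err, client) => {
  if (err) throw err;
  console.log('Connected to', url);
  const db = client.db('inventory');
  // ...query away...
  client.close();
});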
 
Most orchestrators have an internal DNS service that allows this to happen. When your app connects to `mymongodb`, Docker will look up where the service `mymongodb` is running and resolve the name to that address, as long as it’s on the same network.
 
This is known as “Service Discovery”.
 
Service Discovery used to be difficult. 
 
You used to need to run a distributed key-value store called Consul and keep track of where each container was scheduled. Fortunately, those days are past.
 
Now... you just make sure the containers are on the same network, and then you can access them by name. It almost feels too easy, and some people feel like they are missing something.
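 
Want to see it for yourself? One illustrative way, assuming the stack is up, is to resolve the name from inside the app container using Node’s built-in `dns` module (it’s already in the image):
 
docker-compose exec my-container node -e "require('dns').lookup('mymongodb', (err, addr) => console.log(addr))"
 
That should print the private IP that Docker assigned to the mongo container on the “mongo” network.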
 
10 lines in total added to our config to run an entire database on a secure private network. Not bad, right?
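 
And to try it all out, bring the stack up from the directory containing the file:
 
docker-compose up
 
Both containers start together, and docker-compose prefixes each line of output with the service name, so you can watch your app and the database boot side by side.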
 
Docker Swarm, the clustered version of Docker for production workloads, works in exactly the same way.
 
Meaning it’s just as easy to run that same database in production.

Unfortunately, in its current state, if you do that, you WILL lose all the data.
 
Even locally, as it stands, every time you recreate the stack you’d be starting with a blank database. That can be convenient when you want it, but starting each day with a db import isn’t really a great use of time.
 
There is, however, a very easy fix for this.
 
We’ll talk about it soon!
 
Patrick "the Data Persister" Scott