[GDD] Orchestrating Containers (Part 1 of 5) 20 November 2018

Today is an exciting day.

 
For today, we shall orchestrate!
 
 
First, we will use one container.
 
 
If you haven’t Dockerized the example app from the last email yet - do that now!
 
 
We will be using the Next.js application built in this tutorial as an example:
 
SSR Split Testing and Analytics with React, Redux, and Next.js
 
 
---
 
 
Ok. Ready to proceed?
 
 
In the above article, I went through the process of “Dockerizing” an application. I also showed how to do some more advanced things like running "integration tests". Don't worry about the more advanced stuff just yet.
 
 
In the last email, I asked you to do the same for an example application I provided, which uses Next.js.
 
 
As I said, it’s pretty similar to the one I provided in the article.
 
 
Here’s the Dockerfile required (also on GitHub):
 
# build stage: install dependencies, run the build, then prune dev dependencies
FROM node:10-alpine AS build

 
# install gyp tools
# if you have an npm dependency that depends on native code
# like the redis package, you'll need these tools to compile
# that dependency
RUN apk add --update --no-cache \
        python \
        make \
        g++
 
ADD . /src
WORKDIR /src
RUN npm install
# RUN npm run lint
# RUN npm run test
RUN npm run build
RUN npm prune --production

 
 
# runtime stage: a slim image with only the build output and production dependencies
FROM node:10-alpine
 
RUN apk add --update --no-cache curl
 
ENV PORT=3000
EXPOSE $PORT
 
ENV DIR=/usr/src/service
WORKDIR $DIR
 
COPY --from=build /src/package.json package.json
COPY --from=build /src/package-lock.json package-lock.json
COPY --from=build /src/node_modules node_modules
COPY --from=build /src/.next .next
 
HEALTHCHECK --interval=5s \
            --timeout=5s \
            --retries=6 \
            CMD curl -fs http://localhost:$PORT/_health || exit 1
 
CMD ["npm", "run", "start"]
 
All I did was comment out the “lint” and “test” commands, as we haven’t configured linting or testing yet.
 
 
Next, I changed /dist to /.next, as that’s where Next.js puts its build output, and updated the command to “npm run start”.
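 
For reference, here are the scripts that “npm run build” and “npm run start” point at in package.json - a sketch assuming a standard Next.js setup, so your exact entries may differ:
 
"scripts": {
  "build": "next build",
  "start": "next start"
}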
 
 
THAT’S IT!
 
 
That wasn't so bad, was it?
 
 
One of the most powerful things you can do as an engineer is keep templates that you can look up and reuse whenever you come across the same problem again.
 
 
If you have Node.js apps or services, the Dockerfile above will generally only need small modifications.
 
 
I recommend heading over to gist.github.com and saving yourself a copy of this “Multi-stage Node.js build”. This build will work on pretty much any Node.js project with a few small tweaks.
 
 
Ready to run it!?
 
 
First, build the image:

docker build -t my-container .
 
As a result, you will have a brand new, (almost) production-ready container image! (I can’t call it production-ready without tests!)
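 
 
If you want to double-check that it’s really there, listing your local images should show it (standard Docker CLI, nothing specific to this project):
 
docker image ls my-container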
 
 
Try it out, and give it a run with the command below. The -p 3000:3000 flag publishes port 3000 inside the container on port 3000 of your machine.

docker run -p 3000:3000 my-container

 
Here’s the output from my terminal:
 
 
➜ docker run -p 3000:3000 my-container
> [email protected] start /usr/src/service
> next start
> Ready on http://localhost:3000
 
 
And in my browser...
 
 
This container will run the same on ANY machine - whether that’s a server, your laptop, or my laptop. Containers are kinda like virtual machines, but they use resources much more efficiently because they share the host’s kernel at a deep, internal level while still staying isolated from each other.
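 
 
While it’s running, the HEALTHCHECK we baked into the Dockerfile is polled every five seconds. In another terminal, docker ps shows the result in the STATUS column - something like “Up 30 seconds (healthy)”:
 
docker ps
# or just the health status (replace <container-id> with the ID that docker ps prints)
docker inspect --format '{{.State.Health.Status}}' <container-id>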

 
When we now work with the container we’ve just built, we are no longer dealing with a Node.js app; instead, we are dealing with a container. That means we can use container orchestration tools to deploy and run our application.
 
 
(Just like cranes can lift shipping containers no matter what cargo is inside.)
 
 
The first “crane” I want to introduce you to is called “docker-compose”.
 

Docker Compose is a local orchestration tool.
 
 
It’s most useful for orchestrating containers running on a single machine, like your laptop.
 
 
In my article, I showed how to use it for orchestrating staging tests.
 
 
If that still seemed a bit complicated, don’t worry - you’ll get there.
 
 
For today, let’s start by orchestrating a single container - the most basic example there is.
 
 
In your project, add a new file docker-compose.yml
 
 
Remember how I talked about codified configurations?
 
 
Codified configurations are great because they reduce the number of command-line utilities and flags you need to memorize, ensure that your containers always run the same way with the same settings, and make the setup easy to share with other team members.
 
They are easily reproducible, because, well, that's what code does!
 
 
This is a codified configuration for running the container you built (also on GitHub):
 
version: '3.4'
 
services:
 
  my-container:
    image: my-container
    ports:
      - 3000:3000
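 
As an aside, if you’d rather let docker-compose build the image for you instead of relying on the docker build you ran earlier, you can add a build key - a small variation on the file above, not something this tutorial requires:
 
version: '3.4'
 
services:
  my-container:
    build: .
    image: my-container
    ports:
      - 3000:3000
 
With that in place, docker-compose up --build rebuilds the image before starting it.
 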
Let's run the same container now using docker-compose instead of docker.

If your other container is still running, press CTRL+C to stop it.

 
Then, running docker-compose up should yield the same results as before!
 
 
➜ split-test-tutorial git:(Dockerfile) ✗ docker-compose up
Creating network "splittesttutorial_default" with the default driver
Creating splittesttutorial_my-container_1 ... done
Attaching to splittesttutorial_my-container_1
my-container_1 |
my-container_1 | > [email protected] start /usr/src/service
my-container_1 | > next start
my-container_1 |
my-container_1 | > Ready on http://localhost:3000
 
Pretty neat, right?
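 
A few follow-up commands you’ll probably reach for next (standard docker-compose commands, nothing specific to this project):
 
# run in the background instead of attaching to the logs
docker-compose up -d
 
# see what is running, and tail the logs
docker-compose ps
docker-compose logs -f
 
# stop and remove the containers (and the network compose created)
docker-compose down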
 
As you caught a glimpse of in my article, this stuff is the foundation of some really powerful techniques that can save you A TON of time. It can also greatly improve your testing, code quality, and deployment processes!
 
And of course, you’re probably wondering “What about running it in production? Can I use docker-compose for that, too?”
 
Woahhh slow down there, tiger! Is this even the best way to run our apps locally?... or is there a better way?
 
Hint... There's a better way - sometimes.
 
More soon...
 
Until next time!
Patrick "The Orchestrator" Scott