
Alright!
 
 
We’ve Dockerized a Node.js app, run it using docker-compose, and explored development orchestration!
 
 
I hope you’re starting to see how much time can be saved by using containers. 
 
It may seem like a lot up front, but it pays off in leaps and bounds later on.
 
 
It offers improvements at literally every stage of the software development lifecycle.
 
 
It allows you to build software that runs consistently, no matter the environment.
 
 
It allows you to release faster, and with more confidence.
 
 
And, it allows you to stand on the shoulders of giants, by making use of containers that others have already perfected!
 
 
As much as we’ve covered, we’ve only scratched the surface.
 
Something I often say to people: "The more you know, the more you know you don't know!"
 
 
I want to get you moving on to running services in production on AWS, and can only fit so much in my free course!
 
 
We have yet to touch secret management or configuration management, and we haven’t even gotten close to logging or monitoring (and we probably won’t in this mini-course).
 
 
We haven’t talked about API gateways to secure your public-facing APIs, or how to easily set up single sign-on for all of your apps. Again, not something we’ll have time to cover in the mini-series!
 
There is one thing we simply cannot skip.
 
 
I firmly believe that proxying is one of the most powerful techniques you can learn.
 
 
Think about this:
 
 
To Google, "www.yoursite.com" is not the same site as "yoursite.com".
 
 
Proxies allow you to host multiple applications at the same domain.
 
 
This means you could CRUSH SEO by, for example, hosting your blog at www.yoursite.com/blog/ instead of blog.yoursite.com.
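Here's a minimal sketch of what that looks like in an nginx config (the upstream names `website` and `blog` are hypothetical, standing in for two separate services):

events {}
http {
  server {
    listen 80;

    # The main site stays at the root path
    location / {
      proxy_pass http://website:3000;
    }

    # The blog lives at /blog/ on the SAME domain
    # (the trailing slash strips the /blog/ prefix before passing it on)
    location /blog/ {
      proxy_pass http://blog:4000/;
    }
  }
}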
 
 
This ALONE is worth using a reverse proxy for, because you haven't mastered the internet unless you've also mastered traffic.
 
 
Then throw in that a proxy is basically essential due to the constraints of running a cluster (you only have one of each port... more on that later), and that it can also solve scaling issues for difficult-to-scale applications such as server-rendered React, or serve static sites.
 
It's something you can't really live without once you start using a cluster.
 
 
There are quite a few options for proxies, and we’ll set one up as the last part of our development tooling lessons. There are more traditional, battle-tested, highly scalable proxies, and newer tools like `express-gateway` that can proxy, but also do much more.
 
 
We’re going to use the popular proxy Nginx for this one. I want to use the simplest config possible to demonstrate, while also showing off some of its power.
 
 
When scaling Node.js applications, for example, it’s quite easy to add compression middleware. However, this doesn’t scale very well: compression is CPU-intensive work, and it competes with your application logic on Node’s single-threaded event loop.
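For reference, here's what the middleware approach looks like (a minimal sketch, assuming an Express app and the popular `compression` package; neither is part of the stack we build below):

const express = require('express');
const compression = require('compression');

const app = express();

// Compresses responses in-process: the gzip work happens on Node's
// single thread, stealing CPU time from your actual application logic.
app.use(compression());

app.get('/', (req, res) => res.send('Hello!'));

app.listen(3000);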
 
 
It’s better to offload that task to a high-throughput optimized web server, like Nginx.
 
 
Therefore, in our example, I’ll simply show how to enable gzip compression for our service with Nginx.
 
 
Using a reverse proxy has many implications and potential benefits for running all sorts of services, and for creating uniformity between microservices with a common routing schema.
 
 
This one use case is only the tip of the iceberg of proxies... We'll see another proxy later in GDD.
 
 
Let’s get to the example.
 
 
To do so, we will want another private network as well.
 
 
Here’s our new stack:
 
version: '3.4'

 
networks:
  mongo:
 
  proxy:
 
services:
 
  proxy:
    image: nginx
    ports:
      - 80:80
    networks:
      - proxy
    command: |
      bash -c 'bash -s <<EOF
      cat > /etc/nginx/nginx.conf <<EON
      daemon off;
      events {}
      http {
        gzip on;

        server {
          listen 80;
          location / {
            proxy_pass http://my-container:3000;
          }
        }
      }
      EON
      nginx
      EOF'
 
  my-container:
    image: node:10
    volumes:
      - .:/usr/src/svc
    working_dir: /usr/src/svc
    command: bash -c "npm i && npm run dev"
    environment:
      - MONGO_URL=mongodb://mymongodb:27017/mydbname
    networks:
      - mongo
      - proxy
 
  mymongodb:
    image: mongo
    networks:
      - mongo
    volumes:
      - mongo:/data/db
 
volumes:
  mongo:
*If you're like me, the `command: |` part of the config probably made your head explode. That's coming up soon!*
 
First, we are no longer exposing port 3000 of `my-container`. Instead, it joins a newly declared `proxy` network. Also on the `proxy` network is a new service named `proxy`.

 
The `proxy` service is exposing port 80.
 
 
Port 80 is significant. It is the port that HTTP is served over. If you connect to a website at http://example.com you ARE connecting through port 80. 
 
 
Seeing as our cluster can expose port 80 only once, we’d only be able to expose one of our services! Using a reverse proxy allows us to expose many, through different domain names or paths, all on our single port 80.
 
 
You can serve one application at /, and another at /different-app for example, all on the same domain (which can be great for SEO).
 
 
Through the proxy, you can also decide to serve one application at www.example.com, and another at awesome.example.com.
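With nginx, that's one `server` block per hostname, matched by the `server_name` directive (a sketch; the upstream names are hypothetical):

events {}
http {
  # Requests with Host: www.example.com go to one app...
  server {
    listen 80;
    server_name www.example.com;
    location / {
      proxy_pass http://main-app:3000;
    }
  }

  # ...and Host: awesome.example.com goes to another, on the same port 80
  server {
    listen 80;
    server_name awesome.example.com;
    location / {
      proxy_pass http://awesome-app:3000;
    }
  }
}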
 
 
Do you know which port HTTPS is served through?
 
(I'll get to that later, let's focus!)
 
 
Back to our nginx config. You’ll notice the `command` portion is a bit complex.
 
This is just because we are using a little YAML and unix magic to avoid creating a new configuration file, and instead just doing it inline.
 
In YAML, the pipe character `|` signifies the start of a literal multiline string.
 
You could also start a folded multiline string using the `>` character.
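A quick side-by-side (the keys are hypothetical, just to show the difference):

# Literal block: newlines are preserved -> "line one\nline two\n"
literal: |
  line one
  line two

# Folded block: newlines become spaces -> "line one line two\n"
folded: >
  line one
  line two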
 
We are using two heredocs here, delimited by `EOF` and `EON`: the shell reads everything up to the closing marker as input. The outer heredoc (`EOF`) feeds a script to `bash -s`, and the inner one (`EON`) pipes the configuration we’re creating on the fly into /etc/nginx/nginx.conf.
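If heredocs are new to you, here's the pattern in isolation (a minimal sketch, using throwaway /tmp paths):

# Everything between <<MARKER and a line containing only MARKER
# is fed to the command on stdin
cat > /tmp/example.conf <<CONF
some_setting on;
CONF

# They can nest: the outer heredoc feeds a script to bash,
# and the inner one writes a file from within that script
bash -s <<OUTER
cat > /tmp/inner.conf <<INNER
another_setting off;
INNER
OUTER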
 
 
If you have more complex configurations, you’ll probably want to maintain a configuration file instead, and "bake" it into the image, or mount it in using a volume.
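For example, with an `nginx.conf` sitting next to your docker-compose file, the `proxy` service could drop the inline script entirely (a sketch; note the mounted file wouldn't need `daemon off;`, since the image's default command already runs nginx in the foreground):

  proxy:
    image: nginx
    ports:
      - 80:80
    networks:
      - proxy
    volumes:
      # Mount our local config over the image's default, read-only
      - ./nginx.conf:/etc/nginx/nginx.conf:ro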
 
 
Let’s zoom in on just the nginx configuration.
 
daemon off;
events {}
http {
  gzip on;

  server {
    listen 80;
    location / {
      proxy_pass http://my-container:3000;
    }
  }
}
The most important piece I want you to notice for now is the `proxy_pass` line in the `location /` declaration.

 
This is where the magic happens.
 
 
When we send a request to http://localhost/, we hit port 80, which is defined by the line `listen 80` and exposed by the `proxy` service: our nginx container.
 
Nginx will then match the path `/` against the `location /` configuration, which tells the proxy to pass the request along to `my-container:3000`.
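You can watch that whole chain in action from your machine once the stack is up (`docker-compose up`):

# Hits nginx on port 80, which proxies the request to my-container:3000
curl -i http://localhost/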
 
 
As an additional benefit, our service is now running on a private network, and is never directly exposed to the public internet. This greatly improves the security of our systems.
 
 
The tiny line `gzip on` enables compression for the pages served by nginx.
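To confirm it's working, make the same request but advertise gzip support; nginx only compresses when the client says it can handle it:

# Look for "Content-Encoding: gzip" in the response headers
curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null http://localhost/

One caveat worth knowing: out of the box, nginx only gzips `text/html` responses; you'd add a `gzip_types` directive to cover JSON, CSS, JavaScript, and friends.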
 
 
And we're running a proxy! The hardest part was the Unix! :)
 
 
This wraps up the development orchestration section of GDD. Next, we’re going to enter the process of moving your services into production!!
 
 
This seems like a good checkpoint to let you know, you can grab the most up to date version of the code right here: 
 
 
 
Note that it's on the "docker-compose/proxy" branch. If you clone it locally, you'll need to check it out.
 
git fetch -p
git checkout docker-compose/proxy
 
NEXT UP - PRODUCTION!
 
 
Anyone excited? I am!
 
 
And I hope you are too!
 
If you haven't been following along with the exercises, I beg of you: take some time before continuing and try out some of these development orchestration techniques.
 
Learning without a change in behavior is a waste of time, and just hoarding knowledge. You have not learned if you do not change your behavior.
 
 
Adios.
 
Patrick “Reverse Proxy” Scott
P. S. If this part seemed a bit more complicated than some of the others, don’t sweat it!

 
Most of the time, in modern DevOps setups, you don’t even need to control proxies directly - you can let the orchestrator, or a third-party service, watch your services and handle the proxying for you.
 
 
It’s an advanced topic that I’ll cover in my upcoming DevOps Bliss course, and it’s ridiculously powerful.
 
P.P.S. If you connect to HTTPS you are connecting through port 443.