Migrating Existing Services to Docker - Part Three

When we last left off, we had set up a docker-compose.yml file that allows us to have multiple Nginx containers that are wired up to an HAProxy container for load balancing and traffic distribution. At this point, though, there's no way for us to enable SSL encryption. Our containers serve HTTP, and that's it. Let's change that.

For us to be able to encrypt our traffic, we need to do a few things:

  • Obtain certificates from our choice of provider.
  • Configure our docker-compose.yml file to pass said certificates to our Nginx containers.
  • Configure our Nginx containers to make use of those certificates.
  • Configure our HAProxy container to use TCP passthrough to our Nginx containers.
  • Add a new Nginx container and configure HAProxy to handle LetsEncrypt requests separately (extra credit).

The reason we're doing passthrough to our containers, rather than SSL termination at our HAProxy container, is so that we can easily use different SSL certificates for each of our Nginx containers.

1. The easy part.

The first step is to get our SSL certificates. I've chosen to use LetsEncrypt for this, because it's super simple to get started. You simply download their certbot client, and you can get your certificates right away. If you've already got your certificates, or want to use another provider, that's fine. You'll just want to pick up from step 1a below.
Note: If you still have your web stack running, you may run into errors when you run certbot, as port 80 will already be in use. To solve this, you can either stop your web stack with docker-compose down from the folder where your docker-compose.yml file is, or you can specify the proper webroot to the certbot command. We'll be solving this problem for good later in the post.
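For reference, obtaining a certificate with certbot's standalone mode (which temporarily binds port 80 itself, so the stack needs to be down while it runs) looks roughly like this; the domain is just an example, so substitute your own:

certbot certonly --standalone -d akpwebdesign.com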

Once your certificates are downloaded, there are a few places they go, and you'll need to make note of these places for later. By default, symlinks to the certificate files are placed in /etc/letsencrypt/live/[domain.com]/, while the actual files are stored in /etc/letsencrypt/archive/[domain.com]/.

1a. dhparam.pem

Besides knowing where our certificate files are located, there is one more file we need before we move on: dhparam.pem. This is required for our Nginx SSL configuration, and you can generate it using the following command from somewhere outside of your web root(s): openssl dhparam -out dhparam.pem 4096. Depending on your server's processing power, the command might take a little while to run. If it's taking too long for you, and you're alright with sacrificing a little bit of security, you can swap out the number of bits (4096) for 2048 or 1024. The dhparam.pem file allows you to use ciphers that have forward secrecy, and using a strong dhparam.pem will protect you from attacks such as Logjam.
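If you'd like to sanity-check the result, openssl can print the generated parameters back out, which is a quick way to confirm the file is valid and the bit length is what you expected:

openssl dhparam -in dhparam.pem -text -noout | head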

2. Editing our docker-compose configuration

Now that we've got our certificate files ready, as well as our dhparam.pem file, we can move on to changing our docker-compose.yml file to point our containers at the proper locations.

docker-compose.yml

version: '2'  
services:  
################################
#         Nginx Websites       #
################################
  akpwebdesign.com:
    image: nginx:alpine
    restart: always
    expose:
      - 443
    volumes:
      - /srv/websites/nginx.conf:/etc/nginx/nginx.conf:ro
      - /srv/websites/dhparam.pem:/etc/nginx/dhparam.pem:ro
      - /srv/websites/akpwebdesign.com/:/usr/share/nginx/html/:ro
      - /etc/letsencrypt/live/akpwebdesign.com/:/usr/share/nginx/ssl/
      - /etc/letsencrypt/archive/akpwebdesign.com/:/usr/share/archive/akpwebdesign.com/

  customersite.com:
    image: nginx:alpine
    restart: always
    expose:
      - 443
    volumes:
      - /srv/websites/nginx.conf:/etc/nginx/nginx.conf:ro
      - /srv/websites/dhparam.pem:/etc/nginx/dhparam.pem:ro
      - /srv/websites/customersite.com/:/usr/share/nginx/html/:ro
      - /etc/letsencrypt/live/customersite.com/:/usr/share/nginx/ssl/
      - /etc/letsencrypt/archive/customersite.com/:/usr/share/archive/customersite.com/

################################
#            Extras            #
################################
  haproxy:
    image: haproxy:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/websites/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - /srv/websites/haproxy/errors/:/usr/local/etc/haproxy/errors/:ro
      - /dev/log:/dev/log
    links:
      - akpwebdesign.com
      - customersite.com:cust01

As you can see, I've made just a few changes to each of our Nginx containers, as well as a change to our HAProxy container. To the Nginx containers, I've added the dhparam.pem file that I generated, as well as both of the LetsEncrypt certificate locations. Both are needed because if you only mount the live/ folder, your container will see the symlinks, but the files they point to (in archive/) won't be available. That should also explain the slightly odd way in which those volumes are set up.

The HAProxy container now publishes port 443 alongside the existing port 80, and the Nginx containers now expose port 443 instead of port 80.
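Applying these changes is just a matter of recreating the containers from the folder that holds your docker-compose.yml; afterwards, a quick optional check that the certificate symlinks resolve inside a container is to read one of the files through the symlink:

docker-compose up -d
docker-compose exec akpwebdesign.com cat /usr/share/nginx/ssl/fullchain.pem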

3. Nginx configuration

For the Nginx configuration, I've chosen to start with a pretty basic setup which simply loads in our SSL certificates and uses Mozilla's suggested "Intermediate" cipher suite for nginx.

nginx.conf

events {  
  worker_connections  4096;  ## Default: 1024
}

http {  
  include mime.types;
  gzip on;
  server_tokens off;

  set_real_ip_from 172.17.0.0/24;
  real_ip_header X-Forwarded-For;

  access_log /var/log/nginx/access.log;

  server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /usr/share/nginx/ssl/fullchain.pem;
    ssl_certificate_key /usr/share/nginx/ssl/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/nginx/dhparam.pem;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /usr/share/nginx/ssl/fullchain.pem;

    resolver 8.8.8.8 8.8.4.4 valid=300s; # I'm using Google's public DNS. Feel free to use whatever DNS resolver you'd like.

    root /usr/share/nginx/html;
  }
}

You might have noticed that I've removed the server block that listens on port 80 from our nginx.conf entirely. This is because HAProxy will be handling the redirect to SSL for us, so the Nginx containers never need to worry about plain HTTP.

There are a lot of other things we can do with the Nginx configuration regarding SSL, including Public Key Pinning and more HTTP Strict Transport Security options. I'll leave those as exercises for you, but feel free to reach out via the comments below if you'd like some more information about that.
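As one small example of those extra HSTS options, the header can be extended to cover subdomains as well; be careful with this one, since once browsers have cached it, every subdomain has to serve valid HTTPS:

add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;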

4. HAProxy configuration

The HAProxy container will be set up to do TCP passthrough to our Nginx containers. This allows the Nginx containers to be the ones to send our SSL certificates, so we don't need to have them all installed on the HAProxy container. One downside to this method is that HAProxy is unable to inject the X-Forwarded-For header, so do not use this method if it is imperative that you can easily log the IP addresses of traffic to your site.

haproxy.cfg

global  
  log /dev/log local0
  log /dev/log local1 info
  maxconn 2048

defaults  
  log global
  mode http
  option httplog
  option dontlognull
  option forwardfor
  option http-server-close
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  errorfile 400 /usr/local/etc/haproxy/errors/400.http
  errorfile 403 /usr/local/etc/haproxy/errors/403.http
  errorfile 408 /usr/local/etc/haproxy/errors/408.http
  errorfile 500 /usr/local/etc/haproxy/errors/500.http
  errorfile 502 /usr/local/etc/haproxy/errors/502.http
  errorfile 503 /usr/local/etc/haproxy/errors/503.http
  errorfile 504 /usr/local/etc/haproxy/errors/504.http

#######################
# Non-Secure Frontend #
#######################
frontend www
  bind 0.0.0.0:80
  option http-server-close
  option forwardfor

  # Redirect AKP Web Design sites to SSL.
  redirect scheme https code 301 if { hdr(host) -i akpwebdesign.com } !{ ssl_fc }

  # Redirect customer sites to SSL.
  redirect scheme https code 301 if { hdr(host) -i customersite.com } !{ ssl_fc }

  # No default_backend, as certs would be broken anyway.
  # default_backend null

################
# SSL Frontend #
################
frontend ssl
  mode tcp
  option tcplog

  bind 0.0.0.0:443

  option socket-stats
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # AKP Web Design sites
  use_backend akp if { req_ssl_sni -i akpwebdesign.com }

  # Customer sites
  use_backend customer if { req_ssl_sni -i customersite.com }

  # No default_backend, as certs would be broken anyway.
  # default_backend null

####################
# akpwebdesign.com #
####################
backend akp  
  mode tcp

  # maximum SSL session ID length is 32 bytes.
  stick-table type binary len 32 size 30k expire 30m

  acl clienthello req_ssl_hello_type 1
  acl serverhello rep_ssl_hello_type 2

  # use tcp content accepts to detect ssl client and server hello.
  tcp-request inspect-delay 5s
  tcp-request content accept if clienthello

  # no timeout on response inspect delay by default.
  tcp-response content accept if serverhello

  stick on payload_lv(43,1) if clienthello

  # Learn on response if server hello.
  stick store-response payload_lv(43,1) if serverhello

  option ssl-hello-chk

  server web1 akpwebdesign.com:443

####################
# customersite.com #
####################
backend customer  
  mode tcp

  # maximum SSL session ID length is 32 bytes.
  stick-table type binary len 32 size 30k expire 30m

  acl clienthello req_ssl_hello_type 1
  acl serverhello rep_ssl_hello_type 2

  # use tcp content accepts to detect ssl client and server hello.
  tcp-request inspect-delay 5s
  tcp-request content accept if clienthello

  # no timeout on response inspect delay by default.
  tcp-response content accept if serverhello

  stick on payload_lv(43,1) if clienthello

  # Learn on response if server hello.
  stick store-response payload_lv(43,1) if serverhello

  option ssl-hello-chk

  server web1 cust01:443

What this configuration does differently from the previous one is that it redirects any plain HTTP traffic to SSL for us, and once traffic hits the SSL frontend, it matches on req_ssl_sni rather than the Host header, since we can't see the Host header in TCP mode. It purposefully does not specify a default_backend for either frontend: there's no plain-HTTP backend to fall back to, and a default backend for the SSL traffic would send the wrong certificate anyway.
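If you want to verify that the SNI routing is doing what you expect, one quick check is to request each hostname explicitly with openssl from another machine and confirm the right certificate comes back (replace your.server.address with your server's IP or hostname):

openssl s_client -connect your.server.address:443 -servername akpwebdesign.com </dev/null 2>/dev/null | openssl x509 -noout -subject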

We now have a fully SSL web stack. We could stop here and call it done, but I wanted to throw in one more little quality-of-life fix for the stack.

5. Extra Credit: LetsEncrypt Nginx Container

This container and its related configuration changes to HAProxy will allow us to renew and create SSL certificates without bringing down our whole stack to do so, and without having to specify a custom webroot for each set of certs we need.

First things first, let's add the following service configuration to the docker-compose.yml file.

docker-compose.yml (excerpt)

  letsencrypt:
    image: nginx:alpine
    restart: always
    expose:
      - 80
    volumes:
      - /srv/websites/nginx-LE.conf:/etc/nginx/nginx.conf:ro
      - /srv/websites/LetsEncrypt/html/:/usr/share/nginx/html/:ro

You will also need to add a new link to the HAProxy container configuration. You can simply add - letsencrypt:le to what's already there.
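The haproxy service's links section then ends up looking like this:

docker-compose.yml (excerpt)

    links:
      - akpwebdesign.com
      - customersite.com:cust01
      - letsencrypt:le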

The LetsEncrypt Nginx container gets its own configuration file (nginx-LE.conf, rather than our standard nginx.conf) and its own webroot. I've created a mostly empty index.html file to put in that webroot, which is what will be seen by anyone who reaches your server with an incorrect hostname.

nginx-LE.conf

events {
  worker_connections  4096;  ## Default: 1024
}

http {
  include mime.types;
  gzip on;
  server_tokens off;

  set_real_ip_from 172.17.0.0/24;
  real_ip_header X-Forwarded-For;
  
  access_log /var/log/nginx/access.log;

  server {

    listen 80 default_server;
    listen [::]:80 default_server;

    root /usr/share/nginx/html;
  }
}

index.html

<!DOCTYPE html>
<html>
<head>
	<title>404 Website Not Found</title>
</head>
<body>
	<p>There is no content available here. Please try again later.</p>
</body>
</html>

The last changes that are needed are to our HAProxy configuration. In the frontend www section of haproxy.cfg, we have to skip the redirect to SSL when the path being accessed begins with /.well-known/acme-challenge, and instead force HAProxy to send those requests to the LetsEncrypt backend. We also need to define a default_backend.

haproxy.cfg (excerpt)

frontend www
  bind 0.0.0.0:80
  option http-server-close
  option forwardfor

  # Redirect AKP Web Design sites to SSL.
  redirect scheme https code 301 if { hdr(host) -i akpwebdesign.com } !{ ssl_fc } !{ path_beg /.well-known/acme-challenge }

  # Redirect customer sites to SSL.
  redirect scheme https code 301 if { hdr(host) -i customersite.com } !{ ssl_fc } !{ path_beg /.well-known/acme-challenge }

  # LetsEncrypt backend, no SSL.
  use_backend le if { path_beg /.well-known/acme-challenge }

  # Default backend.
  default_backend le

Next, we need to define the le backend, so HAProxy knows where to send that traffic. This part is just a simple HTTP backend, which you can add at the bottom of your file. Note: an earlier version of this post said to redirect to the LetsEncrypt backend if the path began with /.well-known/, which is not specific enough. Thanks to Reddit user /u/tialaramex for providing me with more information.

haproxy.cfg (excerpt)

###############
# LetsEncrypt #
###############
backend le
  mode http
  server web1 le:80

And that's it! Once we start (or restart) our web stack using docker-compose up -d, everything runs SSL-only, except for the LetsEncrypt container, which listens on port 80 and accepts traffic for certbot. Now, when you run certbot, you can specify --webroot -w /srv/websites/LetsEncrypt/html (the host folder that's mounted as the LetsEncrypt container's webroot), and you won't have to shut down your whole web stack just to renew certificates.
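For example, issuing or renewing a certificate for one of the sites now looks something like this (again, substitute your own domain):

certbot certonly --webroot -w /srv/websites/LetsEncrypt/html -d akpwebdesign.com
certbot renew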

For posterity, here's the entire haproxy.cfg file as it currently stands.

haproxy.cfg

global  
  log /dev/log local0
  log /dev/log local1 info
  maxconn 2048

defaults  
  log global
  mode http
  option httplog
  option dontlognull
  option forwardfor
  option http-server-close
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  errorfile 400 /usr/local/etc/haproxy/errors/400.http
  errorfile 403 /usr/local/etc/haproxy/errors/403.http
  errorfile 408 /usr/local/etc/haproxy/errors/408.http
  errorfile 500 /usr/local/etc/haproxy/errors/500.http
  errorfile 502 /usr/local/etc/haproxy/errors/502.http
  errorfile 503 /usr/local/etc/haproxy/errors/503.http
  errorfile 504 /usr/local/etc/haproxy/errors/504.http

#######################
# Non-Secure Frontend #
#######################
frontend www
  bind 0.0.0.0:80
  option http-server-close
  option forwardfor

  # Redirect AKP Web Design sites to SSL.
  redirect scheme https code 301 if { hdr(host) -i akpwebdesign.com } !{ ssl_fc } !{ path_beg /.well-known/acme-challenge }

  # Redirect customer sites to SSL.
  redirect scheme https code 301 if { hdr(host) -i customersite.com } !{ ssl_fc } !{ path_beg /.well-known/acme-challenge }

  # LetsEncrypt backend, no SSL.
  use_backend le if { path_beg /.well-known/acme-challenge }

  # Default backend.
  default_backend le

################
# SSL Frontend #
################
frontend ssl
  mode tcp
  option tcplog

  bind 0.0.0.0:443

  option socket-stats
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # AKP Web Design sites
  use_backend akp if { req_ssl_sni -i akpwebdesign.com }

  # Customer sites
  use_backend customer if { req_ssl_sni -i customersite.com }

  # No default_backend, as certs would be broken anyway.
  # default_backend null

####################
# akpwebdesign.com #
####################
backend akp  
  mode tcp

  # maximum SSL session ID length is 32 bytes.
  stick-table type binary len 32 size 30k expire 30m

  acl clienthello req_ssl_hello_type 1
  acl serverhello rep_ssl_hello_type 2

  # use tcp content accepts to detect ssl client and server hello.
  tcp-request inspect-delay 5s
  tcp-request content accept if clienthello

  # no timeout on response inspect delay by default.
  tcp-response content accept if serverhello

  stick on payload_lv(43,1) if clienthello

  # Learn on response if server hello.
  stick store-response payload_lv(43,1) if serverhello

  option ssl-hello-chk

  server web1 akpwebdesign.com:443

####################
# customersite.com #
####################
backend customer  
  mode tcp

  # maximum SSL session ID length is 32 bytes.
  stick-table type binary len 32 size 30k expire 30m

  acl clienthello req_ssl_hello_type 1
  acl serverhello rep_ssl_hello_type 2

  # use tcp content accepts to detect ssl client and server hello.
  tcp-request inspect-delay 5s
  tcp-request content accept if clienthello

  # no timeout on response inspect delay by default.
  tcp-response content accept if serverhello

  stick on payload_lv(43,1) if clienthello

  # Learn on response if server hello.
  stick store-response payload_lv(43,1) if serverhello

  option ssl-hello-chk

  server web1 cust01:443

###############
# LetsEncrypt #
###############
backend le
  mode http
  server web1 le:80

In the next post, I'll detail how we can add Node.js applications to our stack, while using Nginx as a reverse proxy to add SSL support.
