Last time, I discussed setting up OpenVPN so I could access the systems on my home network remotely. One of the systems I use is nginx. I use it primarily as a reverse proxy to my other systems (gogs, drone, etc.). The reasoning behind this is that I can use nginx as a sort of TLS termination proxy for all of my other systems. This way I only have to manage my certificates in one place. Using docker private networks also allows me to ensure that the systems are only accessed via nginx.

I’m going to use a few variables in the commands below to simplify their re-use.

# This is the private network name.
export DOCKER_NETWORK=marshians

# This is our local DNS server.
export LOCAL_DNS=192.168.0.1

# These are the cert/key files for our certificates.
export CERT=/path/to/your/cert.crt
export KEY=/path/to/your/key.key


# Docker Private Networking

Once again, I did all of this on a single Arch Linux server, so not all of the commands may be identical in your case. That being said, the first thing we need to do is set up a private network that nginx can use to communicate with all the other systems. With this in place, none of the other services need to expose any ports to the outside world, and they won’t be accessible except through nginx.

docker network create $DOCKER_NETWORK


You can read more about networks in the Docker documentation.

# Docker DNS Resolution

In my case, I am using local DNS to resolve my host names and I’m referring to those host names in my nginx config. If you are doing the same, you’ll need to tell docker to use a different DNS server when it starts up because it defaults to using public DNS servers that won’t know anything about your internal DNS. I did it this way.

sudo -s
cat >/etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd:// --dns $LOCAL_DNS
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
^D
exit
sudo systemctl restart docker
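
With the daemon restarted, a quick way to confirm that containers on the private network actually see your local DNS server is to run a throwaway container and look at its resolver setup. This is just a sketch; busybox is only used here because it’s a tiny image with the right tools.

```shell
# The container's resolv.conf should now point at $LOCAL_DNS.
docker run --rm --net=$DOCKER_NETWORK busybox cat /etc/resolv.conf

# And internal names should resolve through it.
docker run --rm --net=$DOCKER_NETWORK busybox nslookup git.themarshians.com
```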


Note that you’ll have to replace $LOCAL_DNS here yourself. Basically, all I’m doing is taking the existing service file and adding the --dns flag when starting the daemon.

# Nginx Initialization

At this point, we can now get started with nginx. The first thing we need to do is create a data volume to store our configuration. With that done, we can start up nginx to load the data volume with the default configurations and then stop it so we can do our own configuring.

docker volume create --name nginx-data
docker run -d --name=nginx -p 80:80 -p 443:443 --net=$DOCKER_NETWORK --restart=always -v nginx-data:/etc/nginx nginx
docker stop nginx


One thing to point out here is that we are using the --net flag to make sure nginx can talk to the other services we’ll create in there. One other thing we can do to set up our container is to have nginx redirect its logs to stdout and stderr. This sends the logs to docker so that docker logs works correctly. This can be done using symlinks.

sudo -s
cd /var/lib/docker/volumes/nginx-data/_data/
mkdir log
cd log
ln -s /dev/stdout access.log
ln -s /dev/stderr error.log
exit


In our configuration, we can now point to /etc/nginx/log/access.log or /etc/nginx/log/error.log to send logs to docker. You’ll see that later below.
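
The trick generalizes beyond docker; here’s a minimal sketch you can run in any Linux shell to see that writes to the symlinked file land on stdout instead of on disk (the temp directory is purely illustrative).

```shell
# Writes to the symlink go to this shell's stdout, not to a file on disk.
tmp=$(mktemp -d)
ln -s /dev/stdout "$tmp/access.log"
echo 'GET / 200' > "$tmp/access.log"
readlink "$tmp/access.log"   # /dev/stdout
rm -r "$tmp"
```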

# SSL/TLS Certificates

I thought I’d just take a minute here to talk about SSL/TLS certificates. In my case, I have a wildcard certificate, and the configurations below are based on that. They should also work if you have a multi-domain certificate. If you plan on using a separate certificate for each domain, then you’ll want to copy all of them into the container and use each file in the right place.

If you only have one certificate, you can set up a single SSL/TLS server configuration and then proxy directory locations. For example, /gogs could proxy to your gogs server and /drone could proxy to your drone server. If you aren’t interested in using SSL/TLS at all, you can simply ignore those parts and configure nginx just to proxy.

I believe it’s possible to use Let’s Encrypt for this, but I haven’t taken the time to do it because I still have a valid wildcard certificate. When it expires, I plan on setting that up and I’ll write about that as well.

# Nginx Configuration

OK, let’s get to configuring. First, we need to copy over our certificate and key. I also make my own dhparam file.

sudo -s
cp $CERT /var/lib/docker/volumes/nginx-data/_data
cp $KEY /var/lib/docker/volumes/nginx-data/_data
openssl dhparam -out /var/lib/docker/volumes/nginx-data/_data/dhparam.pem 2048
exit
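
It’s worth a quick sanity check that the certificate and key actually pair up before nginx ever loads them. For RSA certificates, the modulus digests must match; this uses the same $CERT and $KEY variables from above.

```shell
# If these two digests differ, the cert and key don't belong together.
openssl x509 -noout -modulus -in $CERT | openssl md5
openssl rsa  -noout -modulus -in $KEY  | openssl md5
```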


With those in place, we can now get to the actual configuration. I’ll start with two configuration snippets I use to keep the main configuration simple.

sudo -s
cat >/var/lib/docker/volumes/nginx-data/_data/proxy_params
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
^D
cat >/var/lib/docker/volumes/nginx-data/_data/ssl_params
ssl_certificate /etc/nginx/star.themarshians.com.crt;
ssl_certificate_key /etc/nginx/star.themarshians.com.key;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /etc/nginx/dhparam.pem;
^D
exit


The proxy_params file is used to set up proxying and the ssl_params file is used for the SSL/TLS configuration. Since all of the servers use the same parameters, using these files keeps things cleaner. Now for the main configuration.

sudo -s
cat >/var/lib/docker/volumes/nginx-data/_data/nginx.conf
user  nginx;
worker_processes  1;

error_log  /etc/nginx/log/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /etc/nginx/log/access.log  main;

    sendfile           on;
    keepalive_timeout  65;

    gzip  on;

    server {
        listen 80;
        server_name *.themarshians.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name git.themarshians.com;
        access_log /etc/nginx/log/access.log;
        error_log /etc/nginx/log/error.log;

        include /etc/nginx/ssl_params;

        location / {
            include /etc/nginx/proxy_params;
            proxy_pass http://gogs:3000;
        }
    }

    server {
        listen 443 ssl;
        server_name drone.themarshians.com;
        access_log /etc/nginx/log/access.log;
        error_log /etc/nginx/log/error.log;

        include /etc/nginx/ssl_params;

        location / {
            include /etc/nginx/proxy_params;
            proxy_pass http://drone:8000;
            proxy_redirect http://drone:8000 https://drone.themarshians.com;
        }
    }
}
^D
exit


Most of this stuff is fairly standard. In the first server section I listen on port 80 and redirect any traffic there to the HTTPS equivalent. The second is for my gogs server and the third is for my drone server.

Each of the proxies has a unique server_name to which my local DNS resolves. Within the location section of each, the proxy_pass value is the URL to which I want to direct traffic. The names gogs and drone are the docker names of the other containers. In other posts I’ll describe setting up those services, but the gist of it is that when you run a container in docker with the --name flag, all other systems within the same network can refer to it by that name. Since all of these containers are within our private network, we can use these names and don’t have to worry about IP addresses changing.
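
Concretely, that’s all a backend needs: a name and the network. A sketch for the gogs container (the image and its setup details are covered in other posts, so treat this as illustrative):

```shell
# No -p flags: gogs is only reachable from inside the private network,
# where nginx reaches it as http://gogs:3000.
docker run -d --name=gogs --net=$DOCKER_NETWORK gogs/gogs
```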

To get drone working, I had to include a proxy_redirect value, but otherwise the server configurations are the same. You could add as many of them as you want for other services. Perhaps you have a private wiki or some private cloud software. Adding them to the configuration is trivial. Once done, start up the container.

docker start nginx
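
Once it’s running, a couple of quick checks help, assuming curl on a machine that resolves these names via the local DNS: the plain HTTP server should answer with a redirect, and the HTTPS side should proxy through to gogs. Config errors show up in docker logs thanks to the log symlinks.

```shell
docker logs nginx                                    # startup/config errors land here
curl -sI  http://git.themarshians.com  | head -n 1   # expect a 301 redirect
curl -skI https://git.themarshians.com | head -n 1   # expect a 200 from gogs
```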


# Conclusion

Using nginx as a proxy for all your services is a great idea! You can manage valid certificates for all of them in a single place. Add docker private networks into the mix and you can ensure that the systems are only accessed through the proxy.