This is a fork of jwilder's jwilder/nginx-proxy. The only change I made was adding LDAP auth support. I borrowed a lot from h3nrik/nginx-ldap, but I took a different approach to building the LDAP auth module: I download the source deb for the same nginx version jwilder uses, add the LDAP auth module, then build and install the new deb.
My goal is to harmonize with jwilder's project.
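The rebuild roughly follows the standard Debian source-package workflow, sketched below. This is an illustration of the approach rather than the exact commands baked into this image; the module path is a placeholder.
$ apt-get source nginx            # fetch the source package for the pinned nginx version
$ cd nginx-*/
# edit debian/rules to append --add-module=/path/to/nginx-auth-ldap to the configure flags
$ dpkg-buildpackage -b -uc -us    # rebuild the binary packages with the module compiled in
$ dpkg -i ../nginx*.deb           # install the patched deb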
To configure LDAP auth, I added the LDAP server configuration to the /etc/nginx/proxy.conf file, then created a vhost.d/default_location file containing auth_ldap "Forbidden"; auth_ldap_servers ldapserver;
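A minimal sketch of those two pieces, assuming the nginx-auth-ldap module's directives and a made-up ldap.example.com directory (adjust the url, binddn, and password for your environment):
# appended to /etc/nginx/proxy.conf
ldap_server ldapserver {
    url ldap://ldap.example.com:389/ou=people,dc=example,dc=com?uid?sub?(objectClass=person);
    binddn "cn=readonly,dc=example,dc=com";
    binddn_passwd secret;
    require valid_user;
}
# /etc/nginx/vhost.d/default_location
auth_ldap "Forbidden";
auth_ldap_servers ldapserver;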
TODO:
- Clean up the files that were downloaded
- Remove dev tools
- add sha checking on downloads that don't come from apt
- Modify nginx template to check for an ldap-auth file.
- Modify nginx template to not use ldap-auth unless the vhost is using SSL (maybe create an override for this)
Below is the original README from jwilder's project.
nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
See Automated Nginx Reverse Proxy for Docker for why you might want to use this.
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
The containers being proxied must expose the port to be proxied, either by using the EXPOSE directive in their Dockerfile or by using the --expose flag to docker run or docker create.
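For example, either of the following makes port 8080 available for proxying (the port and image name here are just placeholders):
# in the backend image's Dockerfile
EXPOSE 8080
# or at run time
$ docker run -e VIRTUAL_HOST=foo.bar.com --expose 8080 my-backend-image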
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to the container with that VIRTUAL_HOST env var set.
You can also run nginx-proxy with Docker Compose:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    container_name: whoami
    environment:
      - VIRTUAL_HOST=whoami.local
$ docker-compose up
$ curl -H "Host: whoami.local" localhost
I'm 5b129ab83266
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
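For example, to proxy to a backend listening on port 8080 (port chosen here for illustration):
$ docker run -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 ...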
If you need to support multiple virtual hosts for a container, you can separate each entry with commas. For example, foo.bar.com,baz.bar.com,bar.com and each host will be set up the same. You can also use wildcards at the beginning and the end of the host name, like *.bar.com or foo.bar.*. Or even a regular expression, which can be very useful in conjunction with a wildcard DNS service like xip.io: using ~^foo\.bar\..*\.xip\.io will match foo.bar.127.0.0.1.xip.io, foo.bar.10.0.2.2.xip.io, and all other given IPs. More information about this topic can be found in the nginx documentation about server_names.
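For example, to serve the same container under the three hosts mentioned above:
$ docker run -e VIRTUAL_HOST=foo.bar.com,baz.bar.com,bar.com ...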
With the addition of overlay networking in Docker 1.9, your nginx-proxy container may need to connect to backend containers on multiple networks. By default, if you don't pass the --net flag when your nginx-proxy container is created, it will only be attached to the default bridge network. This means that it will not be able to connect to containers on networks other than bridge.
If you want your nginx-proxy container to be attached to a different network, you must pass the --net=my-network option in your docker create or docker run command. At the time of this writing, only a single network can be specified at container creation time. To attach to other networks, you can use the docker network connect command after your container is created:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro \
--name my-nginx-proxy --net my-network jwilder/nginx-proxy
$ docker network connect my-other-network my-nginx-proxy
In this example, the my-nginx-proxy container will be connected to my-network and my-other-network and will be able to proxy to other containers attached to those networks.
If you would like to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
If you would like to connect to a uWSGI backend, set VIRTUAL_PROTO=uwsgi on the backend container. Your backend container should then listen on a port rather than a socket and expose that port.
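For example, to proxy over HTTPS to a backend listening on port 8443 (port chosen here for illustration):
$ docker run -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PROTO=https -e VIRTUAL_PORT=8443 ...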
To set the default host for nginx, use the env var DEFAULT_HOST=foo.bar.com, for example:
$ docker run -d -p 80:80 -e DEFAULT_HOST=foo.bar.com -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
nginx-proxy can also be run as two separate containers using the jwilder/docker-gen image and the official nginx image.
You may want to do this to prevent having the docker socket bound to a publicly exposed container service.
You can demo this pattern with docker-compose:
$ docker-compose --file docker-compose-separate-containers.yml up
$ curl -H "Host: whoami.local" localhost
I'm 5b129ab83266
To run nginx-proxy as a separate container, you'll need to have nginx.tmpl on your host system.
First start nginx with a volume:
$ docker run -d -p 80:80 --name nginx -v /tmp/nginx:/etc/nginx/conf.d -t nginx
Then start the docker-gen container with the shared volume and template:
$ docker run --volumes-from nginx \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
-v $(pwd):/etc/docker-gen/templates \
-t jwilder/docker-gen -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
Finally, start your containers with VIRTUAL_HOST environment variables.
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
SSL is supported using single host, wildcard and SNI certificates using naming conventions for certificates or optionally specifying a cert name (for SNI) as an environment variable.
To enable SSL:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The contents of /path/to/certs should contain the certificates and private keys for any virtual hosts in use. The certificate and keys should be named after the virtual host with a .crt and .key extension. For example, a container with VIRTUAL_HOST=foo.bar.com should have a foo.bar.com.crt and foo.bar.com.key file in the certs directory.
If you have Diffie-Hellman groups enabled, the files should be named after the virtual host with a dhparam suffix and .pem extension. For example, a container with VIRTUAL_HOST=foo.bar.com should have a foo.bar.com.dhparam.pem file in the certs directory.
Wildcard certificates and keys should be named after the domain name with a .crt and .key extension. For example, VIRTUAL_HOST=foo.bar.com would use cert name bar.com.crt and bar.com.key.
If your certificate(s) supports multiple domain names, you can start a container with CERT_NAME=<name> to identify the certificate to be used. For example, a certificate for *.foo.com and *.bar.com could be named shared.crt and shared.key. A container running with VIRTUAL_HOST=foo.bar.com and CERT_NAME=shared will then use this shared cert.
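For example:
$ docker run -e VIRTUAL_HOST=foo.bar.com -e CERT_NAME=shared ...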
The SSL cipher configuration is based on the Mozilla nginx intermediate profile, which should provide compatibility with clients back to Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7. The configuration also enables HSTS and SSL session caches.
The default behavior for the proxy when port 80 and 443 are exposed is as follows:
- If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS is always preferred when available.
- If the container does not have a usable cert, a 503 will be returned.
Note that in the latter case, a browser may get a connection error as no certificate is available to establish a connection. A self-signed or generic cert named default.crt and default.key will allow a client browser to make an SSL connection (likely with a warning) and subsequently receive a 503.
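A throwaway self-signed default cert can be generated with openssl, for example (one-year validity; adjust the subject and paths as needed):
$ openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout /path/to/certs/default.key -out /path/to/certs/default.crt \
    -subj '/CN=default'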
To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the environment variable HTTPS_METHOD=noredirect (the default is HTTPS_METHOD=redirect). You can also disable the non-SSL site entirely with HTTPS_METHOD=nohttp. HTTPS_METHOD must be specified on each container for which you want to override the default behavior. If HTTPS_METHOD=noredirect is used, Strict Transport Security (HSTS) is disabled to prevent HTTPS users from being redirected by the client. If you cannot get to the HTTP site after changing this setting, your browser has probably cached the HSTS policy and is automatically redirecting you back to HTTPS. You will need to clear your browser's HSTS cache or use an incognito window / different browser.
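For example, to serve a host over both HTTP and HTTPS without the redirect:
$ docker run -e VIRTUAL_HOST=foo.bar.com -e HTTPS_METHOD=noredirect ...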
To secure a virtual host with basic authentication, create an htpasswd file named after its VIRTUAL_HOST value in the /etc/nginx/htpasswd/ directory (i.e. /etc/nginx/htpasswd/$VIRTUAL_HOST):
$ docker run -d -p 80:80 -p 443:443 \
-v /path/to/htpasswd:/etc/nginx/htpasswd \
-v /path/to/certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
You'll need apache2-utils on the machine where you plan to create the htpasswd file. Follow these instructions.
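For example, to create a password file for foo.bar.com with a user named someuser (both placeholders):
$ htpasswd -c /path/to/htpasswd/foo.bar.com someuser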
If you need to configure Nginx beyond what is possible using environment variables, you can provide custom configuration files on either a proxy-wide or per-VIRTUAL_HOST basis.
If you want to replace the default proxy settings for the nginx container, add a configuration file at /etc/nginx/proxy.conf. A file with the default settings would look like this:
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
NOTE: If you provide this file it will replace the defaults; you may want to check the .tmpl file to make sure you have all of the needed options.
NOTE: The default configuration blocks the Proxy HTTP request header from being sent to downstream servers. This prevents attackers from using the so-called httpoxy attack. There is no legitimate reason for a client to send this header, and there are many vulnerable languages / platforms (CVE-2016-5385, CVE-2016-5386, CVE-2016-5387, CVE-2016-5388, CVE-2016-1000109, CVE-2016-1000110, CERT-VU#797896).
To add settings on a proxy-wide basis, add your configuration file under /etc/nginx/conf.d using a name ending in .conf. This can be done in a derived image by creating the file in a RUN command or by COPYing the file into conf.d:
FROM jwilder/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 100m;'; \
} > /etc/nginx/conf.d/my_proxy.conf
Or it can be done by mounting in your custom configuration in your docker run command:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
To add settings on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d. Unlike in the proxy-wide case, which allows multiple config files with any name ending in .conf, the per-VIRTUAL_HOST file must be named exactly after the VIRTUAL_HOST.
In order to allow virtual hosts to be dynamically configured as backends are added and removed, it makes the most sense to mount an external directory as /etc/nginx/vhost.d as opposed to using derived images or mounting individual configuration files.
For example, if you have a virtual host named app.example.com, you could provide a custom configuration for that host as follows:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/vhost.d:/etc/nginx/vhost.d:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/app.example.com
If you are using multiple hostnames for a single container (e.g. VIRTUAL_HOST=example.com,www.example.com), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/www.example.com
$ ln -s /path/to/vhost.d/www.example.com /path/to/vhost.d/example.com
If you want most of your virtual hosts to use a default single configuration and then override on a few specific ones, add those settings to the /etc/nginx/vhost.d/default file. This file will be used on any virtual host which does not have a /etc/nginx/vhost.d/{VIRTUAL_HOST} file associated with it.
To add settings to the "location" block on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d just like the previous section, except with the suffix _location.
For example, if you have a virtual host named app.example.com and you have configured a proxy_cache my-cache in another custom file, you could tell it to use a proxy cache as follows:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/vhost.d:/etc/nginx/vhost.d:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ { echo 'proxy_cache my-cache;'; echo 'proxy_cache_valid 200 302 60m;'; echo 'proxy_cache_valid 404 1m;'; } > /path/to/vhost.d/app.example.com_location
If you are using multiple hostnames for a single container (e.g. VIRTUAL_HOST=example.com,www.example.com), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
$ { echo 'proxy_cache my-cache;'; echo 'proxy_cache_valid 200 302 60m;'; echo 'proxy_cache_valid 404 1m;'; } > /path/to/vhost.d/www.example.com_location
$ ln -s /path/to/vhost.d/www.example.com_location /path/to/vhost.d/example.com_location
If you want most of your virtual hosts to use a single default location block configuration and then override on a few specific ones, add those settings to the /etc/nginx/vhost.d/default_location file. This file will be used on any virtual host which does not have a /etc/nginx/vhost.d/{VIRTUAL_HOST}_location file associated with it.
Before submitting pull requests or issues, please check github to make sure an existing issue or pull request is not already open.
To run tests, you'll need to install bats 0.4.0.
make test