Hosting multiple web servers behind a single IP address

Virtual hosts for a website are a thing. One webserver can host multiple websites. They can all be on the same IP address, different IP addresses, different ports, etc.

This post is about using a proxy service. Before I started with this solution, at home I hosted every website on the same server. My firewall would redirect incoming ports 80 and 443 to my webserver, and Nginx/Apache would take care of the rest.

As things evolved, I started to have multiple websites on multiple webservers. For example, the dev, test, stage, and devgit hosts for FreshPorts are all hosted on the slocum server.

By design, each of these hosts is in a different FreeBSD jail. Each jail contains content installed to /usr/local/www/freshports. While I could host each of those websites on the same server, it is easier not to. Thinking about it now, installing them all in one jail would defeat the purpose of that jail: to test the code before it goes to production. Each environment must replicate the production environment for testing purposes.

The solution I picked: proxy

In this post:

  • FreeBSD 12.1
  • Nginx 1.18.0
  • Bind 9.16.6
  • split dns – optional

Why not use different ports?

I could redirect port 8080 to one server, port 8081 to another server, etc. You could then browse to the appropriate host:port combination and it would work. I find that awkward and lacking in user friendliness. It also isn’t how the websites will be used in production.


What is a proxy?

In short, a proxy is “a server application or appliance that acts as an intermediary for requests from clients seeking resources from servers that provide those resources.[1] A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.”

A common use of proxies is for security, and they can be implemented at either the server or the client.

In my example, I’m using Nginx, because it seemed to be the easiest to implement and was recommended to me at the time.

The servers in this example

In this example, the services (and their hostnames) are:

  1. firewall (bast) is at
  2. proxy (serpico) is at
  3. webserver ( is at
  4. resolves to

I am using split DNS here, but you don’t have to. If you don’t want to, you can use the old approach I used before today. That appears in a section close to the end of this post.

Those are not the actual IP addresses in use, but they are good examples.

On the proxy: direct incoming traffic to the proxy

The firewall redirects all incoming port 80 and port 443 to the proxy. Those rules look like this:

rdr on re1 inet proto tcp from any to port = http  -> <SERPICO> round-robin
rdr on re1 inet proto tcp from any to port = https -> <SERPICO> round-robin

The destination address in those rules is my public IP address (as this is documentation, that IP is from the TEST-NET-2 address range).

Although it says round-robin, SERPICO resolves to a single IP address in the PF rules/aliases.
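For completeness, that alias can be defined as a table in pf.conf. This is a hedged sketch; the address is a placeholder from an RFC 1918 range, not my actual configuration:

# hypothetical /etc/pf.conf fragment; substitute your proxy's internal address
table <SERPICO> { }

Because the table holds a single address, the round-robin keyword on the rdr rules has nothing to rotate over.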

nginx configuration on the proxy

This section describes what the proxy does, and how.

nginx.conf contains this line:

[dan@serpico:/usr/local/etc/nginx] $ tail -2 nginx.conf
    include /usr/local/etc/nginx/includes/*.conf;

This pulls in all the *.conf files from the includes directory.

I like that only certain files are pulled in and not all files. This allows you to remove something from the configuration by renaming the file. Very simple. Very easy.

First website, on the proxy

This is the website which the user’s browser hits first. It looks like this:

[dan@serpico:/usr/local/etc/nginx/includes] $ cat
server {
  listen ssl http2;
  ssl_protocols TLSv1.2 TLSv1.1 TLSv1;


  error_log  /var/log/nginx/  info;
  access_log /var/log/nginx/ combined;

  ssl_certificate     /usr/local/etc/ssl/;
  ssl_certificate_key /usr/local/etc/ssl/;

  location  /  {


NOTE that the hostname specified by server_name above does not resolve to an IP address on this host. Nor does it have to.

I want to stress this point because it took me a while to figure that one out and it is an important concept to understand.

Just because the hostname resolves to another host does not mean this proxy can’t process incoming requests for it. I think my failure to exploit that fact gave rise to my original solution, which I will describe in a later section. It used an additional hostname and SSL certificate.
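To make the flow concrete, here is a hedged sketch of what a complete proxy-side vhost can look like. The hostname, certificate paths, and the upstream address are placeholders, not the actual values from my configuration:

server {
  listen 443 ssl http2;
  server_name;                    # need not resolve to this host

  ssl_certificate     /usr/local/etc/ssl/;
  ssl_certificate_key /usr/local/etc/ssl/;

  location / {
    proxy_set_header Host $http_host;            # preserve the name the client asked for
    proxy_pass;        # internal address of the webserver
  }
}

The essential pieces are server_name (so the proxy knows which vhost to match) and proxy_pass (so the request is handed on to the internal webserver).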

On the webserver: process the incoming connection

On the webserver, I have this nginx configuration:

[dan@devgit-nginx01:/usr/local/etc/nginx/includes] $ ls -l
total 1
lrwxr-xr-x  1 root  wheel  37 Aug  4 04:24 -> /usr/local/etc/freshports/vhosts.conf
lrwxr-xr-x  1 root  wheel  38 Nov 15 12:14 -> /usr/local/etc/freshsource/vhosts.conf

As you can see, I keep the actual configuration files over in /usr/local/etc/freshports and /usr/local/etc/freshsource.

What is inside?

[dan@devgit-nginx01:/usr/local/etc/nginx/includes] $ cat 
# As taken from
server {

  include "/usr/local/etc/freshsource/virtualhost-common.conf";

  return 301 https://$server_name$request_uri;
}

server {
  listen ssl http2;
  include "/usr/local/etc/freshsource/virtualhost-common.conf";
  include "/usr/local/etc/freshsource/virtualhost-common-ssl.conf";

  ssl_certificate     /usr/local/etc/ssl/;
  ssl_certificate_key /usr/local/etc/ssl/;

To avoid duplicating the same directives in multiple locations, I use virtualhost-common.conf for items which appear in both the :80 and :443 server sections.

Similarly, virtualhost-common-ssl.conf keeps the SSL directives in one file which can be updated separately without affecting the rest of the configuration. In fact, I think that file is the same for all my SSL hosts. I could centralize it and use one file for all vhosts, but I don’t.
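I haven’t shown virtualhost-common-ssl.conf here. As a rough idea, such a file typically contains directives along these lines (a hedged sketch of typical contents, not my actual file):

# hypothetical virtualhost-common-ssl.conf - typical contents only
ssl_protocols             TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache         shared:SSL:10m;
ssl_session_timeout       10m;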

Let’s look at the common directives:

[dan@devgit-nginx01:/usr/local/etc/nginx/includes] $ cat /usr/local/etc/freshsource/virtualhost-common.conf

  root          /usr/local/www/freshsource/www/;
  index         index.php index.html;

  error_log	/var/log/nginx/;
  access_log	/var/log/nginx/ combined;

  location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $request_filename;
    include fastcgi_params;
  }

Summary of the traffic

If you’re outside my home network:

  • the public hostname resolves to my public IP address
  • pfSense on my firewall redirects the incoming request to my proxy server
  • serpico fetches the page from the webserver, using an internal hostname which resolves only inside my network
  • the webserver supplies the page
  • serpico passes the results back to the browser client

If I’m on my home network/VPN:

  • the hostname resolves directly to the webserver’s internal IP address
  • the webserver supplies the page
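The differing answers come from split DNS. With BIND (listed at the top of this post), that can be done with views. This is a minimal hedged sketch; the zone name, networks, and zone file paths are placeholders:

// hypothetical named.conf fragment for split DNS
view "internal" {
    match-clients { 10.0.0.0/8; localhost; };
    zone "" {
        type master;
        file "internal/";   // here the name points at the webserver
    };
};

view "external" {
    match-clients { any; };
    zone "" {
        type master;
        file "external/";   // here the name points at the public IP
    };
};

Clients matching the internal view get the webserver’s address; everyone else gets the public address on the firewall.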

An aside – my original solution – split-dns not required

I think this section is technically accurate but I’m getting sleepy as I write it and I’m not 100% sure.

When I first started writing this blog post, I was using this entry on the proxy:

  location  /  {
    proxy_set_header Host $http_host;

If proxy_set_header is not specified, we get the default website from the webserver. Specifying this parameter always got me the right website.
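Putting those pieces together, the old-solution proxy block looked along these lines (a hedged reconstruction; the internal hostname is a placeholder):

location / {
  proxy_set_header Host $http_host;            # pass the original hostname to the backend
  proxy_pass;  # the extra internal hostname
}

With the Host header passed through, the backend matches the request to the correct vhost instead of serving its default website.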

When I was using this solution, the public hostname [at home] resolved to the IP address of the proxy. I also had a separate internal hostname which pointed at the webserver. The proxy would then pass requests along using that internal hostname.

I am not sure why I did it that way. I think it was because I wanted my internal usage to match the external usage; both would go through the proxy. As I was writing this, I changed the proxy entry to point directly at the webserver.

It might also have been an approach I used before I started using split DNS. If you’re not using split DNS, then this approach will work for you.

In the old solution, I set proxy_set_header so that the website sees the public hostname in the incoming headers, not the internal one.

On the webserver, I’d also have the cert and keys for that internal hostname.

The newly developed approach removes a DNS entry and a certificate for the extra internal hostname. However, even now as I look back at this, I see I could have done this with just the hostname and without a separate certificate. The proxy_set_header directive passes through the original hostname, and you can use the same certificate on both the proxy and the webserver. Strictly speaking, you don’t need a certificate on the webserver at all, as you could proxy from the proxy to http:// rather than https://, but I prefer the latter.
