Using DNS as a cheap failover and load-balancer

I’m currently testing the upcoming version of Mirrorbits with clustering support, to finally achieve high availability for the VideoLAN downloads infrastructure.

We’re now running two servers to power the downloads. Today I’ve been struggling to find a simple and cheap way to distribute the load between the two servers while keeping some kind of failover.

The two servers are located in two different datacenters under the same AS, but without physical access or any service that would allow us to add a load-balancer. We have a third server, in a third datacenter, running the main website and a few other services like a redis-sentinel. One option would have been to add a software load-balancer like HAProxy or Nginx on this machine, but I didn’t want to add latency or create a single point of failure.

Our infrastructure is quite stable (we only have a couple of issues each year), so we don’t need to over-engineer it. After some deep thinking about the DNS architecture I came up with another solution. It’s a bit hackish but well suited for us: it’s cheap, simple and resilient, and it has no SPOF, unlike other more commonly used systems. Still, it isn’t perfect either: in case of a server failure some users may experience a downtime of up to one minute, which falls well within our acceptable range.

DNS Delegation + Round-robin

We have three servers involved: the main DNS server (NS0) and the two download servers (dc2 and dc3).

NS0 delegates the zone to two other DNS servers, one running on each of the get instances (here dc2 and dc3). So we end up with one main DNS server, plus one on each server that you want to fail over to in a round-robin fashion.

; Assign a name to the servers
get.dc3 IN A
get.dc2 IN A

; Delegate the get.v.o subdomain to each server
get     IN NS get.dc2
        IN NS get.dc3
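Filled in with placeholder names and addresses (everything below is hypothetical, using the 192.0.2.0/24 and 198.51.100.0/24 documentation ranges, since the real names and IPs are not shown above), the records in the parent zone might look like this:

```dns
; A records for the two download servers
; (placeholder addresses from the documentation ranges)
get.dc2 IN A  192.0.2.1
get.dc3 IN A  198.51.100.1

; Delegate the "get" subdomain to each of them
get     IN NS get.dc2
        IN NS get.dc3
```

The key point is the two NS records for the same name: resolvers receive both and can fall back from one to the other.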

On the two download servers we set up a DNS server as well and declare the zone in their /etc/bind/named.conf.local.

zone "" {
    type master;
    file "/etc/bind/zones/";
};
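With a placeholder zone name (get.example.org and its file path are assumptions, as the real names are redacted above), the stanza would read:

```dns
// /etc/bind/named.conf.local
zone "get.example.org" {
    type master;
    file "/etc/bind/zones/db.get.example.org";
};
```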

The zone file is pretty simple: each server returns its own IP address.

/etc/bind/zones/ on

$ORIGIN
$TTL 60
  IN SOA (
        2015042902 ; serial
        3600       ; refresh
        3600       ; retry
        3600000    ; expire
        3600 )     ; minimum
  IN NS
  IN A

/etc/bind/zones/ on

$ORIGIN
$TTL 60
  IN SOA (
        2015042902 ; serial
        3600       ; refresh
        3600       ; retry
        3600000    ; expire
        3600 )     ; minimum
  IN NS
  IN A
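As a complete sketch with hypothetical names filled in (get.example.org, the hostmaster address and 192.0.2.1 are all placeholders; the real values are not shown above), the zone file on dc2 might look like this, with dc3 serving the same zone but answering with its own address:

```dns
; /etc/bind/zones/db.get.example.org on dc2
$ORIGIN get.example.org.
$TTL 60
@   IN SOA get.dc2.example.org. hostmaster.example.org. (
        2015042902 ; serial
        3600       ; refresh
        3600       ; retry
        3600000    ; expire
        3600 )     ; minimum
    IN NS get.dc2.example.org.
    IN A  192.0.2.1
```

The low $TTL (60 seconds) is what bounds the failover window mentioned earlier.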

What this achieves is that a DNS request for the subdomain is referred to two different name servers (on dc2 and dc3); the resolver then queries them and uses the first one to answer. If one of them is down, the other one will obviously answer first and handle all the incoming requests. And voilà! You get cheap and simple load-balancing and failover for your infrastructure without any external component, hardware or service, just the DNS protocol.

There are a few drawbacks, though. Some ISP-hosted DNS caches rewrite the TTL returned by the server because they consider 60 seconds too low, and increase it to something much higher (usually around 15 minutes); clients behind those caches may end up trying to reach a server that is currently down.

Another issue is that even though name resolution is quite fast (it’s UDP after all), it still takes some time to resolve all the indirections, especially when a round-trip takes a few hundred milliseconds. For this specific situation I’ve found a small workaround, which consists of adding a <link> tag to the <head> of the website, allowing the web browser to prefetch the DNS resolution before it is actually needed.

For instance, this line is present on all pages of the website, because most visitors will eventually download one of the projects during their visit.
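The exact line didn’t survive publication here, but a DNS prefetch hint generally looks like the following (the host name is a placeholder):

```html
<link rel="dns-prefetch" href="//get.example.org">
```

The browser resolves the name as soon as it parses the page, so the delegation chain is already cached by the time the user clicks a download link.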

The last problem I’ve identified is that if your web server is stopped or unavailable for any reason, incoming requests will still be dispatched to that server until you manually stop its DNS server. But this could easily be solved with a small script or a monitoring system.
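A minimal sketch of such a watchdog could look like this (the URL and the bind9 service name are assumptions; adapt them to your setup), run every minute from cron:

```python
import subprocess
import urllib.request


def web_server_alive(url, timeout=5.0):
    """Return True if the web server answers an HTTP request successfully."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        # Connection refused, timeout, DNS failure... all count as down.
        return False


def watchdog(url="http://localhost/"):
    # When the local web server is down, stop the local DNS server so
    # this node stops advertising itself in the round-robin.
    # "bind9" is a hypothetical service name; adjust for your distro.
    if not web_server_alive(url):
        subprocess.run(["systemctl", "stop", "bind9"], check=False)
```

With the 60-second TTL above, this keeps the worst-case failover window around the TTL plus the check interval.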


I’m no expert in DNSSEC, so I don’t know whether this setup will eventually break it. If anyone has enough knowledge, please share your opinion in the comments.


I’ve never seen something similar used in production at such a scale, so you should probably take this with a grain of salt. It’s only been a day since it went into production, and even though I’ve had no reports of any problem so far, we’ll need a few more weeks before considering this solution to be working perfectly. I’ll try to keep this post up to date if I decide to continue the experiment or, on the contrary, switch to another, more conventional system.

