The Problem
Ever since my Raspberry Pi decided to release its magic smoke about a year ago, I've been running Pi-hole on my home servers for network-wide ad blocking and DNS. It works great, but there's always that nagging worry: what happens when I need to reboot the server?
Every time homelab server updates come around, I find myself mentally preparing to push 1.1.1.1 via DHCP to keep the family online while I troubleshoot why the DNS server didn't come back up cleanly. There's nothing quite like the pressure of an entire household losing internet access because dad decided to tinker on a Sunday afternoon.
The Solution: High Availability DNS
This afternoon I finally tackled the problem properly by implementing high availability (HA) for my Pi-hole setup. The solution uses two Proxmox hosts running Pi-hole instances with keepalived handling automatic failover between them.
Here's how it works:
- Two Pi-hole instances running on separate Proxmox hosts, each more than capable of handling all DNS traffic
- Keepalived creates a virtual IP address that floats between the two servers
- Automatic health checking ensures the backup takes over within seconds if the primary fails
- Clients only need to know about one DNS server - the virtual IP handles the rest
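There's no client-side magic involved: the VIP gets handed out like any other DNS server address, in my case via DHCP. As a minimal sketch, a statically-configured Linux client would just point at it directly (assuming nothing like systemd-resolved owns /etc/resolv.conf on that machine):

    # Point a statically-configured client at the virtual IP
    echo "nameserver 192.168.1.2" | sudo tee /etc/resolv.conf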
Implementation Details
I used the Community Scripts Pi-hole LXC installer script to quickly deploy Pi-hole containers on both hosts. The script handles all the container setup and Pi-hole configuration, making deployment consistent and repeatable.
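If you haven't used the Community Scripts before, they're run from each Proxmox host's shell and walk you through container creation interactively. From memory the Pi-hole one looks something like this, but grab the current one-liner from the project page rather than trusting me:

    # Run on each Proxmox host; creates and configures a Pi-hole LXC
    bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/pihole.sh)"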
After getting both Pi-hole instances running and configured with the desired blocklists, I installed the keepalived package on each container and configured them to share a virtual IP address.
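On the Debian-based containers the script produces, that's just a package install; the configuration lives at /etc/keepalived/keepalived.conf:

    # Run inside each Pi-hole container
    apt update && apt install -y keepalived
    # Drop the config into /etc/keepalived/keepalived.conf, then:
    systemctl enable --now keepalived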
The Keepalived Configuration
The magic happens in this surprisingly simple keepalived.conf:
vrrp_script dns_healthcheck {
    script "/usr/bin/dig @127.0.0.1 pi.hole || exit 1"  # Dig pi.hole, return 1 if failed
    interval 2  # check every 2 seconds
    fall 2      # require 2 failures for KO
    rise 2      # require 2 successes for OK
}

vrrp_instance pihole {
    state BACKUP                 # Default to backup (peer defaults to MASTER)
    interface eth0
    virtual_router_id 30
    priority 150
    advert_int 1
    unicast_src_ip 192.168.1.22  # My IP
    unicast_peer {
        192.168.1.21             # Peer IP
    }
    authentication {
        auth_type PASS
        auth_pass <password>     # put a password here
    }
    virtual_ipaddress {
        192.168.1.2/24           # The Virtual IP Listener
    }
    track_script {
        dns_healthcheck          # Check script
    }
}
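For completeness, the peer at 192.168.1.21 carries the mirror image of this file: identical vrrp_script, virtual_router_id, password, and VIP, with the unicast addresses swapped and the state and priority flipped. A sketch (the priority of 200 is my assumption; anything above the backup's 150 works):

    vrrp_instance pihole {
        state MASTER                 # This node holds the VIP when both are healthy
        interface eth0
        virtual_router_id 30         # Must match the peer
        priority 200                 # Higher than the backup's 150
        advert_int 1
        unicast_src_ip 192.168.1.21  # My IP
        unicast_peer {
            192.168.1.22             # Peer IP
        }
        authentication {
            auth_type PASS
            auth_pass <password>     # Same password as the peer
        }
        virtual_ipaddress {
            192.168.1.2/24           # The same virtual IP
        }
        track_script {
            dns_healthcheck          # Same vrrp_script block as above
        }
    }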
How it works:
- My DNS clients point to 192.168.1.2 (the virtual IP)
- The two Pi-hole containers live at 192.168.1.21 and 192.168.1.22
- Keepalived runs health checks every 2 seconds using dig pi.hole
- If the active server fails its health check twice, the backup automatically takes over the virtual IP
- Failover typically happens in under 5 seconds
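You can watch all of this from a shell. These commands (using my interface name and addresses) show which node currently holds the VIP and confirm that queries resolve through it:

    # On either container: the current holder of the VIP will list 192.168.1.2
    ip addr show dev eth0 | grep "192.168.1.2"

    # From any client: resolve through the virtual IP
    dig @192.168.1.2 pi.hole +short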
Real-World Testing
I've been stress-testing this setup by randomly killing machines in various creative ways. The peer server consistently responds with "okay, I got it" and takes over seamlessly. At most there are a few seconds of downtime, barely noticeable during normal browsing.
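For a gentler test than pulling power, stopping keepalived on the active node triggers the same failover; a sketch of the drill:

    # On the node currently holding the VIP
    systemctl stop keepalived

    # From a client: within a few seconds this resolves via the peer
    dig @192.168.1.2 pi.hole +short

    # Bring it back; the higher-priority node preempts and reclaims the VIP
    systemctl start keepalived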
The same keepalived setup is also handling HA for my HAProxy frontends (including the one that routed the traffic for this blog post), so I'm getting double value from learning this technology.
Why Did I Do This?
- For the family: No more internet outages during server maintenance
- For learning: Hands-on experience with enterprise HA networking
- For reliability: DNS is critical infrastructure - it should be redundant
Beyond Pi-hole
While this post focuses on DNS, the same keepalived approach works for any service that can run active/passive. I'm using it for web services, and it could easily extend to other critical home infrastructure like network monitoring or home automation hubs.
The configuration is simple enough to understand and maintain, but robust enough to provide enterprise-grade reliability. Plus, since it's all running on standard Linux containers, it's easy to back up, version control, and replicate.
Finally: no more Sunday afternoon internet outages. This homelab project has grown up into proper homeprod infrastructure.