Edge Proxy and Load Balancing
TL;DR: The “front door” for all our web services—the system that receives every incoming request from the internet and routes it to the right place. If one server fails, another takes over automatically within seconds, and users never notice. Runs entirely on our own hardware instead of renting load balancers from cloud providers.
Problem
I needed high availability and secure routing at the edge without relying on cloud load balancers. A single HAProxy instance would work, but it’s a single point of failure—if that box dies, everything behind it becomes unreachable. Failover had to be automatic and fast (seconds, not minutes), and I only wanted HTTPS exposed publicly.
How It Works
The edge layer runs as a pair of nodes behind a virtual IP managed by Keepalived (VRRP). If the primary node fails health checks, the backup takes over the VIP within 3–5 seconds. Clients keep connecting to the same IP and never know failover happened. VRRP is simple, well-understood, and doesn’t require external coordination—the constraint is that both nodes need to be on the same L2 network segment.
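The VRRP pairing described above can be sketched as a minimal keepalived.conf for the primary node. The interface name, VIP, and tracking script are illustrative placeholders, not taken from the actual deployment; the backup node would use `state BACKUP` and a lower priority.

```
vrrp_script chk_haproxy {
    script "/usr/bin/pgrep haproxy"   # demote this node if the proxy dies
    interval 2
    fall 2
}

vrrp_instance EDGE_VIP {
    state MASTER            # backup node: state BACKUP
    interface eth0          # both nodes must share this L2 segment
    virtual_router_id 51
    priority 150            # backup node: lower priority, e.g. 100
    advert_int 1            # 1s advertisements; peer loss detected in a few seconds
    authentication {
        auth_type PASS
        auth_pass <secret>
    }
    virtual_ipaddress {
        203.0.113.10/24     # the shared VIP clients connect to
    }
    track_script {
        chk_haproxy
    }
}
```

With 1-second advertisements and a default skew, the backup claims the VIP within the 3–5 second window mentioned above.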
I use Nginx for TLS termination and HAProxy for load balancing and routing because each tool excels at its respective job. Only HTTPS is exposed publicly; HTTP is open solely for Let’s Encrypt HTTP-01 certificate challenges. For services that need TLS passthrough rather than edge termination, I route based on SNI (Server Name Indication), which lets me multiplex multiple services on port 443 by hostname without exposing additional ports.
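The SNI multiplexing on port 443 can be illustrated with an haproxy.cfg sketch. Hostnames and backend addresses are placeholders; the real routing rules are assumed rather than shown in the source.

```
frontend https_in
    bind *:443
    mode tcp                          # TCP mode: TLS is not terminated here
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    # Route by SNI: passthrough services keep their own certificates
    use_backend bk_passthrough if { req_ssl_sni -i vpn.example.com }
    default_backend bk_nginx_tls      # everything else: Nginx terminates TLS

backend bk_nginx_tls
    mode tcp
    server nginx1 127.0.0.1:8443 check

backend bk_passthrough
    mode tcp
    server svc1 10.0.0.20:443 check
```

Inspecting only the ClientHello keeps the edge out of the passthrough service's TLS session entirely, so end-to-end encryption is preserved for those backends.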
Health checks verify actual service availability—not just port responsiveness—so unhealthy backends are removed from rotation quickly. If some backends fail, traffic concentrates on the remaining healthy ones and alerts fire for capacity concerns.
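An application-level health check of this kind might look as follows in HAProxy; the `/healthz` path and server addresses are hypothetical.

```
backend bk_app
    mode http
    option httpchk GET /healthz       # hit an application endpoint, not just the port
    http-check expect status 200
    default-server inter 2s fall 3 rise 2
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

With `fall 3` at a 2-second interval, a backend that stops answering its health endpoint is out of rotation in roughly six seconds, even if its TCP port still accepts connections.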
All edge traffic is logged and forwarded to the central log pipeline for audit trails and incident investigation. Configuration is managed through Ansible so changes are versioned and reproducible.
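A versioned, reproducible config change of the sort described could be expressed as an Ansible task like the one below; the template and handler names are assumptions, not taken from the actual playbooks.

```yaml
- name: Deploy HAProxy configuration
  ansible.builtin.template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    validate: haproxy -c -f %s    # refuse to install a config that fails syntax check
  notify: Reload haproxy
```

The `validate` step matters at the edge: a broken config is rejected before it replaces the live one, so a bad push cannot take down the front door.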
```mermaid
flowchart TB
    Internet["Internet / WAN"]
    VIP["Virtual IP (VRRP)<br/>- Keepalived manages failover<br/>- Primary/backup node pair"]
    Edge["Edge Proxy (HAProxy + Nginx)<br/>- Nginx: TLS termination for HTTPS<br/>- HAProxy: Load balancing, routing<br/>- HTTPS-only ingress<br/>- SNI routing for TLS passthrough<br/>- Health checks for backends"]
    Backend["Backend Services<br/>- Kubernetes ingress<br/>- Standalone services"]
    Internet --> VIP --> Edge --> Backend
```
Outcome
The edge layer handles node failures and traffic spikes without manual intervention. Public exposure is limited to HTTPS with a minimal exception for certificate challenges. It’s “boring” in the best sense—it does its job without requiring constant attention, and when something does happen, the logs show exactly what traffic hit the infrastructure and when.
Technologies
HAProxy, Keepalived (VRRP), Nginx, WireGuard (for secure tunnels), Ansible (configuration management), cert-manager (certificates).
Related
- On-Premise Kubernetes Platform — One of the primary backends this edge layer fronts
- Distributed Wazuh SIEM Platform — Security monitoring that includes edge traffic