Docker Services
TL;DR: Docker lets me run dozens of different applications—development tools, monitoring systems, home automation services—all isolated from each other on a single computer. Each application runs in its own “container” and can be updated, restarted, or replaced without affecting the others. It’s like having many small computers instead of one big messy one.
Docker is my go-to for hosting services in my homelab. I run a dedicated VM on Proxmox that hosts dozens of containerized services—everything from development tools to network utilities to home automation backends. This environment has been instrumental in shaping how I think about microservice architecture.
Why Docker (and Not Kubernetes)
I haven’t yet justified deploying a Kubernetes cluster on my relatively meager hardware. Kubernetes adds operational overhead that makes sense at scale but is overkill for a single-node homelab. Docker Compose gives me 90% of the benefits—declarative configuration, easy updates, network isolation—without the complexity of managing etcd, control planes, and worker nodes.
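The declarative configuration is the core of it. A minimal Compose file (service names and images here are illustrative, matching the monitoring stack mentioned below) looks like:

```yaml
# docker-compose.yml — a hypothetical minimal monitoring stack.
# Declarative: the file *is* the deployment; `docker compose up -d`
# reconciles running containers against it.
services:
  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - "3000:3000"        # only Grafana is exposed to the host
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge         # services on this network are isolated from other stacks
```

Updating a service is editing the image tag and re-running `docker compose up -d`; network isolation falls out of putting each stack on its own bridge network.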
The tradeoff is that I don’t get Kubernetes-native features like automatic failover, rolling deployments, or horizontal pod autoscaling. For a homelab where I’m the only user and downtime is acceptable, that tradeoff is fine.
What I Run
A sampling of services that run in this environment:
- Development: Code-server, container registries, build tools
- Network: Reverse proxies, DNS, VPN endpoints
- Monitoring: Prometheus, Grafana, container metrics
- Home automation: Supporting services for Home Assistant
Everything is defined in Docker Compose files, version-controlled, and backed up. Rebuilding the entire environment from scratch takes under an hour.
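The rebuild boils down to cloning the repo and bringing each stack up. A rough sketch, assuming one directory per stack (the repo URL and layout are hypothetical):

```shell
#!/bin/sh
# Hypothetical disaster-recovery flow: everything lives in git,
# so rebuilding is clone + compose up per stack.
set -e

git clone https://git.example.com/homelab/compose-stacks.git
cd compose-stacks

# Each subdirectory holds one stack's docker-compose.yml
for stack in */docker-compose.yml; do
  docker compose -f "$stack" up -d
done
```

Data volumes are restored separately from backups; the script only recreates the service definitions.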
What I’ve Learned
What Works Well in Containers
- Stateless services that can be stopped and restarted without data loss
- Services with simple storage requirements (bind mounts to the host, or small volumes)
- Horizontally scalable workloads where running multiple instances is straightforward
- Isolated environments where services shouldn’t see each other’s filesystems
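A sketch of the first two points combined, assuming an nginx reverse proxy (file paths are illustrative): the service holds no state of its own, and its only storage requirement is a read-only bind mount for config that lives in git.

```yaml
# Hypothetical stateless service: safe to stop, replace, or recreate
# at any time — nothing of value lives inside the container.
services:
  proxy:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      # Read-only bind mount: the config is version-controlled on the
      # host, and the container can't mutate it.
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```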
What Doesn’t Work Well
- Stateful databases with complex storage semantics—these need careful volume management and backup strategies
- Services with heavy I/O, where container networking or the overlay filesystem adds measurable overhead
- Anything requiring low-level hardware access (though Docker has improved here)
- Tightly coupled services that share state in ways that assume co-location
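For the stateful-database case, "careful volume management" means at minimum an explicit named volume, so data survives container replacement, plus a backup path off the host. A hedged sketch (the Postgres image and volume name are illustrative):

```yaml
# Hypothetical stateful service: the named volume decouples the data's
# lifecycle from the container's, but it is NOT a backup by itself.
services:
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me   # illustrative only — use secrets in practice
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:   # survives `docker compose down`; still needs scheduled off-host backups
```

The volume answers "where does the data live"; it does not answer "how do I restore after the disk dies" — that still takes a dump-and-ship backup job.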
These lessons directly inform my production architecture decisions. When someone proposes containerizing a stateful service, I have intuition about the pitfalls because I’ve experienced them at home.
The Learning Lab
This Docker environment laid the groundwork for much of what I do in production today. Before I architected multi-tenant SIEM pipelines or Kubernetes platforms professionally, I was experimenting with container networking, service discovery, and data persistence in my homelab.
The scale is different, but the concepts transfer: understanding how containers share resources, how networks are isolated, how volumes persist data, how services discover each other—all of this I learned by running Docker at home before applying it at work.
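Service discovery in particular is easy to demonstrate at homelab scale. On a Compose-managed network, the embedded DNS server resolves service names to container IPs — the same name-based discovery pattern that Kubernetes Services provide at larger scale. A hypothetical sketch:

```yaml
# Hypothetical discovery demo: "app" reaches "cache" by service name.
# Compose puts both on a default network with built-in DNS.
services:
  app:
    image: alpine:3
    command: ["ping", "-c", "3", "cache"]   # "cache" resolves via Compose's DNS
    depends_on:
      - cache

  cache:
    image: redis:7
```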
Related
- Proxmox VE — The hypervisor running this Docker VM
- Homelab Overview — The broader homelab context
- On-Premise Kubernetes Platform — Where these container concepts scale up
- Infrastructure Tools — Docker in my professional toolkit