Running Docker on a VPS is easy — almost any KVM-based VPS handles it. Running Kubernetes (or even a single-node k3s cluster) is where things get interesting. Most cheap VPS hosts technically support containers but quietly fail at the things K8s actually needs: nested virtualization, real CPU isolation, predictable I/O, and bandwidth that doesn't melt under image pulls. This guide explains what actually matters, what specs you need at each scale, and the honest trade-offs between single-node Docker hosts, k3s setups, and production K8s clusters.
TL;DR: For Docker workloads, any KVM VPS with NVMe storage works (≥2GB RAM, dedicated vCPU). For Kubernetes, you need KVM (not OpenVZ), at least 4GB RAM per node, and a host that supports nested virtualization if you want to run VM-based tooling like Minikube (KIND and k3d run nodes as containers and don't need it). OliveVPS Pro ($7.99/mo) handles single-node Docker comfortably; multi-node K8s starts at Premium ($15.99/mo) or dedicated.
What actually matters in a Docker/K8s VPS
Five things, in rough order of importance for container workloads:
- KVM virtualization — full hardware virtualization, not OpenVZ containers. K8s needs its own kernel-level features; OpenVZ shares the host kernel and breaks half of them.
- Dedicated CPU cores — not oversubscribed vCPU. Container scheduling assumes the CPU it sees is the CPU it gets.
- NVMe storage — etcd, container layer extraction, and image registries all hit disk hard. Spinning rust or even SATA SSD will bottleneck.
- Generous RAM — every container ships with overhead. K8s control plane alone wants ~1GB before you run anything useful.
- Real bandwidth — pulling a 2GB Docker image over a throttled 100Mbps shared link is suffering. NVMe-fast disk doesn't help if the network is the bottleneck.
Why KVM is non-negotiable
OpenVZ shares the host kernel with every other VPS on the box. That has cascading consequences for containerized workloads:
- You can't load custom kernel modules. Want WireGuard's in-kernel module for container networking? OverlayFS for Docker storage drivers? Kernel-side eBPF for Cilium? None of it works on OpenVZ.
- cgroups behave weirdly. Docker uses cgroups for resource limits. On OpenVZ, the cgroup hierarchy is shared with the host, so memory and CPU limits inside containers are unreliable.
- Nested virtualization is impossible. Minikube with a VM driver (or anything else that runs K8s nodes as VMs) requires nested KVM, which OpenVZ can't provide. KIND and k3d sidestep VMs by running nodes as containers, but they still depend on full cgroup and OverlayFS support, which OpenVZ also breaks.
If a host markets a $2/mo "VPS" — it's almost certainly OpenVZ. Run away. Our KVM vs OpenVZ guide goes deeper into the technical differences.
Specs by workload
Single-node Docker (a few containers)
Running 3-5 containers on docker-compose? A small VPS works fine. Minimum: 1 vCPU, 2GB RAM, 30GB NVMe. Comfortable: 2 vCPU, 4GB RAM, 60GB NVMe. This handles a typical self-hosted stack — reverse proxy, app, database, monitoring sidecar — with headroom.
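As a rough sketch, that self-hosted stack might look like the compose file below. The image names, ports, and service names are illustrative placeholders, not recommendations:

```shell
# Illustrative docker-compose stack: reverse proxy, app, database, monitoring.
# Replace the placeholder app image with your own.
cat > docker-compose.yml <<'EOF'
services:
  proxy:
    image: traefik:v3.0
    ports: ["80:80", "443:443"]
  app:
    image: ghcr.io/you/your-app:latest   # placeholder image
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes: ["pgdata:/var/lib/postgresql/data"]
  monitor:
    image: prom/node-exporter:latest
volumes:
  pgdata:
EOF
docker compose up -d
```

On a 2 vCPU / 4GB plan, a stack like this typically idles at well under half the available RAM, leaving room for image pulls and traffic spikes.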
Single-node Kubernetes (k3s)
k3s is K8s with the cruft removed. It runs comfortably on one VPS. Minimum: 2 vCPU, 4GB RAM, 60GB NVMe. Comfortable: 4 vCPU, 8GB RAM, 100GB NVMe. The k3s control plane plus a handful of workloads (web app, postgres, redis, cert-manager, ingress controller) settles around 2-3GB RAM steady-state.
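Getting from a bare VPS to a working cluster is genuinely a two-minute job with the official install script:

```shell
# Install single-node k3s via the official script from get.k3s.io.
# It sets up a systemd service and writes a kubeconfig to
# /etc/rancher/k3s/k3s.yaml.
curl -sfL https://get.k3s.io | sh -

# Verify the node registered and is Ready (may take ~30s after install):
sudo k3s kubectl get nodes
```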
Multi-node K8s cluster
If you're running real K8s (kubeadm, RKE2, or managed K8s on your own VMs), you want at least 3 nodes for HA. Per node: 4 vCPU, 8-16GB RAM, 100GB+ NVMe. Control plane nodes can be smaller (2 vCPU / 4GB) if you're not running workloads on them. Production clusters often run 6-12 nodes; the math gets expensive fast, and this is the point where dedicated servers start beating VPS pricing.
| Scale | vCPU | RAM | Disk | OliveVPS plan |
|---|---|---|---|---|
| Docker (3-5 containers) | 1-2 | 2-4 GB | 30-60 GB | Starter / Pro |
| k3s single node | 2-4 | 4-8 GB | 60-100 GB | Pro / Premium |
| K8s control plane | 2-4 | 4-8 GB | 60 GB | Pro / Premium |
| K8s worker (medium) | 4-8 | 8-16 GB | 100-200 GB | Premium / Dedicated |
| K8s worker (heavy) | 8-16 | 32-64 GB | 200-500 GB | Dedicated |
Storage: NVMe + the etcd problem
etcd — Kubernetes' distributed key-value store — is fsync-heavy. Every write hits disk synchronously. On spinning disks or shared SATA SSD, etcd starts slowing down at modest cluster sizes, and a slow etcd means slow control plane operations: pod scheduling lags, kubectl commands time out, the whole cluster feels sluggish.
NVMe with low fsync latency (under 1ms) is what you want. Avoid network-attached block storage (think AWS EBS or Hetzner cloud volumes) for etcd if you can — it's slower and the latency variance can cause cluster instability. Local NVMe wins.
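A quick way to check whether a disk is etcd-grade is the fio test suggested in the etcd documentation, which replicates etcd's write pattern (small sequential writes, each followed by fdatasync):

```shell
# Measure fsync latency the way etcd experiences it.
# Test parameters follow the etcd docs' suggested fio invocation.
mkdir -p test-data
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=test-data --size=22m --bs=2300 --name=etcd-disk-check
rm -rf test-data
```

In the output, look at the fdatasync latency percentiles: etcd's guidance is a 99th percentile under 10ms, and good local NVMe typically lands well under 1ms.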
For container image storage and persistent volumes, NVMe also matters because image extraction is I/O bound. A 1GB image extracts in 2-3 seconds on NVMe vs 15-25 seconds on SATA SSD. When you're rolling a deployment that pulls a fresh image to 10 nodes, this adds up. More on NVMe →
Networking & ingress
Container networking has its own quirks on a VPS:
- Public IPv4 is precious. If you want each service to have its own IP, you'll burn IPs fast. A single VPS with one IP plus an ingress controller (nginx-ingress, Traefik) routes everything by hostname/path. That's the standard pattern.
- IPv6 is your friend. Most VPS providers give a /64 or larger by default — that's enough IPv6 addresses to assign one per container if you want. OliveVPS includes IPv6 on every plan.
- Watch egress. Pulling images from public registries doesn't usually count against quotas, but if you're running a public-facing app behind your VPS, monitor egress closely. Egress explained →
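The single-IP ingress pattern from the first bullet looks like this in practice. Hostnames and backend service names here are placeholders, and it assumes an ingress controller (e.g. Traefik, which k3s ships by default) is already running:

```shell
# Route two hostnames through one public IP via the ingress controller.
# Point both DNS records at the VPS; the controller fans out by Host header.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routes
spec:
  rules:
  - host: app.example.com          # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: app, port: { number: 80 } }
  - host: grafana.example.com      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: { name: grafana, port: { number: 3000 } }
EOF
```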
Honest provider comparison for Docker/K8s
| Provider | Tier | Docker | K8s (k3s) | Notes |
|---|---|---|---|---|
| AWS Lightsail | $10+/mo | Works | Cramped at low tiers | Egress fees punishing for image pulls |
| DigitalOcean | $6+/mo | Good | Good | Managed K8s available, expensive at scale |
| Linode | $5+/mo | Good | Good | Solid LKE managed K8s offering |
| Hetzner | €4+/mo | Excellent | Excellent | Best price/performance; regions mostly in the EU |
| Vultr | $5+/mo | Good | Good | VKE managed K8s, more regions |
| OliveVPS | $3.99+/mo | Excellent | Excellent | KVM, NVMe, dedicated CPU on every plan |
| OpenVZ hosts ($1-3/mo) | Cheap | Limited | Broken | Avoid for any serious container work |
Where OliveVPS fits
We're not the cheapest (Hetzner usually wins on raw price/RAM), but we're competitive and we don't have the gotchas. KVM on every plan, NVMe on every plan, dedicated cores on every plan, no egress overage fees, IPv6 included, SSH key auth on first boot, Ubuntu/Debian/Rocky/Alma images ready in under a minute. Nested virtualization is enabled where the underlying hardware supports it (most regions).
Practical recommendations:
- Personal Docker host for your homelab projects, internal tools, side projects: Pro at $7.99/mo (2 vCPU, 4GB, 80GB NVMe).
- Single-node k3s for self-hosted services with proper ingress and cert-manager: Premium at $15.99/mo (4 vCPU, 8GB, 160GB NVMe).
- Multi-node K8s for actual production: 3× Premium for HA, or dedicated servers if you're running heavier workloads. Dedicated options →
Common mistakes
Buying a $2 VPS for K8s. It's OpenVZ. It will not work properly. The savings vanish into the time you'll spend debugging.
Skimping on RAM. 1GB is not enough for K8s. The control plane plus a couple of pods will OOM. 4GB minimum, 8GB if you want to actually use it.
Running etcd on shared SATA SSD. Performance seems fine on an empty cluster, then falls off a cliff as write volume grows. NVMe from day one.
Not setting resource limits in pod specs. One runaway container can eat all node memory and trigger the kernel OOM killer, taking its neighbors down with it. Always set requests and limits on memory.
Exposing the K8s API publicly. Lock kubectl access behind a VPN or SSH tunnel. The API is a juicy target.
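For the resource-limits mistake above, a minimal sketch (the values are illustrative; size them to your workload):

```shell
# requests = what the scheduler reserves; limits = the hard cap enforced
# by the kernel. Setting both keeps one pod from starving the node.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: limits-example
spec:
  containers:
  - name: app
    image: nginx:alpine
    resources:
      requests: { cpu: 100m, memory: 128Mi }
      limits:   { cpu: 500m, memory: 256Mi }
EOF
```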
Container-ready VPS from $3.99/mo
KVM virtualization, NVMe storage, dedicated CPU cores, IPv6 included. Docker installs in 30 seconds, k3s in two minutes. Nested virtualization for Minikube and other VM-based tooling available in most regions.
See VPS Plans →
FAQ
Can I run Kubernetes on a $5 VPS?
k3s — yes, just barely. Real K8s with kubeadm — no, not enough RAM. The control plane alone wants 1GB+ and you need room for actual workloads. Plan on 4-8GB minimum for any K8s setup worth running.
Docker vs Podman — which is better on a VPS?
Functionally similar for most workloads. Podman is daemonless and runs rootless by default (security win), but Docker has broader tooling support and most documentation assumes Docker. Pick Podman if you care about rootless; pick Docker if you want frictionless tooling. Both work fine on KVM VPS.
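The CLI surfaces are close enough that switching costs little. A rootless Podman session looks almost identical to Docker (note that rootless mode can only bind unprivileged ports, 1024 and up, by default):

```shell
# Rootless Podman: same verbs as Docker for common operations.
# Fully-qualified image name avoids registry-resolution prompts.
podman run --rm -d --name web -p 8080:80 docker.io/library/nginx:alpine
curl -s localhost:8080 | head -n 4   # sanity check the served page
podman stop web
```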
Should I use managed K8s (EKS/GKE/LKE) or self-host?
Managed K8s saves you from running the control plane (a real win operationally) but costs $70+/mo before you run anything. Self-hosted k3s on a $16 VPS works for personal/small-team use. The break-even point arrives around when you have 3+ engineers maintaining the cluster.
Do I need a load balancer for K8s on a single VPS?
No. Use an ingress controller (nginx-ingress, Traefik) listening on ports 80/443 directly. A load balancer adds a hop and only matters when you have multiple ingress nodes.
How do I back up a K8s cluster?
Three layers: (1) etcd snapshots (built into k3s and kubeadm), (2) persistent volume backups using Velero or restic, (3) GitOps for cluster state — manifests live in git, so you can rebuild the cluster from scratch. Together these get you proper disaster recovery.
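On k3s, the first layer looks like this. One caveat worth labeling: a single k3s server defaults to SQLite, not etcd; the etcd-snapshot commands apply when the server was started with embedded etcd (`--cluster-init`). The Velero and GitOps lines are sketches assuming those tools are already set up, and the repo path is a placeholder:

```shell
# Layer 1: etcd snapshots (embedded etcd only; the SQLite default is
# backed up by copying /var/lib/rancher/k3s/server/db instead).
sudo k3s etcd-snapshot save --name manual-$(date +%F)
sudo k3s etcd-snapshot ls

# Layer 2: persistent volumes via Velero (backs up all namespaces by default).
velero backup create daily

# Layer 3: cluster state lives in git; reapplying the repo rebuilds it.
kubectl apply -k ./clusters/production   # kustomize layout is illustrative
```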