Running Docker on a VPS is easy — almost any KVM-based VPS handles it. Running Kubernetes (or even a single-node k3s cluster) is where things get interesting. Most cheap VPS hosts technically support containers but quietly fail at the things K8s actually needs: nested virtualization, real CPU isolation, predictable I/O, and bandwidth that doesn't melt under image pulls. This guide explains what actually matters, what specs you need at each scale, and the honest trade-offs between single-node Docker hosts, k3s setups, and production K8s clusters.


TL;DR: For Docker workloads, any KVM VPS with NVMe storage works (≥2GB RAM, dedicated vCPU). For Kubernetes, you need KVM (not OpenVZ), at least 4GB RAM per node, and a host that supports nested virtualization if you want to run Minikube/KIND. OliveVPS Pro ($7.99/mo) handles single-node Docker comfortably; multi-node K8s starts at Premium ($15.99/mo) or dedicated.

What we'll cover

  1. What actually matters in a Docker/K8s VPS
  2. Why KVM is non-negotiable
  3. Specs by workload (Docker vs k3s vs full K8s)
  4. Storage: NVMe + the etcd problem
  5. Networking & ingress
  6. Honest provider comparison
  7. Where OliveVPS fits
  8. Common mistakes
  9. FAQ

What actually matters in a Docker/K8s VPS

Five things, in rough order of importance for container workloads:

  1. KVM virtualization — full hardware virtualization, not OpenVZ containers. K8s needs its own kernel-level features; OpenVZ shares the host kernel and breaks half of them.
  2. Dedicated CPU cores — not oversubscribed vCPU. Container scheduling assumes the CPU it sees is the CPU it gets.
  3. NVMe storage — etcd, container layer extraction, and image registries all hit disk hard. Spinning rust or even SATA SSD will bottleneck.
  4. Generous RAM — every container ships with overhead. K8s control plane alone wants ~1GB before you run anything useful.
  5. Real bandwidth — pulling a 2GB Docker image over a throttled 100Mbps shared link is suffering. NVMe-fast disk doesn't help if the network is the bottleneck.
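
Before committing, it's worth sanity-checking a trial VPS against this list from a shell. A minimal check, assuming a systemd-based distro like Ubuntu or Debian:

```bash
# Virtualization type: "kvm" (or "qemu") is what you want;
# "openvz" or "lxc" means a shared kernel
systemd-detect-virt

# CPU cores and RAM actually visible inside the guest
nproc
free -h

# ROTA=0 means non-rotational storage; NVMe drives show up as nvme0n1, nvme1n1, ...
lsblk -d -o NAME,ROTA,SIZE
```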

Why KVM is non-negotiable

OpenVZ shares the host kernel with every other VPS on the box. That has cascading consequences for containerized workloads:

  1. You're stuck with whatever kernel the host runs: no upgrades, no custom modules.
  2. Kernel modules containers depend on (overlay, br_netfilter) may be missing, and you can't load them yourself.
  3. cgroup and namespace support is whatever the host exposes, so Docker runs degraded at best and kubeadm's preflight checks often fail outright.

If a host markets a $2/mo "VPS" — it's almost certainly OpenVZ. Run away. Our KVM vs OpenVZ guide goes deeper into the technical differences.

Specs by workload

Single-node Docker (a few containers)

Running 3-5 containers on docker-compose? A small VPS works fine. Minimum: 1 vCPU, 2GB RAM, 30GB NVMe. Comfortable: 2 vCPU, 4GB RAM, 60GB NVMe. This handles a typical self-hosted stack — reverse proxy, app, database, monitoring sidecar — with headroom.
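
Getting there takes a few minutes. A sketch, assuming Ubuntu/Debian; the app image is a placeholder, not a prescribed stack:

```bash
# Install Docker (engine + compose plugin) via the official convenience script
curl -fsSL https://get.docker.com | sh

# Hypothetical three-service stack: reverse proxy, app, database
cat > docker-compose.yml <<'EOF'
services:
  proxy:
    image: caddy:latest
    ports: ["80:80", "443:443"]
  app:
    image: ghcr.io/example/app:latest   # placeholder: your application image
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me      # use a real secret in practice
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
EOF

docker compose up -d
```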

Single-node Kubernetes (k3s)

k3s is K8s with the cruft removed. It runs comfortably on one VPS. Minimum: 2 vCPU, 4GB RAM, 60GB NVMe. Comfortable: 4 vCPU, 8GB RAM, 100GB NVMe. The k3s control plane plus a handful of workloads (web app, postgres, redis, cert-manager, ingress controller) settles around 2-3GB RAM steady-state.
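
For reference, a single-node install really is one command (this uses the official k3s installer):

```bash
# Install k3s: server and agent on the same node
curl -sfL https://get.k3s.io | sh -

# Confirm the node is Ready, then see what the control plane costs in RAM
sudo k3s kubectl get nodes
free -h
```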

Multi-node K8s cluster

If you're running real K8s (kubeadm, RKE2, or managed K8s on your own VMs), you want at least 3 nodes for HA. Per node: 4 vCPU, 8-16GB RAM, 100GB+ NVMe. Control plane nodes can be smaller (2 vCPU / 4GB) if you're not running workloads on them. Production clusters often run 6-12 nodes; the math gets expensive fast, and that's where dedicated servers start beating VPS pricing.
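
One way to get there on plain VPS nodes is k3s with embedded etcd, which wants an odd number of server nodes. A sketch; `<server-ip>` and `<token>` are placeholders you fill in:

```bash
# First server node: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# The join token lives on the first server:
sudo cat /var/lib/rancher/k3s/server/node-token

# Additional server nodes join the control plane
curl -sfL https://get.k3s.io | sh -s - server --server https://<server-ip>:6443 --token <token>

# Worker (agent) nodes
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```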

| Scale | vCPU | RAM | Disk | OliveVPS plan |
|---|---|---|---|---|
| Docker (3-5 containers) | 1-2 | 2-4 GB | 30-60 GB | Starter / Pro |
| k3s single node | 2-4 | 4-8 GB | 60-100 GB | Pro / Premium |
| K8s control plane | 2-4 | 4-8 GB | 60 GB | Pro / Premium |
| K8s worker (medium) | 4-8 | 8-16 GB | 100-200 GB | Premium / Dedicated |
| K8s worker (heavy) | 8-16 | 32-64 GB | 200-500 GB | Dedicated |

Storage: NVMe + the etcd problem

etcd — Kubernetes' distributed key-value store — is fsync-heavy. Every write hits disk synchronously. On spinning disks or shared SATA SSD, etcd starts slowing down at modest cluster sizes, and a slow etcd means slow control plane operations: pod scheduling lags, kubectl commands time out, the whole cluster feels sluggish.

NVMe with low fsync latency (under 1ms) is what you want. Avoid network-attached block storage (think AWS EBS or Hetzner cloud volumes) for etcd if you can — it's slower and the latency variance can cause cluster instability. Local NVMe wins.
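
You can measure this rather than guess. The fio invocation below is the disk check etcd's own documentation points to; it mimics etcd's small synchronous writes (assumes fio is installed and writes a throwaway file in the target directory):

```bash
mkdir -p /var/lib/etcd-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-check
```

Read the fdatasync percentiles in the output: upstream guidance is a 99th percentile under 10ms, and healthy local NVMe usually lands well under 1ms.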

For container image storage and persistent volumes, NVMe also matters because image extraction is I/O bound. A 1GB image extracts in 2-3 seconds on NVMe vs 15-25 seconds on SATA SSD. When you're rolling a deployment that pulls a fresh image to 10 nodes, this adds up. More on NVMe →
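
A crude but honest way to compare hosts is timing a cold pull of a known image; removing it first ensures you measure both download and layer extraction:

```bash
docker image rm -f nginx:latest 2>/dev/null
time docker pull nginx:latest
```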

Networking & ingress

Container networking has its own quirks on a VPS:

  1. You usually get one public IP, so skip LoadBalancer Services on a single node and let the ingress controller own ports 80/443 directly.
  2. Overlay networks (k3s defaults to flannel with VXLAN) add encapsulation overhead, and MTU mismatches show up as mysterious hangs on large responses.
  3. NodePort's default range is 30000-32767, so it can't expose port 80 without reconfiguration.
  4. Image pulls count against bandwidth caps; on metered plans a registry mirror or pull-through cache pays for itself quickly.
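
On a single node, the whole pattern is one Deployment, one Service, one Ingress. A minimal sketch using k3s's bundled Traefik; `example.com` is a placeholder for a DNS record pointing at your VPS IP:

```bash
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com        # placeholder: point your DNS here
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF
```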

Honest provider comparison for Docker/K8s

| Provider | Tier | Docker | K8s (k3s) | Notes |
|---|---|---|---|---|
| AWS Lightsail | $10+/mo | Works | Cramped at low tiers | Egress fees punishing for image pulls |
| DigitalOcean | $6+/mo | Good | Good | Managed K8s available, expensive at scale |
| Linode | $5+/mo | Good | Good | Solid LKE managed K8s offering |
| Hetzner | €4+/mo | Excellent | Excellent | Best price/performance; EU-only |
| Vultr | $5+/mo | Good | Good | VKE managed K8s, more regions |
| OliveVPS | $3.99+/mo | Excellent | Excellent | KVM, NVMe, dedicated CPU on every plan |
| OpenVZ hosts ($1-3/mo) | Cheap | Limited | Broken | Avoid for any serious container work |

Where OliveVPS fits

We're not the cheapest (Hetzner usually wins on raw price/RAM), but we're competitive and we don't have the gotchas. KVM on every plan, NVMe on every plan, dedicated cores on every plan, no egress overage fees, IPv6 included, SSH key auth on first boot, Ubuntu/Debian/Rocky/Alma images ready in under a minute. Nested virtualization is enabled where the underlying hardware supports it (most regions).
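
If you plan to run Minikube's VM-based drivers (or anything else that boots a VM inside the VPS), you can check whether the guest CPU exposes virtualization extensions; KIND and Minikube's docker driver don't need this, but the kvm2 driver does:

```bash
# Nonzero output means vmx (Intel) or svm (AMD) flags are visible to the guest
grep -cE '(vmx|svm)' /proc/cpuinfo
```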

Practical recommendations:

  1. A few Docker containers behind a reverse proxy: Starter or Pro.
  2. Single-node k3s with real workloads: Pro at minimum, Premium for headroom.
  3. Multi-node K8s: Premium nodes, moving to dedicated servers once the cluster grows past a few workers.

Common mistakes

Buying a $2 VPS for K8s. It's OpenVZ. It will not work properly. The savings vanish into the time you'll spend debugging.

Skimping on RAM. 1GB is not enough for K8s. The control plane plus a couple of pods will OOM. 4GB minimum, 8GB if you want to actually use it.

Running etcd on shared SATA SSD. Performance is fine until you hit a few hundred objects, then it falls off a cliff. NVMe from day one.

Not setting resource limits in pod specs. One runaway container will OOM-kill the entire node. Always set requests and limits on memory.
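
What that looks like in practice; the values here are illustrative, tune them to your app:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: limited-app          # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"      # past this, the container is OOM-killed, not the node
EOF
```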

Exposing the K8s API publicly. Lock kubectl access behind a VPN or SSH tunnel. The API is a juicy target.
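
A simple pattern: firewall port 6443 from the internet and reach the API over SSH instead:

```bash
# Forward the remote API server port to your workstation
ssh -N -L 6443:127.0.0.1:6443 user@your-vps
```

k3s's default kubeconfig (/etc/rancher/k3s/k3s.yaml) already points at https://127.0.0.1:6443, so a copy on your workstation works through the tunnel unchanged.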

Container-ready VPS from $3.99/mo

KVM virtualization, NVMe storage, dedicated CPU cores, IPv6 included. Docker installs in 30 seconds, k3s in two minutes. Nested virtualization for KIND/Minikube available in most regions.

See VPS Plans →

FAQ

Can I run Kubernetes on a $5 VPS?

k3s — yes, just barely. Real K8s with kubeadm — no, not enough RAM. The control plane alone wants 1GB+ and you need room for actual workloads. Plan on 4-8GB minimum for any K8s setup worth running.
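
On a tight-RAM box you can also trim the defaults. k3s lets you disable bundled components at install time; drop only what you'll replace or genuinely don't need:

```bash
# Skip the bundled Traefik ingress and service load balancer
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" sh -
```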

Docker vs Podman — which is better on a VPS?

Functionally similar for most workloads. Podman is daemonless and runs rootless by default (security win), but Docker has broader tooling support and most documentation assumes Docker. Pick Podman if you care about rootless; pick Docker if you want frictionless tooling. Both work fine on KVM VPS.

Should I use managed K8s (EKS/GKE/LKE) or self-host?

Managed K8s saves you from running the control plane (a real win operationally) but costs $70+/mo before you run anything. Self-hosted k3s on a $16 VPS works for personal/small-team use. The break-even point arrives around the time you have 3+ engineers maintaining the cluster.

Do I need a load balancer for K8s on a single VPS?

No. Use an ingress controller (nginx-ingress, Traefik) listening on ports 80/443 directly. A load balancer adds a hop and only matters when you have multiple ingress nodes.

How do I back up a K8s cluster?

Three layers: (1) etcd snapshots (built into k3s and kubeadm), (2) persistent volume backups using Velero or restic, (3) GitOps for cluster state — manifests live in git, so you can rebuild the cluster from scratch. Together these get you proper disaster recovery.
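
As a concrete starting point for layer one, assuming k3s with embedded etcd (single-node k3s defaults to SQLite, where backing up the datastore file serves the same role):

```bash
# Take and list on-demand snapshots; they land under
# /var/lib/rancher/k3s/server/db/snapshots/ by default
sudo k3s etcd-snapshot save --name manual-$(date +%F)
sudo k3s etcd-snapshot ls
```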

The OliveVPS Team

We run our own internal tooling on Kubernetes. We've made every mistake on this list at least once. The advice here is what we wish someone had told us in 2019.