Most VPS shoppers know the price, the RAM, and the disk. Far fewer know what virtualization technology actually runs their VPS — and that's a problem, because it determines whether your server can run Docker, use real swap, install a custom kernel, run WireGuard properly, or behave like a real Linux box. Three technologies dominate the VPS market: KVM, OpenVZ, and LXC. They look identical from the outside (you SSH in, you get Linux), but they're radically different underneath. This guide explains what each one is, what works and breaks on each, and why OliveVPS uses KVM exclusively.
TL;DR: KVM is real virtualization — your VPS gets its own kernel, can run anything. OpenVZ is container-based virtualization — shares the host kernel, breaks Docker, WireGuard, custom kernels, and many modern tools. LXC is similar to OpenVZ in nature but more modern. For 2026 workloads, KVM is the only one you should be buying.
KVM (Kernel-based Virtual Machine)
KVM is full hardware virtualization built into the Linux kernel. The hypervisor runs as part of the host Linux kernel, and each VPS is a complete virtual machine: its own kernel, its own memory, its own virtualized devices, its own everything. From inside a KVM VPS, you can't tell you're virtualized except by checking /proc/cpuinfo for hypervisor flags.
What this means in practice:
- You can run any Linux distribution and any kernel version
- You can install custom kernel modules
- You get real swap, backed by disk, rather than the emulated vSwap of container platforms
- Docker, Podman, Kubernetes work normally
- WireGuard works (kernel module loads correctly)
- You can run nested VMs inside your VPS (when nested virtualization is enabled)
- You can boot from custom ISOs and run non-Linux OSes (Windows, FreeBSD, OpenBSD)
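A quick way to see whether a guest could even support nested virtualization is to look for the hardware virtualization CPU flags (Intel VT-x shows up as `vmx`, AMD-V as `svm`) in `/proc/cpuinfo`. A minimal sketch, with a hypothetical `has_virt_flags` helper that takes a cpuinfo-format file as an argument so it can be exercised against a saved copy:

```shell
# Sketch: check whether hardware virtualization extensions (vmx / svm)
# are exposed to this machine -- a prerequisite for nested virtualization.
# The file argument is only for offline testing; default is the live
# /proc/cpuinfo.
has_virt_flags() {
  local cpuinfo="${1:-/proc/cpuinfo}"
  if grep -qwE 'vmx|svm' "$cpuinfo"; then
    echo "virt extensions exposed (nested virt possible)"
  else
    echo "no vmx/svm flags (nested virt unavailable)"
  fi
}
```

On a KVM VPS with nested virtualization enabled, the flags appear inside the guest; on a container, you see whatever the host CPU exposes, but you can't use it to run your own hypervisor anyway.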
The trade-off: more overhead. A KVM VPS uses a few hundred MB of RAM for the kernel and virtualization layer that an OpenVZ container doesn't need. But on modern hardware this overhead is small, and the modern tools that don't work on OpenVZ make KVM the obvious default.
OliveVPS uses KVM exclusively. So do most reputable VPS providers in 2026 — DigitalOcean, Vultr, Linode, Hetzner Cloud, AWS Lightsail.
OpenVZ
OpenVZ is operating-system-level virtualization. The host runs a single Linux kernel, and each "VPS" is actually a container — an isolated namespace within the host kernel. From inside the container, you see a private filesystem, private process tree, private network stack — but you're sharing the kernel with the host and every other container on the box.
This was great in 2010. Container overhead is near-zero, you could pack more "VPS" instances onto a single physical box, and at the time the limitations didn't matter much. In 2026, the limitations matter a lot:
- Cannot install custom kernels. The host kernel is the kernel; you don't get your own.
- Cannot install kernel modules. Whatever the host has loaded is what you get.
- Docker barely works. Some OpenVZ hosts have hacked together Docker support but it's brittle and many features (overlay filesystems, certain network modes) don't work right.
- WireGuard requires kernel module support. If the OpenVZ host kernel doesn't have it, you can't use WireGuard. Some hosts have it, many don't.
- Swap is fake or limited. OpenVZ's "vSwap" emulates swap by throttling the container once it exceeds its RAM allocation; nothing is actually paged to disk. Real swap requires your own kernel.
- Memory accounting is weird. The kernel's view of available memory is shared across containers, leading to confusing OOM situations.
- iptables / nftables limitations. Some firewall rules and table types don't work in containers.
- Often heavily oversold. Because containers are so cheap, many OpenVZ providers cram dozens of "VPS" onto a box meant for 5-10 KVM instances. Performance is variable.
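You can see these limits biting in `/proc/user_beancounters`, which OpenVZ exposes inside each container: every resource row ends in a `failcnt` column, and a nonzero value means the container has already hit that limit. A sketch with a hypothetical `list_hit_limits` helper; the column layout assumed here (resource name followed by counters, `failcnt` last) matches the usual beancounters format, and the file argument exists only so the parser can be tested against a saved copy:

```shell
# Sketch: print every beancounter resource whose failcnt (last column)
# is nonzero, i.e. every limit this container has already slammed into.
list_hit_limits() {
  awk '$NF ~ /^[0-9]+$/ && $NF + 0 > 0 {
         res = ""
         for (i = 1; i <= NF; i++) if ($i ~ /^[a-z]/) { res = $i; break }
         if (res != "") print res, "failcnt=" $NF
       }' "${1:-/proc/user_beancounters}"
}
```

Empty output is good news; lines like `privvmpages failcnt=7` mean allocations have been failing.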
Why does OpenVZ still exist? Because the per-instance cost is much lower for the host. You'll see OpenVZ on the cheapest tiers of certain low-cost VPS providers — $1-2/month plans where the only way the math works is high oversubscription on shared kernel.
LXC (Linux Containers)
LXC is the modern, mainstream evolution of OS-level virtualization on Linux. Same fundamental architecture as OpenVZ — containers sharing a kernel, namespaces and cgroups for isolation — but built on standard upstream Linux kernel features rather than out-of-tree patches like OpenVZ historically required.
You'll see LXC used in two main contexts:
- Proxmox VE uses LXC alongside KVM as a "containers-on-host" option for users who want low-overhead Linux containers on their own infrastructure.
- Some VPS providers use LXC as a budget tier — between OpenVZ and KVM in features and price.
LXC has many of the same limitations as OpenVZ: shared kernel, no custom kernel modules, Docker compatibility issues (though better than OpenVZ), kernel-feature constraints. It's a step up from OpenVZ in modern tooling and standardization but a step down from KVM in capability.
For a self-managed homelab, LXC + KVM mixed (Proxmox-style) makes a lot of sense. For a VPS purchase decision, LXC has the same disadvantages as OpenVZ relative to KVM.
Side-by-side comparison
| Capability | KVM | OpenVZ | LXC |
|---|---|---|---|
| Virtualization type | Hardware virtualization | OS-level container | OS-level container |
| Own kernel | ✅ Yes | ❌ No | ❌ No |
| Custom kernel modules | ✅ Yes | ❌ No | ❌ No |
| Docker / Kubernetes | ✅ Native | ⚠️ Partial / hacky | ⚠️ Partial |
| WireGuard | ✅ Native | ⚠️ Host-dependent | ⚠️ Host-dependent |
| Real swap | ✅ Yes | ❌ vSwap simulation | ⚠️ Limited |
| iptables / nftables | ✅ Full | ⚠️ Limited | ⚠️ Mostly works |
| Custom OS (Windows, BSD) | ✅ Yes | ❌ Linux only | ❌ Linux only |
| Memory overhead | ~100-300 MB | ~5-20 MB | ~10-30 MB |
| Typical use in VPS market | Mainstream / premium | Budget / cheap | Niche / mid-tier |
What breaks on OpenVZ/LXC that works on KVM
If you're considering a budget OpenVZ VPS, here are the specific things you'll likely hit:
Docker
Most OpenVZ hosts can't run Docker at all. Some have it patched in, but you'll hit issues with:
- Overlay2 storage driver (often falls back to slow vfs)
- Network modes (host networking, custom bridges)
- cgroup v2 features that newer Docker versions assume
- Seccomp profiles
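If Docker does start on such a host, it's worth checking which storage driver it actually picked: `docker info --format '{{.Driver}}'` reports it, and a silent fallback to `vfs` is the classic OpenVZ symptom. A small sketch with a hypothetical `classify_storage_driver` helper that interprets the value:

```shell
# Sketch: interpret the storage driver name reported by
#   docker info --format '{{.Driver}}'
# On OpenVZ, Docker frequently falls back to the slow vfs driver;
# overlay2 is what you want on a KVM VPS.
classify_storage_driver() {
  case "$1" in
    overlay2) echo "ok: overlay2 (normal, fast)" ;;
    vfs)      echo "warn: vfs fallback (no layer sharing, very slow)" ;;
    *)        echo "info: driver '$1', check its caveats" ;;
  esac
}
# Live usage (assumes Docker is installed):
#   classify_storage_driver "$(docker info --format '{{.Driver}}')"
```

The `vfs` driver copies every image layer in full instead of sharing them, so builds and pulls get dramatically slower and disk usage balloons.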
WireGuard
WireGuard is implemented as a Linux kernel module. If the OpenVZ host doesn't have the WireGuard module loaded, you can't use WireGuard at all from your VPS. Userspace alternatives (wireguard-go) exist but are slower and less polished.
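A loaded WireGuard module shows up under `/sys/module/wireguard`, which gives a quick yes/no check. A sketch with a hypothetical `wireguard_available` helper; the directory argument exists only so the check can be exercised against a fake tree, and note that on a KVM VPS a missing entry just means the module isn't loaded yet (a `modprobe wireguard` fixes that), while on OpenVZ you can't load it at all:

```shell
# Sketch: check whether the wireguard kernel module is currently loaded,
# by looking for its sysfs entry. Real location is /sys/module; the
# argument is only for testing with a fake tree.
wireguard_available() {
  local moddir="${1:-/sys/module}"
  if [ -d "$moddir/wireguard" ]; then
    echo "wireguard: kernel module loaded"
  else
    echo "wireguard: module absent (userspace wireguard-go still possible)"
  fi
}
```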
Custom kernel features
If a kernel module or feature you need isn't enabled on the OpenVZ host kernel, you're stuck. Common things that bite people: BBR congestion control, certain network namespaces, some tunneling protocols, eBPF features, newer io_uring support.
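BBR is an easy one to check: the kernel advertises its congestion control algorithms via `sysctl net.ipv4.tcp_available_congestion_control`, and on a shared-kernel host you're stuck with whatever that list contains. A sketch with a hypothetical `cc_available` helper; the optional second argument injects the list so the logic is testable without root:

```shell
# Sketch: check whether a TCP congestion control algorithm (e.g. bbr)
# is available on this kernel. The live list comes from
#   sysctl -n net.ipv4.tcp_available_congestion_control
# the optional second argument overrides it for testing.
cc_available() {
  local algo="$1"
  local list="${2:-$(sysctl -n net.ipv4.tcp_available_congestion_control 2>/dev/null)}"
  case " $list " in
    *" $algo "*) echo "$algo: available" ;;
    *)           echo "$algo: not available (shared-kernel hosts cannot add it)" ;;
  esac
}
```

On KVM you can fix a "not available" by loading `tcp_bbr` or upgrading your own kernel; on OpenVZ/LXC the answer is final.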
Heavy database workloads
Postgres and MySQL configurations sometimes assume swap behavior, transparent huge pages, or specific memory accounting that doesn't match how OpenVZ handles things. Tuning a database for OpenVZ is doable but more annoying than tuning for KVM.
VPN servers
OpenVPN works on most OpenVZ hosts (it only needs the TUN/TAP device, which hosts commonly enable); WireGuard often doesn't. IPSec / strongSwan can hit issues depending on the host kernel. KVM removes all these concerns.
Game servers (some)
Most game servers don't care, but some — particularly ones using anti-cheat systems that examine kernel features, or ones that need specific networking — hit edge cases on container-based hosting.
KVM only, every plan, every region
OliveVPS uses KVM virtualization on every server. Real kernel, real swap, real Docker. None of the "cheap VPS" gotchas. Starting at $3.99/mo.
See VPS Plans →
How to tell what your VPS uses
From inside a Linux VPS, you can check what's underneath:
```shell
# Most reliable: virt-what (often needs install)
sudo apt install virt-what
sudo virt-what
# Outputs: kvm (or 'openvz', 'lxc', 'xen', etc.)

# Check via systemd-detect-virt (if systemd present)
systemd-detect-virt
# Outputs: kvm / openvz / lxc / none

# Check /proc/cpuinfo for KVM hypervisor flag
grep -i hypervisor /proc/cpuinfo
# If 'hypervisor' flag present: virtualized (likely KVM)

# Check /proc/user_beancounters — exists on OpenVZ
ls /proc/user_beancounters 2>/dev/null && echo "OpenVZ" || echo "Not OpenVZ"

# Check kernel version vs distro
uname -r
# On OpenVZ you'll often see a kernel that doesn't match your distro
# (e.g. CentOS 7-style kernel on Ubuntu 22 — that's an OpenVZ container)
```
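To map `systemd-detect-virt` output onto the categories used in this article, a small classifier helps. A sketch with a hypothetical `classify_virt` helper; the value lists below cover the common cases but aren't exhaustive:

```shell
# Sketch: group common systemd-detect-virt values into "full VM" vs
# "container" vs "bare metal". Lists are representative, not exhaustive.
classify_virt() {
  case "$1" in
    kvm|qemu|xen|vmware|microsoft|oracle|amazon)  echo "full VM (own kernel)" ;;
    openvz|lxc|lxc-libvirt|systemd-nspawn|docker) echo "container (shared kernel)" ;;
    none)                                         echo "bare metal" ;;
    *)                                            echo "unrecognized: $1" ;;
  esac
}
# Usage: classify_virt "$(systemd-detect-virt)"
```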
If you're shopping for a VPS, ask the provider directly. Reputable providers state their virtualization type clearly. If they're vague about it, that's usually a sign of OpenVZ.
Why OpenVZ still exists in 2026
The economics: a single physical server might host 8-15 KVM VPS instances at typical specs, but 30-60 OpenVZ containers. For a provider running on tight margins, OpenVZ allows much lower retail prices. The $1-2/month VPS market exists because OpenVZ enables it.
For some workloads — a tiny static site, a single-purpose IRC bouncer, a basic SOCKS proxy — OpenVZ is genuinely fine. You don't need Docker, you don't need a custom kernel, you just need a Linux box with a public IP and minimal RAM. For those use cases, $1.50/month OpenVZ saves real money.
For anything more substantial — modern application stacks, anything with Docker, anything self-hosted, anything you want to run reliably — KVM is worth the extra few dollars.
FAQ
Is KVM significantly slower than OpenVZ?
For most workloads no. The historical "KVM is slower" argument was true 10 years ago when virtualization extensions were less mature. On modern hardware (post-2018), KVM overhead is single-digit percent vs bare metal. The advantages massively outweigh the overhead.
Can I run Docker on an OpenVZ VPS?
Sometimes. Some OpenVZ providers patch in Docker support; many don't. Even when it works, you'll often hit limitations on overlay storage, networking, or cgroups. Don't buy OpenVZ for Docker workloads.
Why do OpenVZ providers list more RAM for the same price?
Because OpenVZ containers can be aggressively oversold. The "8GB RAM" on an OpenVZ VPS often means burst-up-to-8GB-when-the-host-isn't-busy, not 8GB dedicated. Real available memory is frequently much lower. KVM RAM is allocated and dedicated.
Does OliveVPS support nested virtualization?
Nested virt is enabled on dedicated CPU plans (Pro and above). This means you can run KVM VMs, Vagrant boxes, or Kubernetes clusters via minikube's kvm2 driver on your VPS. Useful for development and testing environments.
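On the machine doing the nesting, the KVM module itself reports whether nested guests are allowed, via `/sys/module/kvm_intel/parameters/nested` (or `kvm_amd` on AMD). A sketch with a hypothetical `nested_enabled` helper; the base-directory argument exists only so the check can be tested with a fake sysfs tree, and from inside a guest that hasn't loaded KVM yet, the `vmx`/`svm` flags in `/proc/cpuinfo` are the equivalent signal:

```shell
# Sketch: report whether the loaded KVM module (Intel or AMD variant)
# has nested virtualization enabled. Real base directory is /sys/module;
# the argument is only for testing with a fake tree.
nested_enabled() {
  local base="${1:-/sys/module}"
  local f
  for f in "$base/kvm_intel/parameters/nested" "$base/kvm_amd/parameters/nested"; do
    if [ -r "$f" ]; then
      case "$(cat "$f")" in
        Y|y|1) echo "nested virtualization: enabled"  ; return 0 ;;
        *)     echo "nested virtualization: disabled" ; return 1 ;;
      esac
    fi
  done
  echo "nested virtualization: kvm module parameters not visible"
  return 1
}
```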
What about Xen virtualization?
Xen is also full hardware virtualization (similar capability to KVM) and used to be common, especially on AWS EC2 historically. Modern usage is mostly displaced by KVM. AWS, for example, transitioned EC2 from Xen to their own KVM-based Nitro hypervisor. From a customer perspective, Xen and KVM are roughly equivalent in capability.