You'll see "NVMe" listed as a feature on most VPS plans worth buying in 2026 — including all of ours. Most people read it, mentally tag it as "fast storage," and move on. That's basically right, but the actual gap between NVMe and the SATA SSDs many providers still ship is bigger than the marketing suggests, and it shows up in real ways: a database query that hits disk runs 3-6x faster, a Docker image extraction drops from 30 seconds to 6, a Postgres restore from backup that took an hour finishes in 12 minutes. This guide explains what NVMe actually is, why it's faster, and where the difference matters in practice.
TL;DR: NVMe is a faster way for the CPU to talk to flash storage. It bypasses the SATA bus entirely and connects directly to the PCIe bus, eliminating multiple layers of legacy overhead. The result: roughly 7-8x more random IOPS, around 4x lower latency, and noticeably faster real-world workloads. OliveVPS uses NVMe by default on every plan.
What NVMe actually is
NVMe stands for Non-Volatile Memory Express. It's a communication protocol designed specifically for flash storage, sitting on top of the PCIe (Peripheral Component Interconnect Express) bus that already connects your CPU to graphics cards, network cards, and other high-speed peripherals.
The key insight: when SSDs first appeared, they used the SATA interface designed for spinning hard drives. SATA was built around the assumption of slow mechanical media — single-threaded, command-queuing limits, lots of legacy overhead. Putting flash storage on SATA worked, but the interface itself became the bottleneck. NVMe was designed from scratch for flash: parallel command queues, low overhead, direct PCIe connection.
Physical form factors of NVMe drives include M.2 (small stick), U.2 (2.5-inch), and EDSFF (rack-server form factors like E1.S and E3.S). The form factor doesn't change performance — what matters is that the drive speaks NVMe over PCIe rather than SATA over the SATA bus.
Why SATA SSDs are slow (relatively)
SATA SSDs aren't slow in absolute terms — compared to a spinning hard drive they're 50-100x faster. But compared to NVMe they're meaningfully limited:
- SATA III bandwidth caps at 600 MB/s theoretical, ~550 MB/s in practice. NVMe drives saturate PCIe 4.0 lanes at 7,000+ MB/s and PCIe 5.0 at 14,000+ MB/s.
- SATA has a single command queue holding at most 32 commands. NVMe supports up to 65,535 queues, each holding up to 65,536 commands. For multi-threaded workloads (databases, busy webservers, virtualization hosts) this matters enormously.
- SATA has higher per-operation latency. The SATA stack adds microseconds of overhead per I/O. NVMe is closer to the metal.
- SATA was designed for HDDs. The whole stack — drivers, OS scheduling, command sets — has historical baggage.
None of this matters much for sequential reads of large files (both are bandwidth-limited and similar in practice). It matters a lot for the random small I/O that databases, package managers, and webservers actually generate.
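To make "random small I/O" concrete, here's an illustrative Python sketch of the two access patterns: the same 4 KiB reads issued sequentially versus in shuffled order. Note the caveat: a freshly created scratch file sits in the OS page cache, so both patterns run fast here — it's tools like fio with --direct=1 that bypass the cache and expose the device's true random-read cost. This only demonstrates the shape of the workload, not the device difference.

```python
import os
import random
import tempfile
import time

BLOCK = 4096                    # 4 KiB: a typical database page / filesystem block
FILE_SIZE = 16 * 1024 * 1024    # 16 MiB scratch file

fd, path = tempfile.mkstemp()
os.ftruncate(fd, FILE_SIZE)

n_blocks = FILE_SIZE // BLOCK
sequential = [i * BLOCK for i in range(n_blocks)]
shuffled = sequential[:]
random.shuffle(shuffled)        # same blocks, random order — the pattern databases generate

def read_blocks(offsets):
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)    # one small read per offset, like an index lookup
    return time.perf_counter() - start

t_seq = read_blocks(sequential)
t_rand = read_blocks(shuffled)
print(f"sequential: {t_seq:.4f}s, random: {t_rand:.4f}s over {n_blocks} reads")

os.close(fd)
os.unlink(path)
```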
The numbers
Approximate performance comparison (single drive, no RAID, modern hardware):
| Metric | SATA SSD | NVMe SSD (PCIe 4.0) | NVMe vs SATA |
|---|---|---|---|
| Sequential read | ~550 MB/s | ~7,000 MB/s | ~13x |
| Sequential write | ~520 MB/s | ~5,500 MB/s | ~10x |
| Random read IOPS (4K) | ~95,000 | ~750,000 | ~8x |
| Random write IOPS (4K) | ~85,000 | ~600,000 | ~7x |
| Read latency | ~50–80 µs | ~10–20 µs | ~4x lower |
| Queue depth advantage | 32 cmds | 64K cmds × 64K queues | massive |
Note: shared VPS workloads on a host don't see full single-drive numbers — you're sharing the drive with other tenants. But the relative advantage of NVMe over SATA is preserved. A 4 vCPU NVMe VPS will typically deliver 50,000-150,000 IOPS to your filesystem; the equivalent SATA VPS delivers 8,000-25,000 IOPS.
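As a sanity check on those figures, an IOPS number at a fixed block size converts directly to throughput: IOPS × block size. A quick sketch (the VPS-class inputs below are the upper-end shared-tenant figures from the paragraph above, not raw drive numbers):

```python
def iops_to_mib_per_s(iops, block_bytes=4096):
    """Convert an IOPS figure at a given block size into MiB/s of throughput."""
    return iops * block_bytes / (1024 ** 2)

# Upper-end shared-tenant figures quoted above.
sata_vps = iops_to_mib_per_s(25_000)    # SATA-class VPS at 4K
nvme_vps = iops_to_mib_per_s(150_000)   # NVMe-class VPS at 4K

print(f"SATA VPS 4K ceiling: {sata_vps:.0f} MiB/s, NVMe VPS: {nvme_vps:.0f} MiB/s")
```

So even the best-case shared SATA tenant tops out under 100 MiB/s of 4K random throughput, while an NVMe tenant can push several hundred.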
Real-world impact on common workloads
Databases
This is where NVMe matters most. Databases generate a constant stream of small random reads and writes — index lookups, journal writes, page cache flushes. A query that does an index scan on a moderately-sized table involves dozens to thousands of small disk reads. Going from SATA to NVMe doesn't make queries 13x faster (CPU time and locking matter too) but it eliminates disk as the bottleneck.
Typical Postgres/MySQL improvements moving from SATA to NVMe:
- Cold queries (data not cached in RAM): 3-6x faster
- Warm queries (data cached): no change (RAM is RAM)
- Bulk loads / pg_restore / mysqldump restores: 4-8x faster
- Backup/restore wall-clock time: 5-10x faster
- Tail latency under load: dramatically better — fewer slow queries
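The cold-versus-warm split is easy to observe yourself. This sketch uses Python's built-in sqlite3 instead of Postgres/MySQL so it runs anywhere, but the principle is identical: the first full scan pulls pages from storage, the repeat scan hits cache. (Because the file is freshly written, even the "cold" run may already be partially cached, so this understates a true cold read.)

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO t (v) VALUES (?)",
                (("x" * 100,) for _ in range(50_000)))
con.commit()
con.close()

def timed_scan():
    """Open a fresh connection and time a full-table scan."""
    con = sqlite3.connect(path)
    start = time.perf_counter()
    n = con.execute("SELECT COUNT(*) FROM t WHERE v LIKE 'x%'").fetchone()[0]
    elapsed = time.perf_counter() - start
    con.close()
    return n, elapsed

n_cold, t_cold = timed_scan()   # first scan: pages come from the database file
n_warm, t_warm = timed_scan()   # repeat scan: pages served from cache
print(f"cold: {t_cold * 1000:.1f} ms, warm: {t_warm * 1000:.1f} ms")
```

The warm number is the "RAM is RAM" case above — storage type stops mattering once the working set is cached. NVMe's win is the cold number.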
WordPress and PHP applications
WordPress generates a lot of small file reads (PHP files, theme assets, plugin code), database queries (each page load can hit MySQL 30-200 times), and cache reads/writes. NVMe affects all three. A WordPress page that takes 800ms to render on SATA might take 250ms on NVMe with no other changes.
Docker and container workloads
Docker image extraction, layer mounting, and container filesystem operations are I/O-heavy in bursts. docker pull followed by docker run can saturate disk I/O. On NVMe, image extraction is 4-7x faster; container start times are noticeably snappier.
Git operations
Cloning a large repo, running git status on a big working tree, switching branches that touch many files — all of this is small-random-I/O heavy. On slow storage, the disk-bound portion of a git clone of the Linux kernel (object unpacking plus checking out tens of thousands of files) can take 10-15 minutes; on NVMe it finishes in 2-3 minutes.
Compilation and build processes
Compiling involves reading lots of small source files, writing lots of object files, and linking. Modern build systems parallelize aggressively. NVMe's deep command queues let parallel compilation actually saturate the storage. C++ projects, Rust builds, and Webpack bundling all benefit measurably.
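The queue-depth point can be sketched in Python: one worker issuing reads one at a time is effectively queue depth 1, while a thread pool keeps many requests in flight at once, which is the pattern deep NVMe queues are built to absorb. (On a warm page cache both loops are dominated by syscall overhead, so don't read the timings here as a device benchmark — this only shows the concurrency pattern.)

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096
N_READS = 2048

fd, path = tempfile.mkstemp()
os.ftruncate(fd, BLOCK * N_READS)

def read_block(i):
    # Each call is an independent 4 KiB read, like one object file or header.
    return len(os.pread(fd, BLOCK, i * BLOCK))

for workers in (1, 8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # With 8 workers, up to 8 reads are in flight simultaneously.
        total = sum(pool.map(read_block, range(N_READS)))
    print(f"{workers} worker(s): {time.perf_counter() - start:.4f}s")

os.close(fd)
os.unlink(path)
```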
Video streaming and large file delivery
Less benefit. Sequential reads of large files are bandwidth-limited, and SATA at 550 MB/s is plenty for most streaming workloads (a 4K video stream is ~25 Mbps = 3 MB/s). Where NVMe helps: many concurrent streams competing for the disk, or transcoding pipelines that do mixed read-write.
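The arithmetic behind that claim is worth spelling out: divide the disk's sequential bandwidth by the per-stream rate (converting megabits to megabytes) and even SATA supports far more simultaneous 4K streams than most servers will ever serve from disk.

```python
def max_streams(disk_mb_per_s, stream_mbps):
    """How many constant-bitrate streams a disk's sequential bandwidth can feed."""
    stream_mb_per_s = stream_mbps / 8   # megabits per second -> megabytes per second
    return int(disk_mb_per_s // stream_mb_per_s)

print(max_streams(550, 25))    # SATA SSD, 25 Mbps 4K streams -> 176
print(max_streams(7000, 25))   # PCIe 4.0 NVMe -> 2240
```

In practice the OS page cache serves popular content from RAM anyway, and heavy concurrency turns sequential reads into interleaved (random-ish) ones — which is exactly where NVMe's advantage reappears.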
Static file serving
If your webserver is mostly serving static files smaller than your RAM cache, the OS page cache absorbs almost all reads — disk type barely matters. NVMe still helps for cache misses, but the day-to-day difference is small.
When NVMe matters most (and least)
NVMe matters most:
- Database-backed applications (Postgres, MySQL, MongoDB)
- Containers and Docker
- WordPress and complex CMS
- CI/CD runners, build servers
- Git-heavy workflows
- Self-hosted apps with file metadata operations (Nextcloud, Plex catalog, Jellyfin)
- Anything I/O-bound under concurrent load
NVMe matters less:
- CPU-bound workloads (video transcoding, ML inference) — disk is rarely the bottleneck
- Memory-bound workloads (in-memory caches, Redis) — disk barely used
- Network-bound workloads (proxy servers, VPNs) — bottleneck is bandwidth, not disk
- Static asset serving where files fit in RAM
- Large-file sequential streaming (single-stream video delivery)
For most general-purpose VPS workloads, NVMe is a clear win. The exceptions are workloads where disk is genuinely not on the hot path.
NVMe on every plan, no upcharge
OliveVPS ships NVMe storage on every tier, every region. No "Premium NVMe" upsell. Just fast disks as the default, the way it should be. Starting at $3.99/mo.
See VPS Plans →
NVMe in VPS hosting
VPS providers vary wildly on storage:
- Bargain bin (sub-$2/mo plans): often still SATA SSD, sometimes oversubscribed to the point that even SATA performance isn't real. Avoid.
- DigitalOcean basic Droplets: SATA SSD. Their "Premium" tier is NVMe at higher price.
- Linode shared CPU: NVMe by default since 2022.
- Vultr standard plans: NVMe by default.
- Hetzner CX/CPX: NVMe by default.
- AWS EC2: depends on instance type and EBS choice. The default EBS gp3 volume is fine but not NVMe-grade unless you provision extra IOPS. Local NVMe instance store is available on storage-optimized families like i3, i4i, and im4gn, at a higher price.
- OliveVPS: NVMe on every plan, every region.
The market has mostly converged on NVMe for new VPS plans, but legacy plans and budget tiers still ship SATA. If you're comparing providers, check explicitly — it's not always front-and-center in marketing copy.
How to check what your VPS uses
From inside a Linux VPS, you can verify what type of storage you're on. The exact path varies by virtualization, but these commands typically show the truth:
# See block devices and their type
lsblk -o NAME,SIZE,TYPE,MODEL,ROTA
# ROTA=0 means non-rotational (SSD or NVMe)
# ROTA=1 means spinning disk (HDD)
# Check if devices appear as nvme*
ls /dev/nvme* # NVMe drives
ls /dev/sd* # SATA/SCSI drives (including SATA SSD)
# Quick I/O test (4K random reads, 30 seconds)
fio --name=randread --ioengine=libaio --iodepth=32 \
--rw=randread --bs=4k --direct=1 --size=1G \
--numjobs=4 --runtime=30 --group_reporting
From the fio output, look at the IOPS line. NVMe-class VPS storage should give 50,000+ IOPS at iodepth=32. SATA-class will typically give 10,000-30,000. If you're seeing under 5,000 IOPS, your provider is heavily oversubscribed regardless of what they call the storage.
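If you run fio with --output-format=json, you can pull the IOPS figure out programmatically instead of eyeballing the text report. Here's a small classifier using the thresholds above; fio's JSON report exposes per-job read IOPS under jobs[].read.iops, and the sample below is a trimmed-down stand-in for a real report (which carries many more fields):

```python
import json

def classify_iops(fio_json: str) -> str:
    """Classify VPS storage from a fio --output-format=json randread report.

    Thresholds mirror the rough bands above; they are heuristics, not spec values.
    """
    report = json.loads(fio_json)
    iops = sum(job["read"]["iops"] for job in report["jobs"])
    if iops >= 50_000:
        return f"{iops:,.0f} IOPS — NVMe-class"
    if iops >= 10_000:
        return f"{iops:,.0f} IOPS — SATA-class"
    return f"{iops:,.0f} IOPS — oversubscribed or HDD-backed"

# Trimmed-down stand-in for fio's JSON shape (two numjobs entries).
sample = json.dumps({"jobs": [{"read": {"iops": 21500.0}},
                              {"read": {"iops": 20800.0}}]})
print(classify_iops(sample))
```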
FAQ
Is NVMe always better than SATA SSD?
For random I/O workloads (databases, webservers, containers): yes, meaningfully. For pure sequential workloads on small files: marginal difference. For practical VPS use cases, NVMe is the clear default — the price gap has mostly closed and the performance benefit is real.
Will I notice the difference if I'm just running a small WordPress blog?
Probably yes, but less dramatically than with database-heavy apps. WordPress page rendering involves PHP file reads + MySQL queries + cache I/O — all of which benefit from NVMe. Cold page loads typically render 2-3x faster on NVMe. Once the OS cache warms up, the difference shrinks.
Is NVMe more reliable than SATA SSD?
Endurance ratings depend on the specific drive, not the protocol. Modern enterprise NVMe and SATA SSDs have similar mean time between failures. What matters is the drive class (consumer vs enterprise) — enterprise drives have higher endurance and better firmware regardless of NVMe vs SATA.
Why don't all VPS providers use NVMe then?
Legacy hardware. Servers bought 5-7 years ago shipped with SATA. Replacing them costs money. Many providers operate mixed fleets — newer plans on NVMe, older plans on SATA. Some keep SATA on entry tiers to differentiate "Premium NVMe" upsells.
Does OliveVPS charge extra for NVMe?
No. NVMe is the default and only option on every plan, every region, every tier from $3.99/mo up. We don't have a "Premium NVMe" upsell because NVMe shouldn't be premium in 2026.