"2 vCPU" on a VPS plan can mean two completely different things. On a shared CPU plan, those vCPUs are timeshared with other tenants — you get whatever portion of the physical core is available right now, which depends on what your neighbors are doing. On a dedicated CPU plan, those vCPUs are pinned to physical cores reserved for you alone. The naming hides this distinction. The performance gap is enormous, especially under load. This guide explains how shared and dedicated CPU actually work, when you can ignore the difference, and when it'll bite you hard.


TL;DR: Shared CPU = timeshared with other tenants, can burst higher when neighbors idle, suffers when neighbors are busy. Dedicated CPU = your cores, predictable performance, costs more. For static sites and idle workloads: shared is fine. For databases, real-time apps, game servers, anything performance-critical: dedicated. OliveVPS Starter is shared; Pro and above are dedicated.

What we'll cover

  1. How CPU virtualization actually works
  2. Shared / burstable cores
  3. Dedicated cores
  4. The noisy neighbor problem
  5. Which workloads need which
  6. Provider naming conventions
  7. How to tell what you're getting
  8. FAQ

How CPU virtualization actually works

A physical server has some number of CPU cores. Modern servers commonly have 32, 64, or 128 physical cores per socket, often 2 sockets per box. Each core can execute one or two threads (with hyperthreading / SMT). Total "logical CPUs" on a typical VPS host is in the hundreds.

The hypervisor (KVM in our case) presents virtual CPUs (vCPUs) to each VPS. A vCPU is a software construct — it's the hypervisor's promise of "you can run code as if you have a CPU here." Behind the scenes, the hypervisor schedules vCPUs onto physical cores using a CPU scheduler. The scheduler decides which vCPU runs on which physical core at any given moment.
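From inside a guest, standard Linux tools show the vCPU count and topology the hypervisor presents — these are stock commands, nothing OliveVPS-specific:

```shell
# Count the vCPUs the hypervisor presents to this guest
nproc

# Topology details: sockets, cores per socket, threads per core,
# plus the hypervisor vendor (e.g. KVM) when one is detected
lscpu | grep -E 'CPU\(s\)|Thread|Core|Socket|Hypervisor'
```

Remember that what `lscpu` reports is the virtual topology the hypervisor chose to expose, not necessarily how your vCPUs map onto the physical host.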

The scheduling policy determines what kind of CPU you really have:

Shared / burstable cores

Shared CPU plans pack many tenants per physical core. The economics are obvious: a 64-core host can run only 16 dedicated 4-vCPU instances, but can sell shared 4-vCPU instances to 64+ tenants by oversubscribing the cores. Lower price per instance, more instances per host.

The mechanics:

  1. Each vCPU gets a guaranteed baseline — a fraction of a physical core (typically 10-40%, depending on plan).
  2. While you use less than the baseline, you accrue CPU credits.
  3. When you need more, you spend credits to burst up to 100% of the core.
  4. When credits run out, you're throttled back down to the baseline.

This is sometimes called "burstable" performance. AWS calls them T-series instances; DigitalOcean calls them Basic Droplets; many providers don't explicitly distinguish them in marketing.

The good case: your workload is bursty. Quick API requests, occasional batch jobs, dev/test environments. You're idle most of the time, occasionally need full CPU briefly. Shared works great — you get full performance when you need it, pay much less.

The bad case: your workload is sustained. Database query under continuous load, video encoding, ML inference, busy game server. You exhaust credits in minutes, then you're throttled and performance tanks. Or your neighbor exhausts credits trying to do the same thing, and now everyone is fighting for cycles.
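The credit mechanics above can be made concrete with a toy model. The numbers here — a 20% baseline and 60 core-seconds of starting credit — are illustrative only, not any provider's actual policy:

```shell
# Toy credit-bucket model of a burstable vCPU under sustained full load.
# Illustrative numbers only: real providers use different baselines and rates.
awk 'BEGIN {
  credits  = 60      # starting credit balance, in core-seconds (assumed)
  baseline = 0.20    # guaranteed fraction of a core (assumed)
  demand   = 1.00    # workload wants a full core, continuously
  for (t = 0; t <= 300; t += 60) {
    usage = (credits > 0) ? demand : baseline    # throttle once credits run out
    printf "t=%ds usage=%d%% credits=%.0f\n", t, usage * 100, credits
    credits += (baseline - usage) * 60           # accrue below baseline, drain above
    if (credits < 0) credits = 0
  }
}'
```

Running this shows the characteristic burstable curve: a minute or two at 100%, then a hard drop to the 20% baseline once the bucket empties — exactly the "exhaust credits in minutes" failure mode described above.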

Dedicated cores

Dedicated CPU plans pin vCPUs to physical cores reserved for one tenant. If you bought a 2 vCPU plan, you have 2 physical cores worth of CPU time available, always, no burst limit, no throttling. Your neighbors don't affect you.

The trade-off is cost. A dedicated 2 vCPU plan typically costs 2-3x what a shared 2 vCPU plan costs at the same provider, because the host can pack fewer instances per physical box.

What you get:

  1. Full sustained throughput — 100% per vCPU, indefinitely, with no credit accounting.
  2. Predictable performance — no burst limit, no throttling, no dependence on neighbor activity.
  3. Near-zero steal time, since no other tenant is scheduled on your cores.

Dedicated is the right model for production workloads where consistent performance matters. AWS calls them M-series, C-series, R-series; DigitalOcean has CPU-Optimized and General Purpose Droplets; Linode has Dedicated CPU plans.

The noisy neighbor problem

The classic problem with shared CPU: another tenant on your physical core is running something CPU-intensive, eating up the shared budget, and you get throttled. Their CPU-burning crypto miner, runaway process, or actively-attacked website becomes your performance problem.

Modern hypervisors mitigate this with various scheduling tricks (cgroup limits, fair queuing, weighted scheduling). Quality of mitigation varies by provider. Aggressive oversubscription multiplies the problem — if a host has 10 tenants per core, the chance any given tenant is busy at any given moment is much higher than at 3 tenants per core.
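To see why packing density matters, assume (hypothetically) that each tenant independently needs CPU 10% of the time. The chance that at least one neighbor is competing with you for your core grows quickly with tenants per core:

```shell
# P(at least one neighbor busy) = 1 - (1 - p)^(n - 1)
# p = assumed per-tenant busy fraction; n = tenants packed per physical core
awk 'BEGIN {
  p = 0.10                       # assumed: each tenant busy 10% of the time
  split("3 10", density, " ")    # two packing densities to compare
  for (i = 1; i <= 2; i++) {
    n = density[i]
    printf "%2d tenants/core: P(at least one neighbor busy) = %.0f%%\n", n, (1 - (1 - p) ^ (n - 1)) * 100
  }
}'
```

With these assumed numbers, contention jumps from roughly 19% of the time at 3 tenants per core to about 61% at 10 — which is why aggressive oversubscription multiplies the problem even when each individual neighbor is mostly idle.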

Symptoms of being hit by noisy neighbors on shared CPU:

  1. Performance varies by time of day with no change in your own workload.
  2. Benchmarks produce wildly different numbers across runs.
  3. Latency spikes that don't correlate with your own CPU usage.
  4. High steal time (%st) in top or mpstat.

The "steal time" metric is the giveaway. If top shows >5% steal time consistently, you have noisy neighbors and you should consider upgrading to dedicated cores.
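top and mpstat both read steal from /proc/stat, and you can compute it yourself: field 9 of the `cpu` line is cumulative steal ticks. A bash sketch (assumes Linux; the 5-second window is arbitrary):

```shell
# Sample /proc/stat twice and report steal as a percentage of all CPU time.
# Fields 2-11 of the "cpu" line are user nice system idle iowait irq
# softirq steal guest guest_nice; field 9 is steal.
sample() { awk '/^cpu /{print $9, $2+$3+$4+$5+$6+$7+$8+$9+$10+$11}' /proc/stat; }
read st1 tot1 <<< "$(sample)"
sleep 5
read st2 tot2 <<< "$(sample)"
awk -v s="$((st2 - st1))" -v t="$((tot2 - tot1))" \
    'BEGIN { printf "steal over sample: %.1f%%\n", t ? 100 * s / t : 0 }'
```

On a dedicated-core plan this should print close to 0.0% even under load; a consistently high number means the hypervisor is giving your cycles to someone else.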

Which workloads need which

Shared / burstable is fine for:

  1. Static sites and low-traffic personal websites
  2. Dev/test environments and side projects
  3. Bursty API backends that idle most of the time
  4. Cron jobs and occasional batch work

Dedicated is worth paying for:

  1. Databases under continuous query load
  2. Real-time apps and busy game servers
  3. Video encoding, ML inference, and other sustained compute
  4. Anything production-facing where consistent latency matters

Dedicated cores from $7.99/mo

OliveVPS Starter is shared (and honest about it). Pro and above ship dedicated cores — pinned, predictable, no noisy neighbors. The right fit for production workloads.

See VPS Plans →

Provider naming conventions

"Dedicated" and "shared" aren't standardized terms. Here's what providers actually call these things:

| Provider | Shared / burstable name | Dedicated name |
|---|---|---|
| AWS EC2 | T-series (T3, T3a, T4g) | M, C, R, X series |
| DigitalOcean | Basic Droplets | CPU-Optimized, General Purpose |
| Linode | Shared CPU | Dedicated CPU |
| Vultr | Cloud Compute (Regular) | Dedicated CPU plans |
| Hetzner | CX, CPX (shared vCPU) | CCX (Dedicated vCPU) |
| OliveVPS | Starter plan | Pro, Premium, Enterprise plans |

If you're looking at a VPS plan and the page doesn't explicitly say "dedicated CPU," assume it's shared/burstable. Reputable providers state "dedicated" clearly when they offer it because it's a selling point.

How to tell what you're getting

From inside a Linux VPS, you can check whether you're on shared or dedicated:

# Check steal time over a few minutes of normal load
top  # look at %st column
# <1% steal = essentially dedicated or low-contention shared
# 1-5% steal = mild contention, shared host
# >5% steal = noisy neighbors, shared with heavy contention

# Or use mpstat for more detail
sudo apt install sysstat
mpstat 1 60  # 1 second samples for 60 seconds
# %steal column shows what was stolen by hypervisor

# Run a CPU stress test — see if you can sustain load
sudo apt install stress-ng
stress-ng --cpu 2 --timeout 300s --metrics-brief
# On dedicated CPU, throughput stays constant
# On shared/burstable, throughput drops after credits exhaust

If you bought a "2 vCPU" plan but can only sustain 30-50% of total CPU under the stress test, you're being throttled — that's a shared/burstable plan that hit its limit. A dedicated plan would sustain 200% (= 100% × 2 cores).

FAQ

Is shared CPU always bad?

No. For workloads that idle most of the time and only occasionally need CPU, shared/burstable is genuinely the better deal — full performance when you need it, much lower price. Most personal websites, side projects, and dev environments are fine on shared.

How much performance do I lose on shared CPU?

Depends entirely on neighbor activity and your workload pattern. Best case: indistinguishable from dedicated. Worst case: 70-80% throttling under sustained load. Average case for moderate workloads: 10-30% lower than dedicated, with higher variance.

Why is OliveVPS Starter shared?

Starter is our entry-level tier targeting workloads that don't need dedicated CPU — small personal sites, dev environments, learning Linux, simple bots. Making it dedicated would force the price up to $7-8/mo. Pro is dedicated specifically because that's where production workloads start.

Can I upgrade from shared to dedicated?

Yes — Starter → Pro is a one-click upgrade in the control panel. Brief reboot to migrate to a dedicated-CPU host, then you're on pinned cores. No data migration, no IP change.

What about hyperthreading? Are vCPUs threads or full cores?

Industry convention: 1 vCPU = 1 hyperthread, not 1 full core. Two vCPUs sharing the same physical core (SMT siblings) get you maybe 1.3-1.5x single-core performance, not 2x. This applies to both shared and dedicated plans at most providers. For workloads that benefit from full physical cores (compilation, certain ML), look for "dedicated CPU with no hyperthreading" plans.
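You can check whether your vCPUs are SMT siblings from inside the guest with standard Linux tools (the sysfs path below is the usual one on mainline kernels):

```shell
# Logical CPUs with their physical core IDs: two rows sharing a CORE
# value are hyperthreads (SMT siblings) of one physical core
lscpu -e=CPU,CORE,SOCKET

# The same information from sysfs for one CPU: e.g. "0,32" means logical
# CPUs 0 and 32 share a physical core; "0" alone means no SMT sibling
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
```

As with `nproc`, this reflects the virtual topology the hypervisor exposes — some hypervisors present each vCPU as its own single-threaded "core" regardless of the physical layout underneath.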

The OliveVPS Team

We're upfront about which plans are shared and which are dedicated. The honesty saves you migration headaches later.