"2 vCPU" on a VPS plan can mean two completely different things. On a shared CPU plan, those vCPUs are timeshared with other tenants — you get whatever portion of the physical core is available right now, which depends on what your neighbors are doing. On a dedicated CPU plan, those vCPUs are pinned to physical cores reserved for you alone. The naming hides this distinction. The performance gap is enormous, especially under load. This guide explains how shared and dedicated CPU actually work, when you can ignore the difference, and when it'll bite you hard.
TL;DR: Shared CPU = timeshared with other tenants, can burst higher when neighbors idle, suffers when neighbors are busy. Dedicated CPU = your cores, predictable performance, costs more. For static sites and idle workloads: shared is fine. For databases, real-time apps, game servers, anything performance-critical: dedicated. OliveVPS Starter is shared; Pro and above are dedicated.
How CPU virtualization actually works
A physical server has some number of CPU cores. Modern servers commonly have 32, 64, or 128 physical cores per socket, often 2 sockets per box. Each core can execute one or two threads (with hyperthreading / SMT). Total "logical CPUs" on a typical VPS host is in the hundreds.
The hypervisor (KVM in our case) presents virtual CPUs (vCPUs) to each VPS. A vCPU is a software construct — it's the hypervisor's promise of "you can run code as if you have a CPU here." Behind the scenes, the hypervisor schedules vCPUs onto physical cores using a CPU scheduler. The scheduler decides which vCPU runs on which physical core at any given moment.
The scheduling policy determines what kind of CPU you really have:
- Shared / burstable: Multiple vCPUs from different VPS instances can be scheduled on the same physical core. Each gets a fraction of total CPU time. If neighbors are idle, you can use more.
- Dedicated / pinned: Specific vCPUs are pinned to specific physical cores reserved for one VPS. No sharing. Predictable performance.
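Pinning is just CPU affinity, the same mechanism (sched_setaffinity) a KVM host uses to bind a guest's vCPU threads to reserved cores. You can see it at the process level with taskset from util-linux — a minimal sketch, run inside any Linux shell:

```shell
# Pin the current shell to physical CPU 0, then read the affinity back.
# A hypervisor does exactly this to a guest's vCPU threads on a
# dedicated-CPU plan; on shared plans the threads float across cores.
taskset -cp 0 $$
affinity=$(taskset -cp $$ | awk -F': ' '{print $2}')
echo "pinned to CPU(s): $affinity"
```

From inside a guest you can't see the host's pinning decisions — this only demonstrates the mechanism, not your plan type.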
Shared / burstable cores
Shared CPU plans pack many tenants per physical core. The economics are obvious: a 64-core host can sell 4 vCPU instances to 64+ tenants if they're shared, but only 16 if they're dedicated. Lower price per instance, more instances per host.
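The packing math above is worth spelling out. A sketch with a hypothetical 4:1 oversubscription ratio (real ratios vary by provider and are rarely published):

```shell
# A 64-core host selling 4 vCPU plans: dedicated instances are hard-capped
# by physical cores; shared instances scale with the oversubscription ratio.
cores=64
vcpus_per_instance=4
dedicated=$(( cores / vcpus_per_instance ))   # cores are reserved 1:1
oversub=4                                     # hypothetical 4:1 ratio
shared=$(( dedicated * oversub ))             # same box, 4x the tenants
echo "dedicated=$dedicated shared=$shared"
```

Four times the instances per box is why the shared plan costs a fraction of the dedicated one.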
The mechanics:
- You're allocated a "baseline" CPU credit rate — say, 20% of one physical core sustained.
- If you don't use your baseline, you accumulate credits.
- If you need to burst above baseline, you spend credits and run faster.
- If you exhaust credits and other tenants are busy, you're throttled back to baseline.
This is sometimes called "burstable" performance. AWS calls them T-series instances; DigitalOcean calls them Basic Droplets; many providers don't explicitly distinguish them in marketing.
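The credit arithmetic fits on the back of an envelope. A sketch using hypothetical plan numbers (20% baseline, one credit = one minute of a full core — the convention AWS T-series uses; your provider's numbers will differ):

```shell
# At a 20% baseline you earn 0.20 * 60 = 12 credits per idle hour.
baseline_pct=20
earn_per_hour=$(( baseline_pct * 60 / 100 ))
idle_hours=5
banked=$(( earn_per_hour * idle_hours ))      # credits after 5 idle hours
# Bursting at 100% spends 1 credit/min but still earns at baseline rate,
# so the net burn is (100 - 20)% of a credit per minute.
burst_minutes=$(( banked * 100 / (100 - baseline_pct) ))
echo "banked=$banked credits, full-speed burst lasts $burst_minutes min"
```

Five idle hours buys you a bit over an hour of full-speed burst — fine for a traffic spike, useless for sustained load.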
The good case: your workload is bursty. Quick API requests, occasional batch jobs, dev/test environments. You're idle most of the time, occasionally need full CPU briefly. Shared works great — you get full performance when you need it, pay much less.
The bad case: your workload is sustained. Database query under continuous load, video encoding, ML inference, busy game server. You exhaust credits in minutes, then you're throttled and performance tanks. Or your neighbor exhausts credits trying to do the same thing, and now everyone is fighting for cycles.
Dedicated cores
Dedicated CPU plans pin vCPUs to physical cores reserved for one tenant. If you bought a 2 vCPU plan, you have 2 physical cores worth of CPU time available, always, no burst limit, no throttling. Your neighbors don't affect you.
The trade-off is cost. A dedicated 2 vCPU plan typically costs 2-3x what a shared 2 vCPU plan costs at the same provider, because the host can pack fewer instances per physical box.
What you get:
- Predictable performance — same speed at 3am as at 3pm
- No throttling under sustained load
- Better tail latency (worst-case response time stays low)
- Easier capacity planning (you know exactly what you have)
Dedicated is the right model for production workloads where consistent performance matters. AWS calls them M-series, C-series, R-series; DigitalOcean has CPU-Optimized and General Purpose Droplets; Linode has Dedicated CPU plans.
The noisy neighbor problem
The classic problem with shared CPU: another tenant on your physical core is running something CPU-intensive, eating up the shared budget, and you get throttled. Their CPU-burning crypto miner, runaway process, or actively-attacked website becomes your performance problem.
Modern hypervisors mitigate this with various scheduling tricks (cgroup limits, fair queuing, weighted scheduling). Quality of mitigation varies by provider. Aggressive oversubscription multiplies the problem — if a host has 10 tenants per core, the chance any given tenant is busy at any given moment is much higher than at 3 tenants per core.
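Why oversubscription multiplies the problem: if each tenant independently uses CPU some fraction p of the time, the chance that at least one of your n−1 neighbors is busy right now is 1 − (1 − p)^(n−1). A quick sketch with an assumed 10% per-tenant duty cycle (illustrative numbers, not measured data):

```shell
# P(at least one of n-1 neighbors busy) = 1 - (1-p)^(n-1)
contention() {
  awk -v n="$1" -v p="$2" \
    'BEGIN { printf "%.0f%%\n", (1 - (1 - p) ^ (n - 1)) * 100 }'
}
contention 3 0.10    # 2 neighbors at 10% duty cycle -> ~19%
contention 10 0.10   # 9 neighbors at 10% duty cycle -> ~61%
```

Going from 3 to 10 tenants per core triples the odds that someone is competing with you at any given moment.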
Symptoms of being hit by noisy neighbors on shared CPU:
- Your application is fast sometimes and slow at others, with no obvious pattern in your code
- top shows your processes using less CPU than they "should" — they're being denied cycles
- Steal time (%st in top) is high — that's CPU the hypervisor told the kernel about but didn't actually give you
- Database tail latency is high (p99, p999 latencies much worse than p50)
The "steal time" metric is the giveaway. If top shows >5% steal time consistently, you have noisy neighbors and you should consider upgrading to dedicated cores.
Which workloads need which
Shared / burstable is fine for:
- Personal websites and blogs
- Development and staging environments
- Low-traffic SaaS
- Static site origins
- Telegram/Discord bots with light usage
- SSH bastion / jump hosts
- VPN servers (network-bound, not CPU-bound)
- Anything that idles 90%+ of the time
Dedicated is worth paying for:
- Production databases (Postgres, MySQL, MongoDB) under real load
- Game servers — tick rate consistency matters
- Real-time applications (chat, voice, video)
- Forex trading and time-sensitive financial workloads
- CI/CD runners — consistent build times matter
- Production WordPress / e-commerce with traffic
- Container orchestration (Kubernetes nodes)
- Anything where p99 latency matters more than average
- Workloads with sustained CPU usage above 30%
Dedicated cores from $7.99/mo
OliveVPS Starter is shared (and honest about it). Pro and above ship dedicated cores — pinned, predictable, no noisy neighbors. The right fit for production workloads.
See VPS Plans →
Provider naming conventions
"Dedicated" and "shared" aren't standardized terms. Here's what providers actually call these things:
| Provider | Shared / burstable name | Dedicated name |
|---|---|---|
| AWS EC2 | T-series (T3, T3a, T4g) | M, C, R, X series |
| DigitalOcean | Basic Droplets | CPU-Optimized, General Purpose |
| Linode | Shared CPU | Dedicated CPU |
| Vultr | Cloud Compute (Regular) | Dedicated CPU plans |
| Hetzner | CX, CPX (shared vCPU) | CCX (Dedicated vCPU) |
| OliveVPS | Starter plan | Pro, Premium, Enterprise plans |
If you're looking at a VPS plan and the page doesn't explicitly say "dedicated CPU," assume it's shared/burstable. Reputable providers state "dedicated" clearly when they offer it because it's a selling point.
How to tell what you're getting
From inside a Linux VPS, you can check whether you're on shared or dedicated:
```shell
# Check steal time over a few minutes of normal load
top                 # look at the %st column
# <1% steal  = essentially dedicated or low-contention shared
# 1-5% steal = mild contention, shared host
# >5% steal  = noisy neighbors, shared with heavy contention

# Or use mpstat for more detail
sudo apt install sysstat
mpstat 1 60         # 1-second samples for 60 seconds
# the %steal column shows what was stolen by the hypervisor

# Run a CPU stress test — see if you can sustain load
sudo apt install stress-ng
stress-ng --cpu 2 --timeout 300s --metrics-brief
# on dedicated CPU, throughput stays constant
# on shared/burstable, throughput drops once credits are exhausted
```
If you bought a "2 vCPU" plan but the stress test only sustains 30-50% CPU, you're being throttled — that's a shared/burstable plan that has hit its limit. A dedicated plan would sustain 200% (100% × 2 cores).
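If you'd rather not install sysstat, steal ticks are available in /proc/stat directly. A minimal sampler — assumes Linux, where the 9th field of the "cpu" line is cumulative steal time in ticks:

```shell
# Snapshot (steal ticks, total ticks), wait, snapshot again, report the
# steal percentage over the interval.
snap() { awk '/^cpu /{t=0; for (i=2; i<=NF; i++) t+=$i; print $9, t}' /proc/stat; }
read -r s1 t1 <<<"$(snap)"
sleep 2
read -r s2 t2 <<<"$(snap)"
awk -v ds="$((s2 - s1))" -v dt="$((t2 - t1))" \
    'BEGIN { printf "steal over sample: %.1f%%\n", 100 * ds / dt }'
```

Run it while your app is under normal load; a 2-second sample is noisy, so repeat it or lengthen the sleep for a real reading.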
FAQ
Is shared CPU always bad?
No. For workloads that idle most of the time and only occasionally need CPU, shared/burstable is genuinely the better deal — full performance when you need it, much lower price. Most personal websites, side projects, and dev environments are fine on shared.
How much performance do I lose on shared CPU?
Depends entirely on neighbor activity and your workload pattern. Best case: indistinguishable from dedicated. Worst case: 70-80% throttling under sustained load. Average case for moderate workloads: 10-30% lower than dedicated, with higher variance.
Why is OliveVPS Starter shared?
Starter is our entry-level tier targeting workloads that don't need dedicated CPU — small personal sites, dev environments, learning Linux, simple bots. Making it dedicated would force the price up to $7-8/mo. Pro is dedicated specifically because that's where production workloads start.
Can I upgrade from shared to dedicated?
Yes — Starter → Pro is a one-click upgrade in the control panel. Brief reboot to migrate to a dedicated-CPU host, then you're on pinned cores. No data migration, no IP change.
What about hyperthreading? Are vCPUs threads or full cores?
Industry convention: 1 vCPU = 1 hyperthread, not 1 full core. Two vCPUs sharing the same physical core (SMT siblings) gets you maybe 1.3-1.5x single-core performance, not 2x. This applies to both shared and dedicated plans at most providers. For workloads that benefit from full physical cores (compilation, certain ML), look for "dedicated CPU with no hyperthreading" plans.
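You can inspect the topology your guest sees with lscpu — "Thread(s) per core: 2" means your vCPUs come in SMT-sibling pairs:

```shell
# "Thread(s) per core: 2" => vCPUs are hyperthreads; "1" => full cores.
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)'
```

Caveat: this is the topology the hypervisor chooses to present, which may not mirror the host's physical layout — treat it as a hint, not proof.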