Your VPS is at 100% CPU. The site is slow, SSH is laggy, and you don't know why. This guide walks through diagnosing it the right way: identifying the process, understanding whether it's user code, kernel time, or I/O wait, and applying the right fix. Most high-CPU situations have one of about six root causes, and once you know which one, the fix is usually quick.


TL;DR: Run top, find the process at the top of the %CPU column. If it's your own app, profile it. If it's kswapd0, you're out of RAM. If it's iowait dominating, your disk is the bottleneck. If it's something you don't recognize, you may have been compromised — investigate. The diagnostic tree below covers the common cases.

What we'll cover

  1. First steps: identify the process
  2. Interpreting top output
  3. Case 1: Your application is the culprit
  4. Case 2: kswapd0 — out of RAM
  5. Case 3: High iowait — disk bottleneck
  6. Case 4: Runaway cron or scheduled task
  7. Case 5: Cryptominer or malware
  8. Case 6: Noisy neighbor (shared CPU only)
  9. Prevention
  10. FAQ

First steps: identify the process

SSH into the VPS and run:

top -o %CPU

This sorts processes by CPU usage. The process eating CPU will be at the top. Note its name and PID. If top isn't installed (rare), use:

ps aux --sort=-%cpu | head -10

If you have htop available (apt install htop), it's a nicer interface for the same information — color-coded, scrollable, easier to read.

Interpreting top output

The header line of top shows you what kind of CPU usage you have:

%Cpu(s):  82.3 us,  4.1 sy,  0.0 ni, 12.5 id,  1.1 wa,  0.0 hi,  0.0 si,  0.0 st

Four of these fields matter most. us (user) high → app problem. wa (I/O wait) high → disk problem. st (steal) high → host problem. sy (system) high → kernel/system-call storm. Each calls for a different fix.
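
As a sketch, those fields can be pulled out of the header programmatically, say for a monitoring script (this assumes the standard %Cpu(s) line format shown above; the echo just stands in for live top output):

```shell
# Extract us/wa/st from a top header line (sample line from above)
echo '%Cpu(s):  82.3 us,  4.1 sy,  0.0 ni, 12.5 id,  1.1 wa,  0.0 hi,  0.0 si,  0.0 st' \
  | awk -F'[:,]' '{
      for (i = 2; i <= NF; i++) {   # skip the "%Cpu(s)" label in field 1
        split($i, p, " ")           # p[1] = value, p[2] = key (us, sy, ...)
        if (p[2] == "us" || p[2] == "wa" || p[2] == "st")
          printf "%s=%s\n", p[2], p[1]
      }
    }'
# us=82.3
# wa=1.1
# st=0.0
```

In real use you would feed it top -bn1 | grep '%Cpu' instead of the echo.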

Case 1: Your application is the culprit

At the top of the list is your own Node.js, Python, PHP, or Java process: %CPU is high, us is high. The app is doing work, possibly more than it should.

Get more detail with strace (system calls) or perf (CPU profiling):

# Quick: what syscalls is it making?
strace -c -p <PID>

# Better: profile where it's spending CPU time
perf top -p <PID>

Common app-side causes: an infinite loop after a code change, a runaway database query, a cache that's not actually caching, a regex with catastrophic backtracking. Application-level fixes (code change, query optimization, caching layer) are the only real solution. Restarting the app papers over it temporarily.

Case 2: kswapd0 — out of RAM

If kswapd0 is at the top of the CPU list, you're out of RAM and the kernel is desperately swapping. The CPU usage is a symptom; the disease is memory pressure. Check:

free -h
swapon --show

If "available" in free -h is near zero and swap is heavily used, you need more RAM, fewer running processes, or both. Quick fix, a swap file to buy breathing room:

sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Real fix: upgrade your VPS RAM, or fix the memory leak in your app. See our OOM troubleshooting guide for more depth.
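
Before (or alongside) adding swap, it helps to confirm which processes are holding the memory and how deep into swap you already are. A sketch using standard tools; the awk line just computes swap-used percent from /proc/meminfo:

```shell
# Top 5 resident-memory consumers
ps aux --sort=-%mem | head -6

# Percent of swap currently in use, from /proc/meminfo
awk '/^SwapTotal/ {t = $2} /^SwapFree/ {f = $2}
     END {if (t > 0) printf "swap used: %.0f%%\n", 100 * (t - f) / t}' /proc/meminfo
```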

Case 3: High iowait — disk bottleneck

If %CPU on individual processes is moderate but wa in the header is >20%, your disk is the bottleneck, not your CPU. Find what's hammering the disk:

# Install iotop if it's not already present
sudo apt install iotop -y

# Show only processes actually doing I/O
sudo iotop -o

Common iowait culprits: a database doing full table scans, log files growing without rotation, a backup process running during peak hours, or a throttled/oversold SSD on the host. On NVMe-backed VPS (like ours) iowait is rare under normal load — if you're seeing it, something specific is hammering the disk.

Case 4: Runaway cron or scheduled task

Sometimes a cron job hangs, gets stuck in a loop, or starts spawning child processes faster than they finish. Check:

# Your user's crontab, the system-wide crontab, and the cron.* directories
crontab -l
sudo cat /etc/crontab
ls /etc/cron.*

# Find processes started by cron
ps -ef --forest | grep -E 'CRON|cron'

If you find a runaway, kill it (kill <PID> first; kill -9 <PID> only if it ignores SIGTERM) and fix the cron entry to add a timeout. Wrapping commands in timeout 60 prevents indefinite hangs:

*/5 * * * * /usr/bin/timeout 60 /usr/local/bin/myscript.sh
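
When the limit is hit, timeout kills the command and exits with status 124, so a killed job is easy to spot in cron mail or logs:

```shell
timeout 1 sleep 5
echo $?
# 124
```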

Case 5: Cryptominer or malware

If the top process is something you don't recognize — random alphanumeric name, running from /tmp or /dev/shm, or with deceptive names like kworkerd or kdevtmpfsi — you've likely been compromised. Common attack vectors: weak SSH password, exposed Redis/Mongo with no auth, vulnerable web app.

Don't just kill the process — it'll come back. Steps:

  1. Identify the process binary: ls -la /proc/<PID>/exe shows the file path.
  2. Check what's keeping it alive: cron entries (crontab -l for every user), systemd services, ~/.bashrc, ~/.profile, suspicious entries in /etc/rc.local.
  3. If you can't be sure you've cleaned it up — and usually you can't — rebuild the VPS from scratch. Restore data from backup, change all passwords and SSH keys.
  4. Audit what got in: SSH logs (/var/log/auth.log), web server logs, app logs. Plug the hole.
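
For step 4, a common sketch for spotting SSH brute-force sources (assumes the default Debian/Ubuntu sshd log format in /var/log/auth.log):

```shell
# Count failed SSH logins per source IP, most active first
sudo grep 'Failed password' /var/log/auth.log \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head
```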

Compromise is grim but recoverable. Don't trust a cleaned VPS — rebuild. Recovery guide if you're locked out →

Case 6: Noisy neighbor (shared CPU only)

If st (steal) in the %Cpu(s) line is consistently above 5%, the hypervisor is giving your CPU cycles to other tenants. This only happens on shared-CPU VPS plans (often called "burstable" or "shared"). The fix isn't on your end — it's on the host's.

Workarounds:

  1. Shift heavy jobs (builds, backups, batch work) to off-peak hours, when contention is lower.
  2. Open a ticket and ask the provider to move you to a less crowded host.
  3. Move to a plan with dedicated CPU cores, where steal time is zero by design.

More on this in our dedicated vs shared CPU explainer.

Prevention

VPS plans with dedicated CPU cores

Every OliveVPS plan uses dedicated CPU cores — no steal time, no noisy neighbors, predictable performance under load. From $3.99/mo.

See VPS Plans →

FAQ

What's normal CPU usage for a VPS?

Steady-state under 50% leaves headroom for traffic spikes. Sustained over 80% means you've outgrown the plan or have a runaway process. Brief spikes to 100% during specific operations (build, backup, big query) are fine.

Why is kswapd0 using 100% CPU?

You're out of RAM. kswapd0 is the kernel's page-reclaim daemon — when memory pressure is high, it works overtime scanning for pages to evict or swap out. The fix is more RAM or fewer running processes, not killing kswapd0 (it's a kernel thread; you can't kill it, and trying wouldn't address the pressure).

Can I limit a process's CPU usage?

Yes. cpulimit caps an existing process: cpulimit -p PID -l 50 limits to 50%. For long-term control, systemd unit files support CPUQuota=50%. cgroups give you the fullest control but are complex.
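
For the systemd route, a sketch of a drop-in override (the unit name myapp.service is a placeholder for your own service):

```ini
# /etc/systemd/system/myapp.service.d/cpu.conf
[Service]
CPUQuota=50%
```

Apply it with sudo systemctl daemon-reload && sudo systemctl restart myapp.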

How do I find which user is using the most CPU?

Press 'u' in top to filter by a user, or sum %CPU per user with ps: ps aux | awk 'NR>1 {cpu[$1] += $3} END {for (u in cpu) print cpu[u], u}' | sort -rn | head. Useful when multiple users share a VPS.

Is high CPU always bad?

No. A build server should hit 100% during builds — that's it doing its job. A web server hitting 100% under traffic means you're saturating the resources you're paying for, which is fine if response times stay reasonable. High CPU is only bad when it's unexpected or causes user-visible problems.

The OliveVPS Team

We've debugged enough "my VPS is slow" tickets to know that nine times out of ten, it's one of the cases above. The trick is asking the right diagnostic question first.