Your service died and the only message in the logs is "Killed." Your dmesg output is full of "Out of memory: Killed process X" lines. This is the OOM (out-of-memory) killer at work — Linux's last-resort mechanism for keeping the system alive when memory runs out. Fixing it permanently means understanding why memory pressure happened, not just adding more swap. This guide walks through both immediate triage and root-cause fixes.


TL;DR: Check dmesg | grep -i oom to confirm OOM. Run free -h to see current pressure. Find the memory hog with ps aux --sort=-rss | head. The fix is usually one of: (1) restart a leaky process, (2) lower the app's memory config, (3) add swap as a cushion, or (4) upgrade your VPS. Adding swap alone treats the symptom — find the cause.

What we'll cover

  1. What the OOM killer actually does
  2. Confirming OOM happened
  3. Diagnosing current memory state
  4. Finding the memory hog
  5. Common causes (and fixes)
  6. Swap: when it helps, when it doesn't
  7. Tuning OOM behavior
  8. Prevention
  9. FAQ

What the OOM killer actually does

When Linux runs out of physical memory and swap, it has two choices: crash the entire system, or pick a process and kill it. The OOM killer is the latter — a kernel routine that scores running processes (based on memory use, niceness, runtime, and a few other factors) and kills the highest-scoring one to reclaim memory.
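
You can check the score the killer would assign to any running process (the PID here is an example):

cat /proc/1847/oom_score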

This is desperate behavior. The kernel only does it when allocation requests genuinely cannot be satisfied. The killed process doesn't get to clean up — it's just gone, mid-operation. That's why OOM kills often leave services in weird states (corrupted SQLite databases, half-written files, hung connections).

Confirming OOM happened

Check the kernel log for OOM events:

dmesg -T | grep -i 'killed process\|out of memory'

# or check the journal
sudo journalctl --since '1 hour ago' | grep -i oom

You're looking for lines like:

[Sat May  3 09:42:01 2026] Out of memory: Killed process 1847 (node) total-vm:1234567kB, anon-rss:987654kB

If you see these, OOM definitely happened; the process name and the memory figures tell you what got killed and how much it was holding. If a process disappeared but there are no OOM messages, consider a manual kill, an application crash, or OOM lines that have already rotated out of the kernel ring buffer (widen the journalctl time range to check).

Diagnosing current memory state

free -h

Output looks like:

               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       2.9Gi       180Mi        45Mi       720Mi       650Mi
Swap:          2.0Gi       1.4Gi       600Mi

What to look at:

  1. available, not free: this column estimates what the kernel can actually hand out after reclaiming cache. 650Mi available on a 3.8Gi box is tight.
  2. Swap used: 1.4Gi of 2.0Gi in use means the system has been under sustained pressure, not just a brief spike.
  3. buff/cache: this memory is reclaimable, so a low "free" number by itself is not alarming.

Also useful:

# Real-time monitor
vmstat 1 5
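# watch the si/so columns: nonzero values mean the box is actively swapping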

# Per-process memory in human-readable form
sudo apt install smem -y
smem -tk -s rss

Finding the memory hog

# Top 10 by resident memory
ps aux --sort=-%mem | head -11

# Same but human-friendly columns
ps -eo pid,user,rss,vsz,comm --sort=-rss | head

RSS (resident set size) is what's actually in physical memory. VSZ (virtual size) includes mapped files and unallocated address space — usually much larger than real memory use. Trust RSS for "how much is this process actually consuming."

If a single process is using 80%+ of available memory, that's your hog. If memory is spread across many processes and total is creeping up over time, that's a different problem (memory leak in something long-running, or you genuinely need more RAM).
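
To tell a leak from steady-state pressure, log the suspect's RSS over time and see whether it climbs (the PID is an example):

# append a timestamped RSS sample (in KB) every 60 seconds
while sleep 60; do echo "$(date '+%F %T') $(ps -o rss= -p 1847)"; done >> rss.log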

Common causes (and fixes)

1. Memory leak in a long-running process

Symptom: process RSS grows steadily over hours or days; a restart fixes it temporarily. Common in Node.js apps with event listeners that are added but never removed, Python apps holding references in caches that never expire, and PHP-FPM with leaky extensions.

Short-term fix: restart the process on a schedule (cron) or use a process manager that auto-restarts on memory threshold (PM2's max_memory_restart, systemd's MemoryMax=). Long-term: profile the app and fix the leak. Tools: valgrind (C/C++), memray (Python), Node's --inspect with Chrome DevTools.
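
For instance, a systemd stopgap (the unit name and limit are illustrative):

# In /etc/systemd/system/myapp.service:
# [Service]
# MemoryMax=512M
# Restart=on-failure

When the service crosses MemoryMax, the kernel kills it and systemd brings it back — a way to buy time while you profile, not a fix.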

2. Database not configured for the VPS size

PostgreSQL, MySQL, and MongoDB commonly end up with memory settings sized for a much bigger box, whether from host-based autosizing (MongoDB's WiredTiger cache defaults to roughly half of RAM) or from configs copied off tuning guides. On a 2GB VPS, an innodb_buffer_pool_size of several gigabytes will starve everything else by itself. Tune the database to your VPS:
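
As a rough starting point on a 2GB VPS (values are illustrative; tune for your workload):

# In /etc/mysql/mysql.conf.d/mysqld.cnf:
# [mysqld]
# innodb_buffer_pool_size = 512M

# In postgresql.conf:
# shared_buffers = 256MB
# work_mem = 8MB

Restart the database after changing these.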

3. Too many worker/child processes

Apache prefork, PHP-FPM, gunicorn — all spawn worker processes, and default counts are often too high for a small VPS. Each worker uses memory; multiply by worker count to see the real footprint. Lower the count until the total fits comfortably with headroom.
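
To measure the real footprint, sum the workers' RSS — for example with PHP-FPM (the process name varies by distro and version):

ps -C php-fpm8.2 -o rss= | awk '{sum += $1} END {printf "%d workers, %.0f MB total\n", NR, sum/1024}'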

4. JVM heap not bounded

By default, the JVM sizes its maximum heap from detected host memory (typically about a quarter of RAM), a calculation that ignores everything else running on the box. Always set -Xmx explicitly to a value that fits your VPS minus other services, e.g. -Xmx1g on a 2GB VPS that runs other things.
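
A minimal example (app.jar is a placeholder):

java -Xms256m -Xmx1g -jar app.jar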

5. Container limits not set

Running Docker without memory limits means a single container can OOM the entire host. Always set --memory in docker run or mem_limit in docker-compose. Same for K8s pods (resources.limits.memory).
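
For example (the image name is a placeholder; setting --memory-swap equal to --memory also disables swap for the container):

docker run -d --memory=512m --memory-swap=512m myimage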

Swap: when it helps, when it doesn't

Swap is disk used as overflow memory. It does help in two specific cases:

  1. Brief memory spikes that exceed RAM by a small amount — swap absorbs them without OOM.
  2. Long-tail allocations that the kernel knows are cold — swapping these out makes room for hot data.

It does not help when:

  1. Memory pressure is sustained: the box thrashes on disk, everything slows to a crawl, and OOM still happens, just later.
  2. A process is leaking: swap only buys time until the leak fills it too.

Add 1-2GB of swap on a small VPS as a cushion, but do not rely on it. To add swap:

sudo fallocate -l 2G /swapfile
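# if swapon later complains the file has holes (happens on some filesystems),
# recreate it with: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048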
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
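# confirm it's active
swapon --show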

Tune swappiness lower so the kernel prefers RAM:

echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Tuning OOM behavior

You can influence which process the OOM killer picks. Each process has an oom_score_adj (range -1000 to 1000); lower = less likely to be killed. Protect critical services:

# For a running PID — score range -1000 to 1000
# (pidof -s returns a single PID; postgres spawns several processes)
echo -500 | sudo tee /proc/$(pidof -s postgres)/oom_score_adj

# Persistent via systemd unit
# In /etc/systemd/system/myservice.service:
# [Service]
# OOMScoreAdjust=-500

Be careful — protecting one process means another gets killed instead. Prefer fixing the underlying memory issue.

Prevention

Most OOM incidents are avoidable. A short checklist, drawn from the fixes above:

  1. Set a memory limit on every long-running service (systemd MemoryMax=, Docker --memory).
  2. Tune database and worker-pool configs to your VPS size instead of trusting defaults.
  3. Bound runtime heaps explicitly (-Xmx for Java, --max-old-space-size for Node).
  4. Keep 1-2GB of swap as a cushion, with vm.swappiness lowered.
  5. Alert when available memory stays below ~15% so you can act before the OOM killer does.

Need more RAM? Pro plan at $7.99/mo

4GB RAM, 2 dedicated vCPU, 80GB NVMe — comfortable for self-hosted stacks, small databases, multiple services. Free upgrade migration from Starter.

See VPS Plans →

FAQ

Why did my Node.js process get OOM-killed?

Most likely a memory leak (event listeners, unclosed streams, unbounded caches). Set --max-old-space-size=<MB> to bound V8's heap. Use a process manager (PM2, systemd) to restart on memory threshold while you find the leak.
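
For example (server.js and the 1GB cap are placeholders):

node --max-old-space-size=1024 server.js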

Is it safe to disable the OOM killer?

No. With it disabled, an out-of-memory situation crashes the entire kernel. The OOM killer is a safety net — fix what's eating memory, don't disable the safety.

How much swap should I have?

On a VPS, 1-2GB regardless of RAM size. Swap on a VPS is on the same disk as your data, so heavy swap use destroys disk performance. The classic 'swap = 2× RAM' rule is for desktops/laptops with hibernation, not servers.

Can I get an alert before OOM happens?

Yes. Most monitoring tools (Netdata, Prometheus node_exporter, Datadog) export memory metrics. Alert when available memory drops below 15% for 10+ minutes — that gives you time to investigate before OOM.
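
With Prometheus and node_exporter, for instance, the condition is a single expression (assuming the standard metric names):

# PromQL: available memory under 15% of total
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.15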

Does adding swap fix OOM?

Sometimes — if your overflow is small and brief. For sustained pressure or memory leaks, swap delays OOM but slows everything down. The real fix is more RAM or fixing the leak.

The OliveVPS Team

Memory pressure is one of the more frustrating VPS issues — symptoms are subtle until OOM strikes. We've found the prevention checklist above catches most of it before it bites.