JavaScript that you `require()`/`import` in Node.js runs on one main thread (the event-loop thread).
But Node itself is not strictly single-threaded—there are other threads working behind the scenes, and you can spin up additional ones when you need to.
| Layer | Threads used | What actually runs there |
|---|---|---|
| Event loop (libuv) | 1 | Executes your JS callbacks, timers, Promise jobs, and network I/O completion callbacks. |
| libuv thread pool | 4 by default (`UV_THREADPOOL_SIZE`, up to 128) | Blocking or CPU-bound work implemented in C/C++ (core modules and native addons): file I/O, `dns.lookup()`, crypto, zlib, etc. |
| Worker Threads (`require('worker_threads')`) | N (you create them) | A full, isolated V8 instance per worker; data is passed via structured clone + `MessagePort`. |
| Cluster / `child_process` | N (you fork them) | Independent OS processes that share the same server port via the cluster scheduler. |
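The thread-pool size is fixed once the pool is first used, so it has to be set in the environment before Node starts, e.g. `UV_THREADPOOL_SIZE=16 node app.js` (`app.js` is a placeholder name; values are capped at 128).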
```js
// main.js
import { threadId } from 'node:worker_threads'; // 0 on the main thread

setTimeout(() => console.log('event-loop thread id:', threadId), 0);
```
All the JavaScript you write here runs on a single OS thread.
That design keeps concurrency simple—no data races, no locks—at the cost of blocking if a callback takes too long.
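A quick way to feel that cost is the sketch below (file name and timings are just placeholders): a synchronous busy-wait holds the thread for two seconds, and even an already-scheduled 10 ms timer cannot fire until the loop returns.

```js
// block.js: a synchronous busy-wait starves the event loop
setTimeout(() => console.log('timer fired'), 10); // wants to run after ~10 ms

const start = Date.now();
while (Date.now() - start < 2000) {
  // nothing else (timers, I/O callbacks, new connections) can run during these 2 s
}
console.log('loop released the thread after', Date.now() - start, 'ms');
// only now does 'timer fired' print
```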
Long-running C++ operations are off-loaded to a worker pool managed by libuv, so your event loop can keep accepting connections while, say, `fs.readFile()` waits on disk.
```js
import fs from 'node:fs';

// The disk read is blocking in concept, but libuv runs it on a pool thread
fs.readFile('big.bin', (err, data) => { /* callback runs back on the main thread */ });
```
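The same pool backs other blocking core APIs, which is where the default size of 4 becomes visible. As a rough sketch (the iteration count and inputs are arbitrary), queuing six `crypto.pbkdf2()` jobs lets roughly four finish in the first wave while the rest wait for a free pool thread:

```js
import { pbkdf2 } from 'node:crypto';

// With the default UV_THREADPOOL_SIZE=4, about four of these callbacks
// arrive together; the remaining two wait for a pool thread to free up.
for (let i = 0; i < 6; i++) {
  const start = Date.now();
  pbkdf2('secret', 'salt', 500_000, 64, 'sha512', () => {
    console.log(`pbkdf2 #${i} done after ${Date.now() - start} ms`);
  });
}
```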
```js
import { Worker } from 'node:worker_threads';

new Worker('./cpuTask.js', { workerData: 42 })
  .on('message', (msg) => console.log('done:', msg));
```
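The worker file is an ordinary module with its own thread id; a minimal `cpuTask.js` matching the snippet above might look like this (the Fibonacci call is just a stand-in for any CPU-heavy job):

```js
// cpuTask.js: runs on its own thread, with its own V8 heap and event loop
import { parentPort, workerData, threadId } from 'node:worker_threads';

// Deliberately slow recursive Fibonacci as a stand-in CPU-bound task
const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));

console.log('worker thread id:', threadId);  // non-zero, unlike the main thread
parentPort.postMessage(fib(workerData));     // workerData === 42 from main.js
```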
A Worker is lighter-weight than a `child_process` and can share memory via `SharedArrayBuffer` if needed.
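As a sketch of that shared-memory path (file names are illustrative), the main thread can hand a `SharedArrayBuffer` to a worker through `workerData`; both sides then see the same bytes without copying:

```js
import { Worker } from 'node:worker_threads';

const shared = new SharedArrayBuffer(4);   // room for one Int32
const counter = new Int32Array(shared);

// The buffer is shared, not cloned, when it crosses the thread boundary
new Worker('./increment.js', { workerData: shared })
  .on('exit', () => console.log('counter:', Atomics.load(counter, 0)));
```

Inside `increment.js` the worker would simply do `Atomics.add(new Int32Array(workerData), 0, 1)` before exiting, and the main thread reads the updated value.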
For stateless web servers it is often easier to fork multiple full Node processes, one per CPU core, and let the OS schedule them.
```bash
node --conditions=production server.js   # with cluster inside
```
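A minimal sketch of what such a `server.js` could contain, using the built-in `cluster` module (the port and response body are placeholders):

```js
// server.js: the primary forks one worker per CPU core; workers share port 3000
import cluster from 'node:cluster';
import { cpus } from 'node:os';
import http from 'node:http';

if (cluster.isPrimary) {
  for (let i = 0; i < cpus().length; i++) cluster.fork();
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  http.createServer((req, res) => res.end(`handled by pid ${process.pid}\n`))
      .listen(3000);
}
```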