
Memory Management

Status: Complete
Category: Performance
Default enforcement: Advisory
Author: PushBackLog team


Tags

  • Topic: performance, runtime
  • Skillset: backend, frontend
  • Technology: JavaScript, TypeScript, Node.js
  • Stage: execution, review, operations

Summary

Memory management in managed-runtime languages (JavaScript, TypeScript, Java, Python) is largely automatic via garbage collection, but developers must still avoid common patterns that prevent the GC from reclaiming memory: unbounded caches, retained event listeners, global state accumulation, and closures that hold references longer than intended. Memory leaks in long-running Node.js services manifest as steadily increasing heap size, eventually causing OOM crashes or degraded performance requiring restarts.


Rationale

Garbage collection doesn’t prevent leaks

A memory leak in a GC language is not an unreclaimed allocation — it is a live object the GC cannot collect because something still holds a reference to it. The developer wrote code that accidentally keeps that reference alive indefinitely. Common patterns: caching without eviction, global maps that grow without bound, event emitters that accumulate listeners, and closures in long-running operations that capture large objects.

Memory pressure degrades performance before it causes crashes

As the heap fills, the GC runs more frequently and for longer durations. This increases CPU usage, introduces latency spikes (GC pauses), and reduces throughput. A service can degrade significantly while still well below its OOM ceiling. Proactive memory profiling catches these issues before they become production incidents.
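GC pauses can also be observed from inside the process. A minimal sketch using Node's perf_hooks GC performance entries (the observeGcPauses name and logging callback are ours, not an established API):

```typescript
import { PerformanceObserver } from 'node:perf_hooks';

// Report every GC event with its pause duration; rising durations under
// load are a sign of heap pressure
function observeGcPauses(log: (ms: number) => void): PerformanceObserver {
  const obs = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.duration is the GC pause in milliseconds
      log(entry.duration);
    }
  });
  obs.observe({ entryTypes: ['gc'] });
  return obs;
}
```

Wire the callback to your metrics pipeline rather than to a logger if GC events are frequent.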


Guidance

Common leak patterns in Node.js

1. Growing global/module-level maps

// WRONG — global cache with no eviction
const requestCache = new Map<string, Response>();

function getCached(key: string): Response | undefined {
  return requestCache.get(key);
}

function setCached(key: string, value: Response): void {
  requestCache.set(key, value); // Never evicted — grows without bound
}

// CORRECT — use an LRU cache with a size cap (lru-cache v7+ uses a named export)
import { LRUCache } from 'lru-cache';

const requestCache = new LRUCache<string, Response>({
  max: 1000,         // Maximum 1000 entries
  ttl: 60_000,       // Entries expire after 60 seconds
});
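If pulling in a dependency isn't an option, a bounded cache can be hand-rolled on top of Map, which iterates keys in insertion order. A minimal sketch (the BoundedCache name is ours; no TTL support):

```typescript
// Minimal LRU built on Map's insertion order — dependency-free sketch
class BoundedCache<K, V> {
  private readonly map = new Map<K, V>();

  constructor(private readonly max: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark the entry as most recently used
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // The first key in iteration order is the least recently used
      this.map.delete(this.map.keys().next().value as K);
    }
  }
}
```

The point is the eviction path, not the data structure: any in-memory cache needs one.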

2. Event listeners not removed

// WRONG — listener added on every call, never removed
class DataProcessor extends EventEmitter {
  process(data: Buffer): void {
    someExternalStream.on('data', (chunk) => this.handleChunk(chunk));
  }
}

// CORRECT — store and remove the listener
class DataProcessor extends EventEmitter {
  private boundHandler: ((chunk: Buffer) => void) | null = null;

  start(): void {
    this.boundHandler = (chunk) => this.handleChunk(chunk);
    someExternalStream.on('data', this.boundHandler);
  }

  stop(): void {
    if (this.boundHandler) {
      someExternalStream.off('data', this.boundHandler);
      this.boundHandler = null;
    }
  }
}

The Node.js EventEmitter will warn when a single event accumulates more than maxListeners listeners (default: 10) — treat this warning as a bug.
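Listener accumulation can also be caught proactively with listenerCount(). A hedged sketch of a guard you might run in tests or staging (the assertListenerBudget name is ours):

```typescript
import { EventEmitter } from 'node:events';

// Fail fast when listeners accumulate on a hot emitter instead of
// waiting for Node's MaxListenersExceededWarning in production logs
function assertListenerBudget(emitter: EventEmitter, event: string, budget = 10): void {
  const count = emitter.listenerCount(event);
  if (count > budget) {
    throw new Error(`${count} '${event}' listeners on one emitter — probable leak`);
  }
}
```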


3. Closures retaining large objects

// WRONG — closure retains the entire large object
function setupProcessor(config: LargeConfig): () => void {
  return () => {
    console.log(config.debug.trace.id); // Only needs one field, retains everything
  };
}

// CORRECT — extract only what's needed
function setupProcessor(config: LargeConfig): () => void {
  const traceId = config.debug.trace.id; // Extract before creating closure
  return () => {
    console.log(traceId);
  };
}
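A related tool for reference hygiene: when you need to associate data with objects you don't own, a WeakMap lets the GC reclaim the entry as soon as the key object becomes unreachable, so the map itself can never pin its keys in memory. A sketch (the requestMeta/tagRequest names are illustrative):

```typescript
// Metadata keyed by request object — the entry is collected together with
// the request itself, so this map cannot grow without bound
const requestMeta = new WeakMap<object, { startedAt: number }>();

function tagRequest(req: object): void {
  requestMeta.set(req, { startedAt: Date.now() });
}

function elapsedMs(req: object): number | undefined {
  const meta = requestMeta.get(req);
  return meta ? Date.now() - meta.startedAt : undefined;
}
```

Note that a WeakMap is not a cache replacement: it is not iterable and has no size, precisely because entries may vanish at any GC.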

4. Timer callbacks holding references

// WRONG — timer prevents garbage collection of 'data'
function startTimer(data: LargeData): void {
  setInterval(() => {
    process(data); // 'data' stays in memory for the lifetime of the timer
  }, 60_000);
  // No way to stop this timer — memory leak
}

// CORRECT — store the timer ID and clear it
class Worker {
  private timer: NodeJS.Timeout | null = null;

  start(data: LargeData): void {
    this.timer = setInterval(() => process(data), 60_000);
  }

  stop(): void {
    if (this.timer) {
      clearInterval(this.timer);
      this.timer = null;
    }
  }
}
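The same pair-every-start-with-a-stop discipline can be packaged as a small factory (the startHeartbeat name is ours). Note that timer.unref() only stops the timer from keeping the process alive — it does not release the callback's closure, so clearing the timer is still required:

```typescript
// Factory that pairs every setInterval with an explicit stop()
function startHeartbeat(tick: () => void, ms: number): { stop: () => void } {
  const timer = setInterval(tick, ms);
  // unref() lets the process exit with the timer pending, but the timer
  // still retains 'tick' and everything it closes over until cleared
  timer.unref();
  return {
    stop: () => clearInterval(timer),
  };
}
```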

Profiling Node.js memory

Using --inspect and Chrome DevTools

# Start with inspector enabled
node --inspect server.js

# Open chrome://inspect in Chrome
# Take heap snapshot before load test
# Run load (using k6, autocannon, etc.)
# Take heap snapshot after load
# Compare snapshots — retained objects indicate leaks
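Snapshots can also be captured programmatically with v8.writeHeapSnapshot() (Node 11.13+), for example from an authenticated admin endpoint; the dumpHeap wrapper name is ours:

```typescript
import v8 from 'node:v8';

// Write a heap snapshot on demand and return its path; the resulting
// .heapsnapshot file can be loaded into Chrome DevTools' Memory tab
function dumpHeap(): string {
  // With no argument, writeHeapSnapshot picks a timestamped filename in cwd
  return v8.writeHeapSnapshot();
}
```

Taking a snapshot pauses the process and serializes the full heap, so guard this behind auth and avoid it on a latency-critical path.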

Using clinic.js

npm install -g clinic

# Profile heap allocations over time
clinic heapprofiler -- node server.js

# Then send some traffic and Ctrl+C — clinic generates a report

Manual heap monitoring

import v8 from 'v8';

// Log heap stats periodically to detect gradual growth
setInterval(() => {
  const heap = v8.getHeapStatistics();
  logger.info({
    heapUsedMB: Math.round(heap.used_heap_size / 1024 / 1024),
    heapTotalMB: Math.round(heap.total_heap_size / 1024 / 1024),
    externalMB: Math.round(heap.external_memory / 1024 / 1024),
  }, 'Heap stats');
}, 30_000);

A steadily rising heapUsedMB that never decreases between GC cycles is a leak.
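That "steadily rising" rule of thumb can be automated with a crude trend check on the sampled values (the makeTrendDetector name is ours; this is a heuristic, not a substitute for snapshot diffing):

```typescript
// Flags a leak-like trend after N consecutive heap-usage increases
function makeTrendDetector(threshold = 5): (heapUsed: number) => boolean {
  let last = Infinity;
  let rises = 0;
  return (heapUsed: number): boolean => {
    rises = heapUsed > last ? rises + 1 : 0;
    last = heapUsed;
    return rises >= threshold;
  };
}
```

Feed it the heapUsedMB samples from the interval above and alert (or take a snapshot) when it returns true.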


Memory management checklist

  • Caches: all in-memory caches have a size cap and/or TTL
  • Event listeners: every on() has a corresponding off() or removeListener() in cleanup paths
  • Timers: all setInterval / setTimeout references are stored and cleared when no longer needed
  • Closures: large objects captured by closures are necessary — not incidental
  • Streams: streams are closed / destroyed on error and completion paths
  • Observability: heap usage metrics are emitted and monitored in production
  • Alerts: an alert fires when heap usage exceeds a threshold (e.g., 80% of --max-old-space-size)

Container memory limits

Set Node.js heap to slightly below the container memory limit to allow OS and V8 overhead:

# Container memory limit: 512MB
ENV NODE_OPTIONS="--max-old-space-size=400"
# Leaves ~112MB for OS, V8 overhead, native buffers

Without this, V8's default heap limit is often larger than the container's memory limit, so the kernel OOM-kills the process before V8 ever reaches its own ceiling and starts collecting aggressively.
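It's worth verifying at startup that the flag actually took effect. A sketch using v8.getHeapStatistics():

```typescript
import v8 from 'node:v8';

// Log the effective V8 heap ceiling at startup so a missing or mistyped
// --max-old-space-size is caught immediately rather than at the first OOM
const limitMB = Math.round(v8.getHeapStatistics().heap_size_limit / 1024 / 1024);
console.log(`V8 heap limit: ~${limitMB} MB`);
```

With the Dockerfile above, this should report a value near 400 MB; a much larger number means NODE_OPTIONS was not picked up.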