Error Codes Wiki

Container Killed by cgroup Memory Limit — OOMKilled in Kubernetes and Docker


Overview

Fix containers being killed by cgroup memory limits (OOMKilled status) in Docker and Kubernetes when the application exceeds its allocated memory.

Key Details

  • cgroups v2 enforce memory limits on containers — exceeding the limit triggers OOMKilled
  • In Kubernetes: OOMKilled status (exit code 137) means the container exceeded its memory limit
  • Memory limits include the application heap, shared libraries, page cache, and OS overhead
  • Kubernetes has requests (guaranteed minimum) and limits (maximum allowed) for memory
  • Java applications are especially prone to OOMKill because JVM heap is only part of total memory usage
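The 137 exit code mentioned above can be decoded locally: it is 128 plus the signal number, and the kernel's OOM killer delivers SIGKILL (signal 9). A quick sketch of the same exit status, simulated without a container:

```shell
# A process killed by SIGKILL exits with status 128 + 9 = 137,
# which is exactly the exit code Kubernetes reports for OOMKilled.
sh -c 'kill -KILL $$' || code=$?   # simulate the OOM killer's SIGKILL
echo "exit code: $code"            # prints: exit code: 137
```

Any exit code above 128 follows the same 128 + signal pattern, which is how you can tell an OOMKill (137) apart from, say, a SIGTERM shutdown (143).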

Common Causes

  • Container memory limit set too low for the application's actual memory needs
  • Application memory leak causing gradual growth until the limit is hit
  • JVM heap (-Xmx) set correctly but native memory, thread stacks, or metaspace exceeding the container limit
  • Burst memory usage during startup or peak load exceeding the configured limit
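The JVM cause is easy to reason through with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not measurements, but they show how a heap that fits under the limit can still get the container killed:

```shell
# Rough memory accounting for a JVM container (all values assumed):
LIMIT_MIB=1024        # container memory limit
HEAP_MIB=900          # -Xmx900m: the heap alone fits under the limit
METASPACE_MIB=100     # class metadata lives in native memory, outside the heap
STACKS_MIB=100        # ~100 threads x 1 MiB default stack size
TOTAL=$((HEAP_MIB + METASPACE_MIB + STACKS_MIB))
echo "total: ${TOTAL} MiB vs limit: ${LIMIT_MIB} MiB"   # 1100 MiB > 1024 MiB -> OOMKilled
```

The cgroup accounts for all of it, so sizing only -Xmx against the limit leaves no room for the native pieces.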

Steps

  1. Check OOMKill events: 'kubectl describe pod pod-name' — look for 'OOMKilled' in the container status
  2. Monitor actual memory usage: 'kubectl top pod pod-name' or check Grafana/Prometheus container memory metrics
  3. Increase the memory limit if the application genuinely needs more: update resources.limits.memory in the deployment spec
  4. For Java: set the JVM flag -XX:MaxRAMPercentage=75 to cap the heap at 75% of container memory, leaving 25% for overhead
  5. Fix memory leaks: profile the application with language-specific tools (jmap for Java, Valgrind for C, memory_profiler for Python)
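Steps 3 and 4 combined might look like the fragment below in a Deployment spec. The container name, image, and sizes are placeholders, not recommendations:

```yaml
# Deployment spec fragment; 'my-app' and the memory sizes are placeholders.
containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-XX:MaxRAMPercentage=75"   # heap capped at 75% of the container limit
    resources:
      requests:
        memory: "768Mi"   # guaranteed minimum, reserved on the node
      limits:
        memory: "1Gi"     # maximum allowed; exceeding it triggers OOMKill
```

Using MaxRAMPercentage instead of a fixed -Xmx keeps the heap proportional if the limit is changed later.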

Tags

cgroup, oomkilled, kubernetes, memory-limit, container


Frequently Asked Questions

What is the difference between memory requests and limits?

Requests are the guaranteed minimum — Kubernetes reserves this amount on the node when scheduling the pod. Limits are the maximum — exceeding them triggers OOMKill. Set requests to typical usage and limits to peak usage plus a buffer.