Container Killed by cgroup Memory Limit — OOMKilled in Kubernetes and Docker
About Container Killed by cgroup Memory Limit
Containers that exceed their cgroup memory limit are killed by the kernel's OOM killer and reported as OOMKilled (exit code 137) in Docker and Kubernetes. This guide explains why it happens, how to diagnose it, and how to fix it, from right-sizing memory requests and limits to tuning JVM heap settings and tracking down leaks.
This article is part of our Linux Error Codes collection on Error Codes Wiki.
Quick Answer
What is the difference between memory requests and limits?
Requests are the guaranteed minimum — Kubernetes reserves this amount on the node. Limits are the maximum — exceeding them triggers OOMKill. Set requests to typical usage and limits to peak usage plus a buffer.
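For illustration, here is what that rule looks like in a pod spec. The pod name, image, and the specific sizes below are placeholder assumptions, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app           # placeholder name
spec:
  containers:
    - name: example-app
      image: example/app:1.0  # placeholder image
      resources:
        requests:
          memory: "512Mi"     # typical steady-state usage: what the scheduler reserves
        limits:
          memory: "768Mi"     # peak usage plus a buffer: exceeding this triggers OOMKill
```

With these values the scheduler places the pod only on a node with 512Mi spare, while the cgroup limit allows bursts up to 768Mi before the OOM killer steps in.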
Overview
Fix containers being killed by cgroup memory limits (OOMKilled status) in Docker and Kubernetes when the application exceeds its allocated memory.
Key Details
- The cgroup memory controller (v1 and v2) enforces memory limits on containers; exceeding the limit invokes the kernel OOM killer, which terminates the container (OOMKilled)
- In Kubernetes: OOMKilled status (exit code 137, i.e. 128 + signal 9, SIGKILL) means the container exceeded its memory limit
- Memory limits include the application heap, shared libraries, page cache, and OS overhead
- Kubernetes has requests (guaranteed minimum) and limits (maximum allowed) for memory
- Java applications are especially prone to OOMKill because JVM heap is only part of total memory usage
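To see which limit a container is actually running under, an application can read it from the cgroup filesystem. A minimal Python sketch, assuming the standard cgroup v2 mount at /sys/fs/cgroup with a cgroup v1 fallback path (mount points can differ between distributions and runtimes):

```python
from pathlib import Path
from typing import Optional

# Assumed paths: cgroup v2 unified hierarchy, with a cgroup v1 fallback.
CGROUP_V2_LIMIT = Path("/sys/fs/cgroup/memory.max")
CGROUP_V1_LIMIT = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")

def parse_limit(raw: str) -> Optional[int]:
    """Return the limit in bytes, or None for 'unlimited' ('max' in cgroup v2)."""
    raw = raw.strip()
    if raw == "max":
        return None
    return int(raw)

def container_memory_limit() -> Optional[int]:
    """Read this process's memory limit from the cgroup filesystem, if visible."""
    for path in (CGROUP_V2_LIMIT, CGROUP_V1_LIMIT):
        if path.exists():
            return parse_limit(path.read_text())
    return None  # no cgroup memory controller visible
```

Comparing this value against the process's actual resident set is a quick way to check how close a Java service's total footprint (heap plus metaspace, stacks, and native allocations) is to being killed.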
Common Causes
- Container memory limit set too low for the application's actual memory needs
- Application memory leak causing gradual growth until the limit is hit
- JVM heap (-Xmx) sized within the limit, but native memory, thread stacks, or metaspace pushing total usage over the container limit
- Burst memory usage during startup or peak load exceeding the configured limit
Steps
1. Check OOMKill events: run 'kubectl describe pod pod-name' and look for 'OOMKilled' in the container status
2. Monitor actual memory usage: 'kubectl top pod pod-name', or check container memory metrics in Grafana/Prometheus
3. Increase the memory limit if the application genuinely needs more: update resources.limits.memory in the deployment spec
4. For Java: set the JVM flag -XX:MaxRAMPercentage=75 so the heap uses at most 75% of container memory, leaving 25% for overhead
5. Fix memory leaks: profile the application with language-specific tools (jmap for Java, valgrind for C, memory_profiler for Python)
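Steps 3 and 4 combine naturally in the container spec. A sketch assuming a Java service (the name, image, and sizes are placeholders; -XX:MaxRAMPercentage is available in JDK 10+ and was backported to 8u191):

```yaml
# Fragment of a Deployment's pod template (spec.template.spec)
containers:
  - name: java-app                      # placeholder name
    image: example/java-app:1.0         # placeholder image
    env:
      - name: JAVA_TOOL_OPTIONS         # picked up automatically by the JVM
        value: "-XX:MaxRAMPercentage=75"  # heap capped at 75% of the container limit
    resources:
      requests:
        memory: "1Gi"
      limits:
        memory: "1536Mi"  # heap gets ~1152Mi; the rest covers metaspace, stacks, native memory
```

Using a percentage instead of a fixed -Xmx means the heap cap scales automatically if the container limit is raised later, avoiding a mismatch between the two settings.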