Kubernetes CrashLoopBackOff — Pod Restart Loop and Container Crash Debugging
This article is part of our Linux Error Codes collection on Error Codes Wiki.
Quick Answer
What is the difference between CrashLoopBackOff and Error?
Error means the container exited with a non-zero code once. CrashLoopBackOff means it has crashed multiple times and Kubernetes is applying exponential backoff between restarts.
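The distinction is visible in the pod's container status. Here is a minimal sketch in Python that inspects a `containerStatuses[]` entry from `kubectl get pod [pod-name] -o json` (the field paths follow the Kubernetes pod-status schema; the sample data is invented for illustration):

```python
def classify(container_status: dict) -> str:
    """Classify a containerStatuses[] entry from `kubectl get pod -o json`."""
    waiting = container_status.get("state", {}).get("waiting", {})
    if waiting.get("reason") == "CrashLoopBackOff":
        return "CrashLoopBackOff (repeated crashes, backoff in effect)"
    terminated = container_status.get("lastState", {}).get("terminated", {})
    if terminated.get("exitCode", 0) != 0:
        return "Error (single non-zero exit)"
    return "no crash recorded"

# Invented sample: a container that has crashed repeatedly.
status = {
    "restartCount": 5,
    "state": {"waiting": {"reason": "CrashLoopBackOff"}},
    "lastState": {"terminated": {"exitCode": 1, "reason": "Error"}},
}
print(classify(status))  # CrashLoopBackOff (repeated crashes, backoff in effect)
```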
Overview
Fix Kubernetes CrashLoopBackOff status when pods repeatedly crash and restart, caused by application errors, misconfigured probes, or resource limits.
Key Details
- CrashLoopBackOff means the container starts, crashes, and Kubernetes keeps restarting it with exponential backoff
- The backoff delay roughly doubles after each crash, from 10 seconds up to a cap of 5 minutes, and resets once the container runs successfully for 10 minutes
- Container logs from previous runs are critical for diagnosing the crash cause
- Liveness probe failures can kill healthy containers if the probe is misconfigured
- OOMKilled status indicates the container exceeded its memory limit
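The backoff schedule above can be sketched as follows. This is an approximation of kubelet's behavior, not its exact implementation: the delay doubles from a 10-second base and is capped at 5 minutes.

```python
def backoff_delays(restarts: int, base: int = 10, cap: int = 300) -> list[int]:
    """Approximate CrashLoopBackOff delay (seconds) before each restart:
    doubles from `base`, capped at `cap` (5 minutes)."""
    return [min(base * 2 ** i, cap) for i in range(restarts)]

print(backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

After a handful of crashes the pod spends most of its time waiting, which is why a crash-looping pod can sit in CrashLoopBackOff for minutes between attempts.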
Common Causes
- Application crashing on startup due to missing configuration, environment variables, or dependencies
- Container exceeding memory limits and being OOMKilled by Kubernetes
- Liveness probe failing because the application is slow to start or the probe endpoint is wrong
- Image pull errors causing the container to start with the wrong or missing image
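These causes can be triaged heuristically from the container status and pod events. A sketch, assuming status data shaped like the Kubernetes API's `containerStatuses[]` and event messages as plain strings (the sample messages are illustrative, not exact kubelet output):

```python
def likely_cause(container_status: dict, event_messages: list[str]) -> str:
    """Heuristic triage of a crash-looping container. Checks, in order:
    OOM kill, failing liveness probe, image pull trouble, generic app crash."""
    term = container_status.get("lastState", {}).get("terminated", {})
    if term.get("reason") == "OOMKilled" or term.get("exitCode") == 137:
        return "memory limit exceeded (OOMKilled)"
    if any("Liveness probe failed" in m for m in event_messages):
        return "liveness probe failure"
    waiting = container_status.get("state", {}).get("waiting", {})
    if waiting.get("reason") in ("ImagePullBackOff", "ErrImagePull"):
        return "image pull error"
    if term.get("exitCode", 0) != 0:
        return "application crashed on startup"
    return "unknown"

# Invented sample: a container killed at its memory limit.
oom = {"lastState": {"terminated": {"reason": "OOMKilled", "exitCode": 137}}}
print(likely_cause(oom, []))  # memory limit exceeded (OOMKilled)
```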
Steps
1. Check pod status and events: 'kubectl describe pod [pod-name] -n [namespace]' and look at the Events section.
2. View logs from the crashed container: 'kubectl logs [pod-name] -n [namespace] --previous'.
3. Check resource limits: 'kubectl get pod [pod-name] -o yaml' and verify the memory/CPU limits are sufficient.
4. If OOMKilled: increase the memory limit in the deployment spec or fix the application's memory leak.
5. If a probe failure: increase initialDelaySeconds in the liveness probe or fix the health check endpoint.
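The last two steps amount to edits to the deployment spec. One way to apply them is a patch built as JSON and fed to 'kubectl patch deployment [name] -p "..."' (kubectl's default strategic merge patch matches list entries in `containers` by `name`). The container name, memory limit, and delay below are placeholder values, not recommendations:

```python
import json

def crashloop_patch(container: str, memory_limit: str, initial_delay_s: int) -> str:
    """Build a strategic-merge-style patch raising the memory limit
    and the liveness probe's initialDelaySeconds."""
    patch = {
        "spec": {"template": {"spec": {"containers": [{
            "name": container,
            "resources": {"limits": {"memory": memory_limit}},
            "livenessProbe": {"initialDelaySeconds": initial_delay_s},
        }]}}}
    }
    return json.dumps(patch)

print(crashloop_patch("app", "512Mi", 30))
```

After patching, watch the rollout with 'kubectl rollout status deployment [name]' and confirm the restart count stops climbing.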