Error Codes Wiki

Linux Disk I/O Performance Errors — High iowait, Slow Disk, and Diagnostics


About Linux Disk I/O Performance Errors

High iowait, slow disk operations, and I/O bottlenecks are among the most common Linux performance problems. This guide explains the key concepts behind disk I/O performance, lists the usual causes, and walks through a step-by-step diagnostic procedure, from identifying I/O-heavy processes to tuning the I/O scheduler.

Quick Answer

What is a good iowait percentage?

Below 10% is generally acceptable. Sustained iowait above 20% indicates an I/O bottleneck. On SSDs, iowait should be very low; sustained high iowait on an SSD usually points to a software problem, such as a misconfigured I/O scheduler or heavy synchronous writes, rather than the disk itself.
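To see where these percentages come from: tools like top compute %wa from the cumulative CPU-time counters in /proc/stat, as the iowait delta divided by the total delta between two samples. A minimal sketch, using two hypothetical /proc/stat samples so the arithmetic is reproducible (on a live system you would read the real "cpu" line twice, about a second apart):

```shell
# Two hypothetical samples of the aggregate "cpu" line from /proc/stat.
# Fields after "cpu": user nice system idle iowait irq softirq steal
s1='cpu 1000 0 500 8000 300 0 0 0'
s2='cpu 1010 0 505 8050 335 0 0 0'
pct=$(awk -v a="$s1" -v b="$s2" 'BEGIN {
  split(a, x); split(b, y)
  total = 0
  for (i = 2; i <= 9; i++) total += y[i] - x[i]   # all jiffies spent in the interval
  printf "%.1f", 100 * (y[6] - x[6]) / total       # field 6 is the iowait counter
}')
echo "iowait: ${pct}%"
```

With these sample counters the result is 35.0%, well past the 20% bottleneck threshold above.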

Overview

Diagnose and fix Linux disk I/O performance issues including high iowait, slow disk operations, I/O scheduler tuning, and identifying I/O bottlenecks.

Key Details

  • High iowait in top/htop means the CPU is idle waiting for disk I/O to complete
  • iostat shows per-disk read/write rates, queue depth, and latency metrics
  • I/O scheduler affects performance: none/noop for SSDs, mq-deadline for HDDs
  • iotop shows per-process I/O usage (like top for disk instead of CPU)
  • NVMe SSDs handle many thousands of IOPS; HDDs are limited to ~100-200 IOPS
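A quick way to act on the iostat metrics above is to flag devices whose %util is near saturation. This sketch uses canned sample output (column layout differs between sysstat versions, which is why it locates %util by header name rather than by position); on a real system you would pipe `iostat -x` in directly:

```shell
# Illustrative `iostat -x`-style sample; real output has more columns.
sample='Device r/s w/s rkB/s wkB/s await %util
sda 5.0 120.0 80.0 4800.0 35.2 97.5
nvme0n1 250.0 300.0 9000.0 12000.0 0.4 12.1'
busy=$(printf '%s\n' "$sample" | awk '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == "%util") u = i; next }
  $u + 0 > 80 { print $1 }')           # devices more than 80% busy
echo "saturated: $busy"
```

Here the spinning disk sda is saturated at 97.5% while the NVMe device is nearly idle, matching the IOPS gap described above.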

Common Causes

  • Application generating excessive random I/O on a spinning hard drive
  • Multiple processes competing for disk I/O (database + logging + backups simultaneously)
  • Wrong I/O scheduler for the disk type (e.g., the legacy CFQ on an SSD, or mq-deadline instead of none on NVMe)
  • Filesystem nearly full causing fragmentation and slow allocation
  • Hardware degradation: failing disk, bad SATA cable, or controller issue
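The "filesystem nearly full" cause is the easiest to rule out. A small sketch that flags any filesystem at or above 90% capacity, shown here against canned `df -P` output so the result is reproducible (on a real system, replace the sample with `df -P` itself):

```shell
# Illustrative POSIX-format df output; values are made up for the example.
sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 102400000 97280000 5120000 95% /
/dev/sdb1 512000000 51200000 460800000 10% /data'
full=$(printf '%s\n' "$sample" | awk 'NR > 1 && $5 + 0 >= 90 { print $6, "(" $5 ")" }')
echo "nearly full: $full"
```

Filesystems running this close to full fragment new allocations and can slow writes noticeably, especially on ext4 and XFS.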

Steps

  1. Check iowait: top (look for %wa in the CPU line) or vmstat 1 (wa column)
  2. Identify I/O-heavy processes: sudo iotop -oP to show only processes doing I/O
  3. Check disk stats: iostat -xz 1 to see utilization, queue depth, and latency per disk
  4. Change the I/O scheduler (for SSDs): echo none | sudo tee /sys/block/sda/queue/scheduler — note that sudo echo none > … fails, because the redirect runs in your unprivileged shell
  5. Check disk health: sudo smartctl -a /dev/sda (look for Reallocated Sectors, Pending Sectors)
  6. Reduce I/O contention: schedule backups and heavy operations during off-peak hours
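Step 4 trips many people up, so here is a sketch of the scheduler change using a temp file to stand in for /sys/block/sda/queue/scheduler (writing the real file needs root, and the kernel marks the active scheduler with brackets):

```shell
# Temp file simulating /sys/block/sda/queue/scheduler for a safe demo.
sched=$(mktemp)
echo '[mq-deadline] kyber bfq none' > "$sched"   # bracketed entry = active scheduler
cur=$(grep -o '\[[a-z-]*\]' "$sched")
echo "active scheduler: $cur"
# `sudo echo none > file` does NOT work: the redirect is performed by your
# unprivileged shell. Pipe through tee instead (sudo tee on a real system):
echo none | tee "$sched" > /dev/null
after=$(cat "$sched")
echo "now: $after"
rm -f "$sched"
```

On a real system the write is `echo none | sudo tee /sys/block/sda/queue/scheduler`, and the change lasts only until reboot unless made persistent via a udev rule or kernel boot parameter.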

Tags

disk-io, iowait, iostat, iotop, performance

