Error Codes Wiki

Linux Disk I/O Performance Errors — High iowait, Slow Disk, and Diagnostics

Overview

Diagnose and fix Linux disk I/O performance issues including high iowait, slow disk operations, I/O scheduler tuning, and identifying I/O bottlenecks.

Key Details

  • High iowait in top/htop means the CPU is idle waiting for disk I/O to complete
  • iostat shows per-disk read/write rates, queue depth, and latency metrics
  • I/O scheduler affects performance: none/noop for SSDs, mq-deadline for HDDs
  • iotop shows per-process I/O usage (like top for disk instead of CPU)
  • NVMe SSDs handle many thousands of IOPS; HDDs are limited to ~100-200 IOPS
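The iowait figure that top reports can also be computed directly from /proc/stat, whose "cpu" line carries cumulative tick counters (field 6 is iowait). A minimal sketch, sampling twice one second apart; the percentage is approximate because it ignores the irq/softirq/steal fields:

```shell
# Compute the system-wide iowait percentage over a one-second interval
# from /proc/stat. Field 6 of the "cpu" line is the cumulative iowait
# tick counter; the other fields read here are user/nice/system/idle.
read -r _ u1 n1 s1 i1 w1 rest < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 rest < /proc/stat
total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
[ "$total" -gt 0 ] || total=1   # guard against a zero-tick interval
iowait=$(( 100 * (w2 - w1) / total ))
echo "iowait over the last second: ${iowait}%"
```

This matches the %wa column in top/vmstat closely enough to script alerts around it.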

Common Causes

  • Application generating excessive random I/O on a spinning hard drive
  • Multiple processes competing for disk I/O (database + logging + backups simultaneously)
  • Wrong I/O scheduler for the disk type (e.g. BFQ or the legacy CFQ on an SSD, or mq-deadline instead of none on a fast NVMe drive)
  • Filesystem nearly full causing fragmentation and slow allocation
  • Hardware degradation: failing disk, bad SATA cable, or controller issue
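Competing workloads like the database-plus-backup case above usually show up as a device pinned near 100% in the %util column of iostat -xz 1. A sketch of flagging saturated disks with awk; the sample output is hardcoded here for illustration, but in practice you would pipe real iostat output into the same awk program:

```shell
# Print any sd*/nvme*/vd* device whose %util (last column of
# `iostat -x` output) exceeds 80. Sample output is inlined below;
# with a live system, use: iostat -xz 1 3 | awk '...'
busy=$(awk '$1 ~ /^(sd|nvme|vd)/ && $NF + 0 > 80 { print $1, $NF }' <<'EOF'
Device   r/s    w/s    rkB/s   wkB/s   await  aqu-sz  %util
sda      5.0    210.3  40.0    8412.0  35.2   7.9     96.4
nvme0n1  120.0  80.0   4800.0  3200.0  0.4    0.1     12.5
EOF
)
echo "$busy"
```

Here only sda is reported, which fits the spinning-disk-plus-random-I/O pattern: high await and queue depth alongside high utilization.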

Steps

  1. Check iowait: top (look for %wa in the CPU line) or vmstat 1 (wa column)
  2. Identify I/O-heavy processes: sudo iotop -oP to show only processes doing I/O
  3. Check disk stats: iostat -xz 1 to see utilization, queue depth, and latency per disk
  4. Change I/O scheduler: echo none | sudo tee /sys/block/sda/queue/scheduler (for SSDs)
  5. Check disk health: sudo smartctl -a /dev/sda (look for Reallocated_Sector_Ct and Current_Pending_Sector)
  6. Reduce contention: schedule backups and other heavy I/O during off-peak hours
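Before changing the scheduler in step 4, it helps to see what each device is currently using. The sysfs file lists every scheduler the kernel offers for that device, with the active one in brackets. A small sketch that prints this for all block devices:

```shell
# Show available and active I/O schedulers per block device.
# The active scheduler appears in [brackets], e.g. "[mq-deadline] kyber none".
# Devices without a scheduler file (or unreadable ones) are skipped.
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    dev=${f#/sys/block/}     # strip the /sys/block/ prefix
    dev=${dev%%/*}           # keep only the device name
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
```

Run it before and after step 4 to confirm the change took effect; the setting does not persist across reboots unless applied via a udev rule or boot parameter.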

Tags

disk-io, iowait, iostat, iotop, performance


Frequently Asked Questions

How much iowait is too much?

Below 10% is generally acceptable; sustained iowait above 20% indicates an I/O bottleneck. On SSDs, iowait should be very low, so high iowait on an SSD points to a software problem rather than the drive itself.