author     Alexander Motin <[email protected]>    2020-11-24 12:26:42 -0500
committer  GitHub <[email protected]>              2020-11-24 09:26:42 -0800
commit     6f5aac3ca057731d106b801bfdf060119571393c (patch)
tree       582eda8d1a60a1c4f1dd7f09a42a35e10db72778 /man/man5
parent     f67bebbc3446d0f350bb481092519303b99ee2da (diff)
Reduce latency effects of non-interactive I/O
While investigating the influence of scrub (especially sequential scrub) on
random read latency, I've noticed that on some HDDs a single 4KB read may take
up to 4 seconds! Deeper investigation showed that many HDDs heavily prioritize
sequential reads even when those are submitted with a queue depth of 1.
This patch addresses the latency from two sides:
- by using the _min_active queue depths for non-interactive requests while
interactive request(s) are active, and for a few requests after;
- by throttling them further if no interactive requests have completed
while the configured number of non-interactive ones have (see the sketch
below).
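
To make the second point concrete, here is a minimal C sketch of a
credit-style throttle, assuming a per-queue interactive-I/O counter and a
credit that is refilled on interactive completions. The struct, the helpers
and the vq_ia_active / vq_nia_credit names mirror the patch's terminology but
are illustrative stand-ins, not the actual vdev_queue.c code:

	typedef struct vdev_queue_sketch {
		int	vq_ia_active;	/* interactive (sync/async) I/Os in flight */
		int	vq_nia_credit;	/* non-interactive I/Os still allowed */
	} vdev_queue_sketch_t;

	static int zfs_vdev_nia_credit = 5;	/* tunable, default 5 */

	/*
	 * May another non-interactive I/O (scrub, resilver, ...) be issued now?
	 * While interactive I/Os are outstanding, only a limited credit of
	 * non-interactive I/Os may be in flight, so the disk cannot keep serving
	 * sequential reads while a random read waits.
	 */
	static int
	nia_can_issue(const vdev_queue_sketch_t *vq)
	{
		if (vq->vq_ia_active == 0)
			return (1);		/* no interactive load, no throttle */
		return (vq->vq_nia_credit > 0);
	}

	/*
	 * An interactive completion refills the credit; a non-interactive
	 * completion spends it while interactive I/Os are still outstanding.
	 */
	static void
	nia_io_done(vdev_queue_sketch_t *vq, int interactive)
	{
		if (interactive) {
			vq->vq_ia_active--;
			vq->vq_nia_credit = zfs_vdev_nia_credit;
		} else if (vq->vq_ia_active > 0 && vq->vq_nia_credit > 0) {
			vq->vq_nia_credit--;
		}
	}
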
While there, I've also modified vdev_queue_class_to_issue() to give the
lowest priorities more chances to schedule at least their _min_active
requests. This should reduce starvation when several non-interactive
processes run at the same time as interactive ones, and I think it should
make it possible to set zfs_vdev_max_active as low as 1.
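
A rough model of that fairness tweak, assuming a simplified per-class table
rather than the real zio_priority_t machinery (the queue_class_t type, the
NCLASSES stand-in and the two-pass loop below are illustrative only):

	#define	NCLASSES	8	/* stand-in for ZIO_PRIORITY_NUM_QUEUEABLE */

	typedef struct queue_class {
		int	pending;	/* queued I/Os waiting to be issued */
		int	active;		/* I/Os currently in flight */
		int	min_active;	/* guaranteed share */
		int	max_active;	/* upper bound */
	} queue_class_t;

	static int
	class_to_issue_sketch(queue_class_t qc[NCLASSES])
	{
		/* Pass 1: any backlogged class below its minimum goes first. */
		for (int p = 0; p < NCLASSES; p++)
			if (qc[p].pending > 0 && qc[p].active < qc[p].min_active)
				return (p);

		/* Pass 2: otherwise fill backlogged classes up to their maximum. */
		for (int p = 0; p < NCLASSES; p++)
			if (qc[p].pending > 0 && qc[p].active < qc[p].max_active)
				return (p);

		return (-1);	/* nothing eligible to issue */
	}

The real vdev_queue_class_to_issue() presumably also remembers which class
was served last so the scan does not always favor the lowest-numbered class;
that detail is omitted here for brevity.
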
I've benchmarked this change with 4KB random reads from a ZVOL with a 16KB
block size on a newly written, non-fragmented pool. On a fragmented pool I
also saw improvements, but not as dramatic. Below are log2 histograms of the
random read latency in milliseconds for different devices (a small helper
for reading the buckets follows the data):
4 2x mirror vdevs of SATA HDD WDC WD20EFRX-68EUZN0 before:
0, 0, 2, 1, 12, 21, 19, 18, 10, 15, 17, 21
after:
0, 0, 0, 24, 101, 195, 419, 250, 47, 4, 0, 0
which means the maximum latency dropped from 2s to 500ms.
4 2x mirror vdevs of SATA HDD WDC WD80EFZX-68UW8N0 before:
0, 0, 2, 31, 38, 28, 18, 12, 17, 20, 24, 10, 3
after:
0, 0, 55, 247, 455, 470, 412, 181, 36, 0, 0, 0, 0
i.e., from 4s to 250ms.
1 SAS HDD SEAGATE ST14000NM0048 before:
0, 0, 29, 70, 107, 45, 27, 1, 0, 0, 1, 4, 19
after:
1, 29, 681, 1261, 676, 1633, 67, 1, 0, 0, 0, 0, 0
i.e., from 4s to 125ms.
1 SAS SSD SEAGATE XS3840TE70014 before (microseconds):
0, 0, 0, 0, 0, 0, 0, 0, 70, 18343, 82548, 618
after:
0, 0, 0, 0, 0, 0, 0, 0, 283, 92351, 34844, 90
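
Reading the histograms: assuming bucket i counts I/Os whose latency falls at
or below 2^i milliseconds (a convention inferred from the numbers above, e.g.
the highest non-empty bucket of the first "before" run is index 11, roughly
2048 ms, i.e. the quoted 2s), the maximum-latency figures are simply the upper
bound of the highest non-empty bucket. A small standalone helper making that
explicit:

	#include <stdio.h>

	/*
	 * Assuming log2 bucket i covers latencies up to 2^i milliseconds, report
	 * the worst-case latency implied by a histogram: the upper bound of the
	 * highest non-empty bucket. The bucket convention is inferred from the
	 * commit message, not taken from any ZFS tool.
	 */
	static long
	worst_case_ms(const int *hist, int nbuckets)
	{
		for (int i = nbuckets - 1; i >= 0; i--)
			if (hist[i] != 0)
				return (1L << i);
		return (0);
	}

	int
	main(void)
	{
		int before[] = { 0, 0, 2, 1, 12, 21, 19, 18, 10, 15, 17, 21 };
		int after[]  = { 0, 0, 0, 24, 101, 195, 419, 250, 47, 4, 0, 0 };

		printf("before: ~%ld ms, after: ~%ld ms\n",
		    worst_case_ms(before, 12), worst_case_ms(after, 12));
		/* prints "before: ~2048 ms, after: ~512 ms", i.e. ~2s down to ~500ms */
		return (0);
	}
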
I've also measured scrub time during the test and on idle pools. On an idle
fragmented pool, scrub got a few percent faster due to the use of QD3 instead
of the previous QD2. On an idle non-fragmented pool I measured no difference.
On a busy non-fragmented pool, scrub time increased by about 1.5-1.7x, while
the IOPS increase reached 5-9x.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored-By: iXsystems, Inc.
Closes #11166
Diffstat (limited to 'man/man5')
-rw-r--r--   man/man5/zfs-module-parameters.5 | 39
1 file changed, 37 insertions(+), 2 deletions(-)
diff --git a/man/man5/zfs-module-parameters.5 b/man/man5/zfs-module-parameters.5
index c6c40cf80..688974f17 100644
--- a/man/man5/zfs-module-parameters.5
+++ b/man/man5/zfs-module-parameters.5
@@ -2029,8 +2029,7 @@ Default value: \fB1\fR.
 .ad
 .RS 12n
 The maximum number of I/Os active to each device. Ideally, this will be >=
-the sum of each queue's max_active. It must be at least the sum of each
-queue's min_active. See the section "ZFS I/O SCHEDULER".
+the sum of each queue's max_active. See the section "ZFS I/O SCHEDULER".
 .sp
 Default value: \fB1,000\fR.
 .RE
@@ -2182,6 +2181,42 @@ Default value: \fB1\fR.
 .sp
 .ne 2
 .na
+\fBzfs_vdev_nia_delay\fR (int)
+.ad
+.RS 12n
+For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
+the number of concurrently-active I/O's is limited to *_min_active, unless
+the vdev is "idle". When there are no interactive I/Os active (sync or
+async), and zfs_vdev_nia_delay I/Os have completed since the last
+interactive I/O, then the vdev is considered to be "idle", and the number
+of concurrently-active non-interactive I/O's is increased to *_max_active.
+See the section "ZFS I/O SCHEDULER".
+.sp
+Default value: \fB5\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzfs_vdev_nia_credit\fR (int)
+.ad
+.RS 12n
+Some HDDs tend to prioritize sequential I/O so high, that concurrent
+random I/O latency reaches several seconds. On some HDDs it happens
+even if sequential I/Os are submitted one at a time, and so setting
+*_max_active to 1 does not help. To prevent non-interactive I/Os, like
+scrub, from monopolizing the device no more than zfs_vdev_nia_credit
+I/Os can be sent while there are outstanding incomplete interactive
+I/Os. This enforced wait ensures the HDD services the interactive I/O
+within a reasonable amount of time.
+See the section "ZFS I/O SCHEDULER".
+.sp
+Default value: \fB5\fR.
+.RE
+
+.sp
+.ne 2
+.na
 \fBzfs_vdev_queue_depth_pct\fR (int)
 .ad
 .RS 12n
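
As a reading aid for the two tunables added above, here is a rough model of
the policy the man page text describes for zfs_vdev_nia_delay, assuming an
illustrative per-queue state struct rather than the scheduler's actual data
structures:

	/*
	 * Rough model of the zfs_vdev_nia_delay policy described above; the
	 * struct and helper are illustrative stand-ins, not OpenZFS code.
	 */
	static int zfs_vdev_nia_delay = 5;	/* module tunable, default 5 */

	typedef struct nia_state {
		int	interactive_active;	/* sync/async I/Os in flight */
		int	ios_since_interactive;	/* completions since last interactive I/O */
	} nia_state_t;

	/*
	 * Queue depth granted to a non-interactive class (scrub, resilver,
	 * removal, initialize, rebuild): its max_active only once the vdev
	 * looks idle, otherwise its min_active.
	 */
	static int
	nia_depth_sketch(const nia_state_t *st, int min_active, int max_active)
	{
		int idle = (st->interactive_active == 0 &&
		    st->ios_since_interactive >= zfs_vdev_nia_delay);

		return (idle ? max_active : min_active);
	}

On Linux builds, these module parameters can typically be inspected or
changed at runtime under /sys/module/zfs/parameters/.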