path: root/man
author    Alexander Motin <[email protected]>    2022-05-04 14:33:42 -0400
committer Brian Behlendorf <[email protected]>    2022-07-26 10:10:37 -0700
commit    6e1e90d64cf26d66ff0b5a50fa294be162356ada (patch)
tree      cba5cd91bf7b816b42ce8d3b5a047faaacc161d0 /man
parent    dd9c110ab5d3eb2bf38e2849f66f1534401f605b (diff)
Improve mg_aliquot math
When calculating mg_aliquot, as in #12046, use the number of unique data disks in the vdev, not the total number of child vdevs. Increase the default value of the tunable from 512 KB to 1 MB to compensate. Before this change, each disk in a striped pool was getting 512 KB of sequential data, in a 2-wide mirror 1 MB, and in a 3-wide RAIDZ1 768 KB. After this change, each disk should get 1 MB in all cases.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored-By: iXsystems, Inc.
Closes #13388
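The per-disk numbers in the commit message can be checked with a small back-of-the-envelope model. This is an illustrative sketch, not actual ZFS code: `per_disk_bytes` and its parameters are hypothetical names, and the model assumes the aliquot is the tunable times a multiplier (child count before the patch, unique data disks after), scaled by the layout's physical/logical expansion factor and divided across the vdev's disks.

```python
def per_disk_bytes(tunable, multiplier, expansion, total_disks):
    """Rough model: sequential bytes each disk receives per aliquot.

    tunable     -- metaslab_aliquot value in bytes
    multiplier  -- vdev child count (old math) or unique data disks (new math)
    expansion   -- physical/logical ratio: n for an n-way mirror,
                   w/(w-1) for a w-wide RAIDZ1
    total_disks -- number of disks in the top-level vdev
    """
    return tunable * multiplier * expansion / total_disks

KB, MB = 1024, 1024 * 1024

# Old math: metaslab_aliquot = 512 KB, multiplier = number of children
assert per_disk_bytes(512 * KB, 1, 1,     1) == 512 * KB  # single-disk stripe
assert per_disk_bytes(512 * KB, 2, 2,     2) == 1 * MB    # 2-wide mirror
assert per_disk_bytes(512 * KB, 3, 3 / 2, 3) == 768 * KB  # 3-wide RAIDZ1

# New math: metaslab_aliquot = 1 MB, multiplier = unique data disks
assert per_disk_bytes(1 * MB, 1, 1,     1) == 1 * MB      # single-disk stripe
assert per_disk_bytes(1 * MB, 1, 2,     2) == 1 * MB      # 2-wide mirror
assert per_disk_bytes(1 * MB, 2, 3 / 2, 3) == 1 * MB      # 3-wide RAIDZ1
```

Under these assumptions the new math evens out the per-disk amount to 1 MB across all three layouts, matching the figures quoted in the commit message.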
Diffstat (limited to 'man')
-rw-r--r--    man/man4/zfs.4    6
1 file changed, 3 insertions, 3 deletions
diff --git a/man/man4/zfs.4 b/man/man4/zfs.4
index 5e4587285..692111684 100644
--- a/man/man4/zfs.4
+++ b/man/man4/zfs.4
@@ -213,12 +213,12 @@ For L2ARC devices less than 1GB, the amount of data
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
-.It Sy metaslab_aliquot Ns = Ns Sy 524288 Ns B Po 512kB Pc Pq ulong
+.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq ulong
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
-In normal operation, ZFS will try to write this amount of data
-to a top-level vdev before moving on to the next one.
+In normal operation, ZFS will try to write this amount of data to each disk
+before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization