author	Don Brady <[email protected]>	2023-11-08 11:19:41 -0700
committer	GitHub <[email protected]>	2023-11-08 10:19:41 -0800
commit	5caeef02fa531238b4554afc977533382e43314f (patch)
tree	71a5e80f437ae81b3225124b98291aa90a0a3a3d /scripts/zloop.sh
parent	9198de8f1079a8bbb837de3e3f8e236777b1375d (diff)
RAID-Z expansion feature
This feature allows disks to be added one at a time to a RAID-Z group,
expanding its capacity incrementally. This feature is especially useful
for small pools (typically with only one RAID-Z group), where there
isn't sufficient hardware to add capacity by adding a whole new RAID-Z
group (typically doubling the number of disks).
== Initiating expansion ==
A new device (disk) can be attached to an existing RAIDZ vdev by
running `zpool attach POOL raidzP-N NEW_DEVICE`, e.g. `zpool attach tank
raidz2-0 sda`. The new device will become part of the RAIDZ group. A
"raidz expansion" will be initiated, and the new device will contribute
additional space to the RAIDZ group once the expansion completes.
The `feature@raidz_expansion` on-disk feature flag must be `enabled` to
initiate an expansion, and it remains `active` for the life of the pool.
In other words, pools with expanded RAIDZ vdevs cannot be imported by
older releases of the ZFS software.
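The two steps above (enabling the feature flag, then attaching the new disk) can be sketched as a small helper. This is an illustrative sketch only: the pool name `tank`, vdev `raidz2-0`, and disk `sda` are the example values from the text, and the helper function `raidz_expand_cmds` is hypothetical — it merely composes the commands rather than running them.

```shell
# Hypothetical helper: print the commands that start a raidz
# expansion. The feature flag must be enabled before the attach.
raidz_expand_cmds() {
	local pool=$1 vdev=$2 disk=$3
	echo "zpool set feature@raidz_expansion=enabled $pool"
	echo "zpool attach $pool $vdev $disk"
}

# Example from the text: attach 'sda' to 'raidz2-0' in pool 'tank'.
raidz_expand_cmds tank raidz2-0 sda
```

Printing the commands rather than executing them keeps the sketch safe to run on a machine without the pool in question.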
== During expansion ==
The expansion entails reading all allocated space from existing disks in
the RAIDZ group, and rewriting it to the new disks in the RAIDZ group
(including the newly added device).
The expansion progress can be monitored with `zpool status`.
Data redundancy is maintained during (and after) the expansion. If a
disk fails while the expansion is in progress, the expansion pauses
until the health of the RAIDZ vdev is restored (e.g. by replacing the
failed disk and waiting for reconstruction to complete).
The pool remains accessible during expansion. Following a reboot or
export/import, the expansion resumes where it left off.
== After expansion ==
When the expansion completes, the additional space is available for use,
and is reflected in the `available` zfs property (as seen in `zfs list`,
`df`, etc).
Expansion does not change the number of failures that can be tolerated
without data loss (e.g. a RAIDZ2 is still a RAIDZ2 even after
expansion).
A RAIDZ vdev can be expanded multiple times.
After the expansion completes, old blocks remain with their old
data-to-parity ratio (e.g. a 5-wide RAIDZ2 has 3 data to 2 parity), but
distributed among the larger set of disks. New blocks will be written
with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been
expanded once to 6-wide has 4 data to 2 parity). However, the RAIDZ
vdev's "assumed parity ratio" does not change, so slightly less space
than is expected may be reported for newly-written blocks, according to
`zfs list`, `df`, `ls -s`, and similar tools.
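The ratios quoted above can be checked with plain shell arithmetic. A minimal sketch using the example numbers from the text (5-wide RAIDZ2 expanded once to 6-wide):

```shell
# Worked example: data-to-parity split of a RAIDZ2 group before and
# after one expansion. Old blocks keep the old ratio; new blocks
# use the new one.
parity=2
for width in 5 6; do
	data=$((width - parity))
	echo "$width-wide RAIDZ2: $data data to $parity parity"
done
```

The usable fraction of new blocks thus rises from 3/5 to 4/6, even though the vdev's assumed parity ratio (and therefore the space reported by `zfs list` and friends) still reflects the original width.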
Sponsored-by: The FreeBSD Foundation
Sponsored-by: iXsystems, Inc.
Sponsored-by: vStack
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Mark Maybee <[email protected]>
Authored-by: Matthew Ahrens <[email protected]>
Contributions-by: Fedor Uporov <[email protected]>
Contributions-by: Stuart Maybee <[email protected]>
Contributions-by: Thorsten Behrens <[email protected]>
Contributions-by: Fmstrat <[email protected]>
Contributions-by: Don Brady <[email protected]>
Signed-off-by: Don Brady <[email protected]>
Closes #15022
Diffstat (limited to 'scripts/zloop.sh')
-rwxr-xr-x	scripts/zloop.sh	63
1 file changed, 43 insertions(+), 20 deletions(-)
diff --git a/scripts/zloop.sh b/scripts/zloop.sh
index 83160c34a..7cda23743 100755
--- a/scripts/zloop.sh
+++ b/scripts/zloop.sh
@@ -252,38 +252,57 @@ while (( timeout == 0 )) || (( curtime <= (starttime + timeout) )); do
 	or_die rm -rf "$workdir"
 	or_die mkdir "$workdir"
 
-	# switch between three types of configs
-	# 1/3 basic, 1/3 raidz mix, and 1/3 draid mix
-	choice=$((RANDOM % 3))
-
 	# ashift range 9 - 15
 	align=$(((RANDOM % 2) * 3 + 9))
 
+	# choose parity value
+	parity=$(((RANDOM % 3) + 1))
+
+	draid_data=0
+	draid_spares=0
+
 	# randomly use special classes
 	class="special=random"
 
-	if [[ $choice -eq 0 ]]; then
-		# basic mirror only
-		parity=1
+	# choose between four types of configs
+	# (basic, raidz mix, raidz expansion, and draid mix)
+	case $((RANDOM % 4)) in
+
+	# basic mirror configuration
+	0) parity=1
 		mirrors=2
-		draid_data=0
-		draid_spares=0
 		raid_children=0
 		vdevs=2
 		raid_type="raidz"
-	elif [[ $choice -eq 1 ]]; then
-		# fully randomized mirror/raidz (sans dRAID)
-		parity=$(((RANDOM % 3) + 1))
-		mirrors=$(((RANDOM % 3) * 1))
-		draid_data=0
-		draid_spares=0
+		;;
+
+	# fully randomized mirror/raidz (sans dRAID)
+	1) mirrors=$(((RANDOM % 3) * 1))
 		raid_children=$((((RANDOM % 9) + parity + 1) * (RANDOM % 2)))
 		vdevs=$(((RANDOM % 3) + 3))
 		raid_type="raidz"
-	else
-		# fully randomized dRAID (sans mirror/raidz)
-		parity=$(((RANDOM % 3) + 1))
-		mirrors=0
+		;;
+
+	# randomized raidz expansion (one top-level raidz vdev)
+	2) mirrors=0
+		vdevs=1
+		# derive initial raidz disk count based on parity choice
+		#	P1: 3 - 7 disks
+		#	P2: 5 - 9 disks
+		#	P3: 7 - 11 disks
+		raid_children=$(((RANDOM % 5) + (parity * 2) + 1))
+
+		# 1/3 of the time use a dedicated '-X' raidz expansion test
+		if [[ $((RANDOM % 3)) -eq 0 ]]; then
+			zopt="$zopt -X -t 16"
+			raid_type="raidz"
+		else
+			raid_type="eraidz"
+		fi
+		;;
+
+	# fully randomized dRAID (sans mirror/raidz)
+	3) mirrors=0
 		draid_data=$(((RANDOM % 8) + 3))
 		draid_spares=$(((RANDOM % 2) + parity))
 		stripe=$((draid_data + parity))
@@ -291,7 +310,11 @@ while (( timeout == 0 )) || (( curtime <= (starttime + timeout) )); do
 		raid_children=$(((((RANDOM % 4) + 1) * stripe) + extra))
 		vdevs=$((RANDOM % 3))
 		raid_type="draid"
-	fi
+		;;
+	*)
+		# avoid shellcheck SC2249
+		;;
+	esac
 
 	zopt="$zopt -K $raid_type"
 	zopt="$zopt -m $mirrors"
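The raidz-expansion branch in the diff sizes the initial group as `(RANDOM % 5) + parity*2 + 1` disks. Since `RANDOM % 5` spans 0 through 4, the disk count for each parity level falls in a 5-value window, which is exactly what the comment in the diff claims. A quick sketch deriving those ranges without relying on `$RANDOM`:

```shell
# Derive the initial raidz disk-count ranges used by the expansion
# test case: (RANDOM % 5) contributes 0-4, so the count spans
# [parity*2 + 1, parity*2 + 5] for each parity level.
for parity in 1 2 3; do
	min=$((parity * 2 + 1))
	max=$((min + 4))
	echo "P$parity: $min - $max disks"
done
```

This reproduces the ranges in the diff's comment (P1: 3-7, P2: 5-9, P3: 7-11), and also guarantees the group always has more than `parity` disks, so every generated config is valid.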