author     Brian Behlendorf <[email protected]>    2018-07-23 15:40:15 -0700
committer  GitHub <[email protected]>              2018-07-23 15:40:15 -0700
commit     d441e85dd754ecc15659322b4d36796cbd3838de (patch)
tree       3b5adc51a6bda08c513edd382769cade243bb0ca /tests
parent     2e5dc449c1a65e0b0bf730fd69c9b5804bd57ee8 (diff)
Add support for autoexpand property
While the autoexpand property may seem like a small feature, it
depends on a significant amount of system infrastructure. Enough
of that infrastructure is now in place that, with a few modifications
for Linux, it can be supported.
Auto-expand works as follows: when a block device is modified
(resized, closed after being opened read/write, etc.) a change
uevent is generated for udev. The ZED, which is monitoring udev
events, passes the change event along to zfs_deliver_dle() if the
disk or partition contains a zfs_member as identified by blkid.
From here the device is matched against all imported pool vdevs
using the vdev_guid which was read from the label by blkid. If
a match is found, the ZED reopens the pool vdev. This re-opening
is important because it allows the vdev to be briefly closed so
the disk partition table can be re-read. Otherwise, it wouldn't
be possible to report the maximum possible expansion size.
Finally, if the autoexpand property is set to on, a vdev expansion
will be attempted. After performing some sanity checks on the disk
to verify that it is safe to expand, the primary partition (-part1)
will be expanded and the partition table updated. The partition
is then re-opened (again) to detect the updated size, which allows
the new capacity to be used.
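The flow above can be exercised by hand with a loopback vdev, as the
updated tests do. The following is a sketch only: the pool name "demo",
the backing file path, and the sizes are illustrative, and the commands
require root, an imported ZFS module, and a running ZED.

```shell
# Create a 1 GiB backing file and attach it to a free loop device.
truncate -s 1G /var/tmp/vdev_lo
DEV=$(losetup -f)
losetup $DEV /var/tmp/vdev_lo

# Create a pool with autoexpand enabled on the un-partitioned device.
zpool create -o autoexpand=on demo $DEV

# Grow the backing file and tell the loop driver to pick up the new
# capacity; this emits the udev change event the ZED is listening for.
truncate -s 2G /var/tmp/vdev_lo
losetup -c $DEV

# Once the ZED has reopened and expanded the vdev, the extra space
# shows up in the pool size.
zpool get size,expandsize demo
```

With autoexpand=off the same sequence would leave the pool size unchanged
and instead surface the headroom in the expandsize property.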
In order to make all of the above possible the following changes
were required:
* Updated the zpool_expand_001_pos and zpool_expand_003_neg tests.
These tests now create a pool which is layered on loopback,
scsi_debug, and file vdevs. This allows for testing of a non-
partitioned block device (loopback), a partitioned block device
(scsi_debug), and a file which does not receive udev change
events. This provides better test coverage, and by removing
the layering on ZFS volumes the issues surrounding layering
one pool on another are avoided.
* Updated zpool_find_vdev_by_physpath() to accept a vdev guid.
This allows matching by guid rather than path, which is a
more reliable way for the ZED to reference a vdev.
* Fixed zfs_zevent_wait() signal handling, which could result
in the ZED spinning when a signal was not handled.
* Removed the vdev_disk_rrpart() functionality, which can be abandoned
in favor of the kernel-provided blkdev_reread_part() function.
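From user space the same partition-table re-read can be requested with
standard util-linux tools; the in-kernel blkdev_reread_part() call is the
kernel-side counterpart of these commands. A sketch, with /dev/sdX as a
placeholder device (requires root and a real block device):

```shell
# Ask the kernel to drop and re-read /dev/sdX's partition table.
# This fails with EBUSY if a partition is still held open, which is
# why the vdev must be briefly closed before the table can be re-read.
blockdev --rereadpt /dev/sdX

# partprobe achieves the same via the BLKPG partition ioctls.
partprobe /dev/sdX
```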
* Added a rwlock which is held as a writer while a disk is being
reopened. This is important to prevent errors from occurring
for any configuration-related IOs which bypass the SCL_ZIO lock.
The zpool_reopen_007_pos.ksh test case was added to verify IO
errors are never observed when reopening. This is not expected
to impact IO performance.
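The updated expansion tests verify their work by counting "vdev online"
internal history entries, one per expanded vdev, each tagged with the size
delta. The filter chain can be sketched as below; fake_history is a
stand-in for real `zpool history -il` output (which requires a live pool),
and the pool name "demo" and sizes are illustrative.

```shell
expansion_size=$((1024*1024*1024))

# Stand-in for 'zpool history -il demo' on a 3-vdev striped pool where
# every vdev logged an autoexpand entry with its size delta.
fake_history() {
    printf "%s\n" \
        "[internal vdev online] pool 'demo' size: 3G (+${expansion_size})" \
        "[internal vdev online] pool 'demo' size: 4G (+${expansion_size})" \
        "[internal vdev online] pool 'demo' size: 5G (+${expansion_size})"
}

# Same filter chain as the tests: keep this pool's size-change entries,
# keep the vdev onlines, and count those matching the expected delta.
count=$(fake_history | grep "pool 'demo' size:" | grep "vdev online" | \
    grep -c "(+${expansion_size}")
echo "$count"
```

For a striped pool the count must equal the number of vdevs (3 here); for
mirror and raidz layouts the tests instead look for a single entry whose
delta is the per-vdev or aggregate expansion size.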
Additional fixes which aren't critical but were discovered and
resolved in the course of developing this functionality:
* Added PHYS_PATH="/dev/zvol/dataset" to the vdev configuration for
ZFS volumes. This serves as a unique physical path; while the
volumes are no longer used in the test cases for other reasons,
this improvement was still included.
Reviewed by: Richard Elling <[email protected]>
Signed-off-by: Sara Hartse <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #120
Closes #2437
Closes #5771
Closes #7366
Closes #7582
Closes #7629
Diffstat (limited to 'tests')
14 files changed, 342 insertions(+), 130 deletions(-)
diff --git a/tests/runfiles/linux.run b/tests/runfiles/linux.run
index 056b1dddb..89563189f 100644
--- a/tests/runfiles/linux.run
+++ b/tests/runfiles/linux.run
@@ -333,7 +333,7 @@ tags = ['functional', 'cli_root', 'zpool_events']
 
 [tests/functional/cli_root/zpool_expand]
 tests = ['zpool_expand_001_pos', 'zpool_expand_002_pos',
-    'zpool_expand_003_neg', 'zpool_expand_004_pos']
+    'zpool_expand_003_neg', 'zpool_expand_004_pos', 'zpool_expand_005_pos']
 tags = ['functional', 'cli_root', 'zpool_expand']
 
 [tests/functional/cli_root/zpool_export]
@@ -398,7 +398,7 @@ tags = ['functional', 'cli_root', 'zpool_remove']
 
 [tests/functional/cli_root/zpool_reopen]
 tests = ['zpool_reopen_001_pos', 'zpool_reopen_002_pos',
     'zpool_reopen_003_pos', 'zpool_reopen_004_pos', 'zpool_reopen_005_pos',
-    'zpool_reopen_006_neg']
+    'zpool_reopen_006_neg', 'zpool_reopen_007_pos']
 tags = ['functional', 'cli_root', 'zpool_reopen']
 
 [tests/functional/cli_root/zpool_replace]
diff --git a/tests/test-runner/bin/zts-report.py b/tests/test-runner/bin/zts-report.py
index 20afad5d7..804d7d607 100755
--- a/tests/test-runner/bin/zts-report.py
+++ b/tests/test-runner/bin/zts-report.py
@@ -82,6 +82,13 @@ python_deps_reason = 'Python modules missing: python-cffi'
 tmpfile_reason = 'Kernel O_TMPFILE support required'
 
 #
+# Some tests may depend on udev change events being generated when block
+# devices change capacity.  This functionality wasn't available until the
+# 2.6.38 kernel.
+#
+udev_reason = 'Kernel block device udev change events required'
+
+#
 # Some tests require that the NFS client and server utilities be installed.
 #
 share_reason = 'NFS client and server utilities required'
@@ -159,8 +166,6 @@ known = {
     'cli_root/zfs_unshare/zfs_unshare_002_pos': ['SKIP', na_reason],
     'cli_root/zfs_unshare/zfs_unshare_006_pos': ['SKIP', na_reason],
     'cli_root/zpool_create/zpool_create_016_pos': ['SKIP', na_reason],
-    'cli_root/zpool_expand/zpool_expand_001_pos': ['SKIP', '5771'],
-    'cli_root/zpool_expand/zpool_expand_003_neg': ['SKIP', '5771'],
     'cli_user/misc/zfs_share_001_neg': ['SKIP', na_reason],
     'cli_user/misc/zfs_unshare_001_neg': ['SKIP', na_reason],
     'inuse/inuse_001_pos': ['SKIP', na_reason],
@@ -219,6 +224,7 @@ maybe = {
     'cli_root/zpool_create/setup': ['SKIP', disk_reason],
     'cli_root/zpool_create/zpool_create_008_pos': ['FAIL', known_reason],
     'cli_root/zpool_destroy/zpool_destroy_001_pos': ['SKIP', '6145'],
+    'cli_root/zpool_expand/setup': ['SKIP', udev_reason],
     'cli_root/zpool_export/setup': ['SKIP', disk_reason],
     'cli_root/zpool_import/setup': ['SKIP', disk_reason],
     'cli_root/zpool_import/import_rewind_device_replaced':
diff --git a/tests/zfs-tests/include/blkdev.shlib b/tests/zfs-tests/include/blkdev.shlib
index 5163ea2ae..9cac7184f 100644
--- a/tests/zfs-tests/include/blkdev.shlib
+++ b/tests/zfs-tests/include/blkdev.shlib
@@ -312,6 +312,7 @@ function on_off_disk # disk state{online,offline} host
             log_fail "Onlining $disk failed"
         fi
     elif is_real_device $disk; then
+        block_device_wait
         typeset -i retries=0
         while ! lsscsi | egrep -q $disk; do
             if (( $retries > 2 )); then
@@ -410,9 +411,7 @@ function load_scsi_debug # dev_size_mb add_host num_tgts max_luns blksz
 #
 function unload_scsi_debug
 {
-    if lsmod | grep scsi_debug >/dev/null; then
-        log_must modprobe -r scsi_debug
-    fi
+    log_must_retry "in use" 5 modprobe -r scsi_debug
 }
 
 #
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/Makefile.am b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/Makefile.am
index 2fae015b5..beaa411e3 100644
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/Makefile.am
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/Makefile.am
@@ -5,7 +5,8 @@ dist_pkgdata_SCRIPTS = \
     zpool_expand_001_pos.ksh \
     zpool_expand_002_pos.ksh \
     zpool_expand_003_neg.ksh \
-    zpool_expand_004_pos.ksh
+    zpool_expand_004_pos.ksh \
+    zpool_expand_005_pos.ksh
 
 dist_pkgdata_DATA = \
     zpool_expand.cfg
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/setup.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/setup.ksh
index 7d6a43ef5..9832a441c 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/setup.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/setup.ksh
@@ -29,6 +29,15 @@
 
 verify_runnable "global"
 
+#
+# The pool expansion tests depend on udev change events being generated
+# when block devices change capacity.  Since this functionality wasn't
+# available until the 2.6.38 kernel skip this test group.
+#
+if [[ $(linux_version) -lt $(linux_version "2.6.38") ]]; then
+    log_unsupported "Requires block device udev change events"
+fi
+
 zed_setup
 zed_start
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand.cfg b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand.cfg
index e15471e22..bec5fb163 100644
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand.cfg
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand.cfg
@@ -29,7 +29,9 @@
 #
 
-export org_size=$MINVDEVSIZE
-export exp_size=$((2*$org_size))
+export org_size=$((1024*1024*1024))
+export exp_size=$((2*1024*1024*1024))
+export org_size_mb=$((org_size/(1024*1024)))
 
-export VFS=$TESTPOOL/$TESTFS
+export FILE_LO=$TEST_BASE_DIR/vdev_lo
+export FILE_RAW=$TEST_BASE_DIR/vdev_raw
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos.ksh
index 06ab1b84f..289e3e33f 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos.ksh
@@ -27,6 +27,7 @@
 #
 # Copyright (c) 2012, 2016 by Delphix. All rights reserved.
+# Copyright (c) 2018 by Lawrence Livermore National Security, LLC.
 #
 
 . $STF_SUITE/include/libtest.shlib
@@ -35,68 +36,85 @@
 #
 # DESCRIPTION:
 # Once zpool set autoexpand=on poolname, zpool can autoexpand by
-# Dynamic LUN Expansion
+# Dynamic VDEV Expansion
 #
 #
 # STRATEGY:
-# 1) Create a pool
-# 2) Create volume on top of the pool
-# 3) Create pool by using the zvols and set autoexpand=on
-# 4) Expand the vol size by 'zfs set volsize'
-# 5) Check that the pool size was expanded
+# 1) Create three vdevs (loopback, scsi_debug, and file)
+# 2) Create pool by using the different devices and set autoexpand=on
+# 3) Expand each device as appropriate
+# 4) Check that the pool size was expanded
+#
+# NOTE: Three different device types are used in this test to verify
+# expansion of non-partitioned block devices (loopback), partitioned
+# block devices (scsi_debug), and non-disk file vdevs.  ZFS volumes
+# are not used in order to avoid a possible lock inversion when
+# layering pools on zvols.
 #
 
 verify_runnable "global"
 
-# See issue: https://github.com/zfsonlinux/zfs/issues/5771
-if is_linux; then
-    log_unsupported "Requires autoexpand property support"
-fi
-
 function cleanup
 {
-    if poolexists $TESTPOOL1; then
-        log_must zpool destroy $TESTPOOL1
+    poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
+
+    if losetup -a | grep -q $DEV1; then
+        losetup -d $DEV1
     fi
 
-    for i in 1 2 3; do
-        if datasetexists $VFS/vol$i; then
-            log_must zfs destroy $VFS/vol$i
-        fi
-    done
+    rm -f $FILE_LO $FILE_RAW
+
+    block_device_wait
+    unload_scsi_debug
 }
 
 log_onexit cleanup
 
-log_assert "zpool can be autoexpanded after set autoexpand=on on LUN expansion"
-
-for i in 1 2 3; do
-    log_must zfs create -V $org_size $VFS/vol$i
-done
-block_device_wait
+log_assert "zpool can be autoexpanded after set autoexpand=on on vdev expansion"
 
 for type in " " mirror raidz raidz2; do
+    log_note "Setting up loopback, scsi_debug, and file vdevs"
+    log_must truncate -s $org_size $FILE_LO
+    DEV1=$(losetup -f)
+    log_must losetup $DEV1 $FILE_LO
+
+    load_scsi_debug $org_size_mb 1 1 1 '512b'
+    block_device_wait
+    DEV2=$(get_debug_device)
+
+    log_must truncate -s $org_size $FILE_RAW
+    DEV3=$FILE_RAW
 
-    log_must zpool create -o autoexpand=on $TESTPOOL1 $type \
-        ${ZVOL_DEVDIR}/$VFS/vol1 ${ZVOL_DEVDIR}/$VFS/vol2 \
-        ${ZVOL_DEVDIR}/$VFS/vol3
+    # The -f is required since we're mixing disk and file vdevs.
+    log_must zpool create -f -o autoexpand=on $TESTPOOL1 $type \
+        $DEV1 $DEV2 $DEV3
 
     typeset autoexp=$(get_pool_prop autoexpand $TESTPOOL1)
     if [[ $autoexp != "on" ]]; then
-        log_fail "zpool $TESTPOOL1 autoexpand should on but is $autoexp"
+        log_fail "zpool $TESTPOOL1 autoexpand should be on but is " \
+            "$autoexp"
     fi
 
     typeset prev_size=$(get_pool_prop size $TESTPOOL1)
     typeset zfs_prev_size=$(zfs get -p avail $TESTPOOL1 | tail -1 | \
         awk '{print $3}')
 
-    for i in 1 2 3; do
-        log_must zfs set volsize=$exp_size $VFS/vol$i
-    done
+    # Expand each device as appropriate being careful to add an artificial
+    # delay to ensure we get a single history entry for each.  This makes
+    # is easier to verify each expansion for the striped pool case, since
+    # they will not be merged in to a single larger expansion.
+    log_note "Expanding loopback, scsi_debug, and file vdevs"
+    log_must truncate -s $exp_size $FILE_LO
+    log_must losetup -c $DEV1
+    sleep 3
 
-    sync
-    sleep 10
-    sync
+    echo "2" > /sys/bus/pseudo/drivers/scsi_debug/virtual_gb
+    echo "1" > /sys/class/block/$DEV2/device/rescan
+    block_device_wait
+    sleep 3
+
+    log_must truncate -s $exp_size $FILE_RAW
+    log_must zpool online -e $TESTPOOL1 $FILE_RAW
 
     typeset expand_size=$(get_pool_prop size $TESTPOOL1)
     typeset zfs_expand_size=$(zfs get -p avail $TESTPOOL1 | tail -1 | \
@@ -105,8 +123,8 @@ for type in " " mirror raidz raidz2; do
     log_note "$TESTPOOL1 $type has previous size: $prev_size and " \
         "expanded size: $expand_size"
     # compare available pool size from zfs
-    if [[ $zfs_expand_size > $zfs_prev_size ]]; then
-    # check for zpool history for the pool size expansion
+    if [[ $zfs_expand_size -gt $zfs_prev_size ]]; then
+        # check for zpool history for the pool size expansion
         if [[ $type == " " ]]; then
             typeset expansion_size=$(($exp_size-$org_size))
             typeset size_addition=$(zpool history -il $TESTPOOL1 |\
@@ -114,9 +132,9 @@ for type in " " mirror raidz raidz2; do
                 grep "vdev online" | \
                 grep "(+${expansion_size}" | wc -l)
 
-            if [[ $size_addition -ne $i ]]; then
-                log_fail "pool $TESTPOOL1 is not autoexpand " \
-                    "after LUN expansion"
+            if [[ $size_addition -ne 3 ]]; then
+                log_fail "pool $TESTPOOL1 has not expanded, " \
+                    "$size_addition/3 vdevs expanded"
             fi
         elif [[ $type == "mirror" ]]; then
             typeset expansion_size=$(($exp_size-$org_size))
@@ -126,8 +144,7 @@ for type in " " mirror raidz raidz2; do
                 grep "(+${expansion_size})" >/dev/null 2>&1
 
             if [[ $? -ne 0 ]] ; then
-                log_fail "pool $TESTPOOL1 is not autoexpand " \
-                    "after LUN expansion"
+                log_fail "pool $TESTPOOL1 has not expanded"
             fi
         else
             typeset expansion_size=$((3*($exp_size-$org_size)))
@@ -137,19 +154,16 @@ for type in " " mirror raidz raidz2; do
                 grep "(+${expansion_size})" >/dev/null 2>&1
 
             if [[ $? -ne 0 ]]; then
-                log_fail "pool $TESTPOOL is not autoexpand " \
-                    "after LUN expansion"
+                log_fail "pool $TESTPOOL has not expanded"
             fi
         fi
     else
-        log_fail "pool $TESTPOOL1 is not autoexpanded after LUN " \
-            "expansion"
+        log_fail "pool $TESTPOOL1 is not autoexpanded after vdev " \
+            "expansion. Previous size: $zfs_prev_size and expanded " \
+            "size: $zfs_expand_size"
    fi
 
-    log_must zpool destroy $TESTPOOL1
-    for i in 1 2 3; do
-        log_must zfs set volsize=$org_size $VFS/vol$i
-    done
-
+    cleanup
 done
-log_pass "zpool can be autoexpanded after set autoexpand=on on LUN expansion"
+
+log_pass "zpool can autoexpand if autoexpand=on after vdev expansion"
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_002_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_002_pos.ksh
index 66b6969db..a49d4fc17 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_002_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_002_pos.ksh
@@ -36,7 +36,7 @@
 #
 # DESCRIPTION:
 # After zpool online -e poolname zvol vdevs, zpool can autoexpand by
-# Dynamic LUN Expansion
+# Dynamic VDEV Expansion
 #
 #
 # STRATEGY:
@@ -52,9 +52,7 @@ verify_runnable "global"
 
 function cleanup
 {
-    if poolexists $TESTPOOL1; then
-        log_must zpool destroy $TESTPOOL1
-    fi
+    poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
 
     for i in 1 2 3; do
         [ -e ${TEMPFILE}.$i ] && log_must rm ${TEMPFILE}.$i
@@ -63,7 +61,7 @@ function cleanup
 
 log_onexit cleanup
 
-log_assert "zpool can expand after zpool online -e zvol vdevs on LUN expansion"
+log_assert "zpool can expand after zpool online -e zvol vdevs on vdev expansion"
 
 for type in " " mirror raidz raidz2; do
     # Initialize the file devices and the pool
@@ -77,7 +75,7 @@ for type in " " mirror raidz raidz2; do
 
     typeset autoexp=$(get_pool_prop autoexpand $TESTPOOL1)
     if [[ $autoexp != "off" ]]; then
-        log_fail "zpool $TESTPOOL1 autoexpand should off but is " \
+        log_fail "zpool $TESTPOOL1 autoexpand should be off but is " \
             "$autoexp"
     fi
 
     typeset prev_size=$(get_pool_prop size $TESTPOOL1)
@@ -109,15 +107,15 @@ for type in " " mirror raidz raidz2; do
         "expected $expected_zpool_expandsize"
     fi
 
-    # Online the devices to add the new space to the pool
+    # Online the devices to add the new space to the pool.  Add an
+    # artificial delay between online commands order to prevent them
+    # from being merged in to a single history entry.  This makes
+    # is easier to verify each expansion for the striped pool case.
     for i in 1 2 3; do
         log_must zpool online -e $TESTPOOL1 ${TEMPFILE}.$i
+        sleep 3
     done
 
-    sync
-    sleep 10
-    sync
-
     typeset expand_size=$(get_pool_prop size $TESTPOOL1)
     typeset zfs_expand_size=$(get_prop avail $TESTPOOL1)
     log_note "$TESTPOOL1 $type has previous size: $prev_size and " \
@@ -134,8 +132,9 @@ for type in " " mirror raidz raidz2; do
                 grep "(+${expansion_size}" | wc -l)
 
             if [[ $size_addition -ne $i ]]; then
-                log_fail "pool $TESTPOOL1 did not expand " \
-                    "after LUN expansion and zpool online -e"
+                log_fail "pool $TESTPOOL1 has not expanded " \
+                    "after zpool online -e, " \
+                    "$size_addition/3 vdevs expanded"
             fi
         elif [[ $type == "mirror" ]]; then
             typeset expansion_size=$(($exp_size-$org_size))
@@ -145,8 +144,8 @@ for type in " " mirror raidz raidz2; do
                 grep "(+${expansion_size})" >/dev/null 2>&1
 
             if [[ $? -ne 0 ]]; then
-                log_fail "pool $TESTPOOL1 did not expand " \
-                    "after LUN expansion and zpool online -e"
+                log_fail "pool $TESTPOOL1 has not expanded " \
+                    "after zpool online -e"
             fi
         else
             typeset expansion_size=$((3*($exp_size-$org_size)))
@@ -156,14 +155,14 @@ for type in " " mirror raidz raidz2; do
                 grep "(+${expansion_size})" >/dev/null 2>&1
 
             if [[ $? -ne 0 ]] ; then
-                log_fail "pool $TESTPOOL1 did not expand " \
-                    "after LUN expansion and zpool online -e"
+                log_fail "pool $TESTPOOL1 has not expanded " \
+                    "after zpool online -e"
             fi
         fi
     else
-        log_fail "pool $TESTPOOL1 did not expand after LUN expansion " \
+        log_fail "pool $TESTPOOL1 did not expand after vdev expansion " \
             "and zpool online -e"
     fi
     log_must zpool destroy $TESTPOOL1
 done
 
-log_pass "zpool can expand after zpool online -e zvol vdevs on LUN expansion"
+log_pass "zpool can expand after zpool online -e"
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_003_neg.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_003_neg.ksh
index 585dd050f..323d0b907 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_003_neg.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_003_neg.ksh
@@ -27,95 +27,112 @@
 #
 # Copyright (c) 2012, 2016 by Delphix. All rights reserved.
+# Copyright (c) 2018 by Lawrence Livermore National Security, LLC.
 #
+
 . $STF_SUITE/include/libtest.shlib
 . $STF_SUITE/tests/functional/cli_root/zpool_expand/zpool_expand.cfg
 
 #
 # Description:
 # Once set zpool autoexpand=off, zpool can *NOT* autoexpand by
-# Dynamic LUN Expansion
+# Dynamic VDEV Expansion
 #
 #
 # STRATEGY:
-# 1) Create a pool
-# 2) Create volumes on top of the pool
-# 3) Create pool by using the zvols and set autoexpand=off
-# 4) Expand the vol size by zfs set volsize
-# 5) Check that the pool size is not changed
+# 1) Create three vdevs (loopback, scsi_debug, and file)
+# 2) Create pool by using the different devices and set autoexpand=off
+# 3) Expand each device as appropriate
+# 4) Check that the pool size is not expanded
+#
+# NOTE: Three different device types are used in this test to verify
+# expansion of non-partitioned block devices (loopback), partitioned
+# block devices (scsi_debug), and non-disk file vdevs.  ZFS volumes
+# are not used in order to avoid a possible lock inversion when
+# layering pools on zvols.
 #
 
 verify_runnable "global"
 
-# See issue: https://github.com/zfsonlinux/zfs/issues/5771
-if is_linux; then
-    log_unsupported "Requires autoexpand property support"
-fi
-
 function cleanup
 {
-    if poolexists $TESTPOOL1; then
-        log_must zpool destroy $TESTPOOL1
-    fi
-
-    for i in 1 2 3; do
-        if datasetexists $VFS/vol$i; then
-            log_must zfs destroy $VFS/vol$i
-        fi
-    done
+    poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
+
+    if losetup -a | grep -q $DEV1; then
+        losetup -d $DEV1
+    fi
+
+    rm -f $FILE_LO $FILE_RAW
+
+    block_device_wait
+    unload_scsi_debug
 }
 
 log_onexit cleanup
 
-log_assert "zpool can not expand if set autoexpand=off after LUN expansion"
-
-for i in 1 2 3; do
-    log_must zfs create -V $org_size $VFS/vol$i
-done
-block_device_wait
+log_assert "zpool can not expand if set autoexpand=off after vdev expansion"
 
 for type in " " mirror raidz raidz2; do
-    log_must zpool create $TESTPOOL1 $type ${ZVOL_DEVDIR}/$VFS/vol1 \
-        ${ZVOL_DEVDIR}/$VFS/vol2 ${ZVOL_DEVDIR}/$VFS/vol3
+    log_note "Setting up loopback, scsi_debug, and file vdevs"
+    log_must truncate -s $org_size $FILE_LO
+    DEV1=$(losetup -f)
+    log_must losetup $DEV1 $FILE_LO
+
+    load_scsi_debug $org_size_mb 1 1 1 '512b'
+    block_device_wait
+    DEV2=$(get_debug_device)
+
+    log_must truncate -s $org_size $FILE_RAW
+    DEV3=$FILE_RAW
+
+    # The -f is required since we're mixing disk and file vdevs.
+    log_must zpool create -f $TESTPOOL1 $type $DEV1 $DEV2 $DEV3
 
     typeset autoexp=$(get_pool_prop autoexpand $TESTPOOL1)
     if [[ $autoexp != "off" ]]; then
-        log_fail "zpool $TESTPOOL1 autoexpand should off but is " \
+        log_fail "zpool $TESTPOOL1 autoexpand should be off but is " \
             "$autoexp"
     fi
 
     typeset prev_size=$(get_pool_prop size $TESTPOOL1)
 
-    for i in 1 2 3; do
-        log_must zfs set volsize=$exp_size $VFS/vol$i
-    done
-
-    sync
-    sleep 10
-    sync
+    # Expand each device as appropriate being careful to add an artificial
+    # delay to ensure we get a single history entry for each.  This makes
+    # is easier to verify each expansion for the striped pool case, since
+    # they will not be merged in to a single larger expansion.
+    log_note "Expanding loopback, scsi_debug, and file vdevs"
+    log_must truncate -s $exp_size $FILE_LO
+    log_must losetup -c $DEV1
+    sleep 3
+
+    echo "2" > /sys/bus/pseudo/drivers/scsi_debug/virtual_gb
+    echo "1" > /sys/class/block/$DEV2/device/rescan
+    block_device_wait
+    sleep 3
+
+    log_must truncate -s $exp_size $FILE_RAW
+
+    # This is far longer than we should need to wait, but let's be sure.
+    sleep 5
 
     # check for zpool history for the pool size expansion
     zpool history -il $TESTPOOL1 | grep "pool '$TESTPOOL1' size:" | \
         grep "vdev online" >/dev/null 2>&1
 
     if [[ $? -eq 0 ]]; then
-        log_fail "pool $TESTPOOL1 is not autoexpand after LUN " \
+        log_fail "pool $TESTPOOL1 is not autoexpand after vdev " \
            "expansion"
     fi
 
     typeset expand_size=$(get_pool_prop size $TESTPOOL1)
 
     if [[ "$prev_size" != "$expand_size" ]]; then
-        log_fail "pool $TESTPOOL1 size changed after LUN expansion"
+        log_fail "pool $TESTPOOL1 size changed after vdev expansion"
     fi
 
-    log_must zpool destroy $TESTPOOL1
-
-    for i in 1 2 3; do
-        log_must zfs set volsize=$org_size $VFS/vol$i
-    done
-
+    cleanup
 done
 
-log_pass "zpool can not expand if set autoexpand=off after LUN expansion"
+log_pass "zpool can not autoexpand if autoexpand=off after vdev expansion"
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_004_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_004_pos.ksh
index 69481ba1a..8a4db824b 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_004_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_004_pos.ksh
@@ -50,9 +50,7 @@ verify_runnable "global"
 
 function cleanup
 {
-    if poolexists $TESTPOOL1; then
-        log_must zpool destroy $TESTPOOL1
-    fi
+    poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
 
     for i in 1 2 3; do
         [ -e ${TEMPFILE}.$i ] && log_must rm ${TEMPFILE}.$i
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_005_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_005_pos.ksh
new file mode 100755
index 000000000..54ec73b67
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_005_pos.ksh
@@ -0,0 +1,99 @@
+#! /bin/ksh -p
+#
+# CDDL HEADER START
+#
+# The contents of this file are subject to the terms of the
+# Common Development and Distribution License (the "License").
+# You may not use this file except in compliance with the License.
+#
+# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [yyyy] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright 2009 Sun Microsystems, Inc. All rights reserved.
+# Use is subject to license terms.
+#
+
+#
+# Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/include/blkdev.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_expand/zpool_expand.cfg
+
+#
+# DESCRIPTION:
+#
+# STRATEGY:
+# 1) Create a scsi_debug device and a pool based on it
+# 2) Expand the device and rescan the scsi bus
+# 3) Reopen the pool and check that it detects new available space
+# 4) Online the device and check that the pool has been expanded
+#
+
+verify_runnable "global"
+
+function cleanup
+{
+    poolexists $TESTPOOL1 && destroy_pool $TESTPOOL1
+    unload_scsi_debug
+}
+
+log_onexit cleanup
+
+log_assert "zpool based on scsi device can be expanded with zpool online -e"
+
+# run scsi_debug to create a device
+MINVDEVSIZE_MB=$((MINVDEVSIZE / 1048576))
+load_scsi_debug $MINVDEVSIZE_MB 1 1 1 '512b'
+block_device_wait
+SDISK=$(get_debug_device)
+log_must zpool create $TESTPOOL1 $SDISK
+
+typeset autoexp=$(get_pool_prop autoexpand $TESTPOOL1)
+if [[ $autoexp != "off" ]]; then
+    log_fail "zpool $TESTPOOL1 autoexpand should be off but is $autoexp"
+fi
+
+typeset prev_size=$(get_pool_prop size $TESTPOOL1)
+log_note "original pool size: $prev_size"
+
+# resize the scsi_debug device
+echo "5" > /sys/bus/pseudo/drivers/scsi_debug/virtual_gb
+# rescan the device to detect the new size
+echo "1" > /sys/class/block/$SDISK/device/rescan
+block_device_wait
+
+# reopen the pool so ZFS can see the new space
+log_must zpool reopen $TESTPOOL1
+
+typeset expandsize=$(get_pool_prop expandsize $TESTPOOL1)
+log_note "pool expandsize: $expandsize"
+if [[ "$zpool_expandsize" = "-" ]]; then
+    log_fail "pool $TESTPOOL1 did not detect any " \
+        "expandsize after reopen"
+fi
+
+# online the device so the zpool will use the new space
+log_must zpool online -e $TESTPOOL1 $SDISK
+
+typeset new_size=$(get_pool_prop size $TESTPOOL1)
+log_note "new pool size: $new_size"
+if [[ $new_size -le $prev_size ]]; then
+    log_fail "pool $TESTPOOL1 did not expand " \
+        "after vdev expansion and zpool online -e"
+fi
+
+log_pass "zpool based on scsi_debug can be expanded with reopen and online -e"
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/Makefile.am b/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/Makefile.am
index f4686c04e..01ad68c81 100644
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/Makefile.am
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/Makefile.am
@@ -7,7 +7,8 @@ dist_pkgdata_SCRIPTS = \
     zpool_reopen_003_pos.ksh \
     zpool_reopen_004_pos.ksh \
     zpool_reopen_005_pos.ksh \
-    zpool_reopen_006_neg.ksh
+    zpool_reopen_006_neg.ksh \
+    zpool_reopen_007_pos.ksh
 
 dist_pkgdata_DATA = \
     zpool_reopen.cfg \
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/cleanup.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/cleanup.ksh
index 99c51351c..a9fcef790 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/cleanup.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/cleanup.ksh
@@ -25,7 +25,7 @@ cleanup_devices $DISKS
 # Unplug the disk and remove scsi_debug module
 if is_linux; then
     for SDDEVICE in $(get_debug_device); do
-        unplug $SDDEVICE
+        remove_disk $SDDEVICE
     done
     unload_scsi_debug
 fi
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen_007_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen_007_pos.ksh
new file mode 100755
index 000000000..4ba56af85
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen_007_pos.ksh
@@ -0,0 +1,67 @@
+#!/bin/ksh -p
+
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Lawrence Livermore National Security, LLC.
+#
+
+. $STF_SUITE/tests/functional/cli_root/zpool_reopen/zpool_reopen.shlib
+
+#
+# DESCRIPTION:
+# Test zpool reopen while performing IO to the pool.
+# Verify that no IO errors of any kind of reported.
+#
+# STRATEGY:
+# 1. Create a non-redundant pool.
+# 2. Repeat:
+#   a. Write files to the pool.
+#   b. Execute 'zpool reopen'.
+# 3. Verify that no errors are reported by 'zpool status'.
+
+verify_runnable "global"
+
+function cleanup
+{
+    poolexists $TESTPOOL && destroy_pool $TESTPOOL
+}
+
+log_assert "Testing zpool reopen with concurrent user IO"
+log_onexit cleanup
+
+set_removed_disk
+scsi_host=$(get_scsi_host $REMOVED_DISK)
+
+# 1. Create a non-redundant pool.
+log_must zpool create $TESTPOOL $DISK1 $DISK2 $DISK3
+
+for i in $(seq 10); do
+    # 3a. Write files in the background to the pool.
+    mkfile 64m /$TESTPOOL/data.$i &
+
+    # 3b. Execute 'zpool reopen'.
+    log_must zpool reopen $TESTPOOL
+
+    for disk in $DISK1 $DISK2 $DISK3; do
+        zpool status -P -v $TESTPOOL | grep $disk | \
+            read -r name state rd wr cksum
+        log_must [ $state = "ONLINE" ]
+        log_must [ $rd -eq 0 ]
+        log_must [ $wr -eq 0 ]
+        log_must [ $cksum -eq 0 ]
+    done
+done
+
+wait
+
+log_pass "Zpool reopen with concurrent user IO successful"