author    John Gallagher <[email protected]>  2019-09-13 18:09:06 -0700
committer Brian Behlendorf <[email protected]>  2019-09-13 18:09:06 -0700
commit    e60e158eff920825311c1e18b3631876eaaacb54 (patch)
tree      03b5f6ff4855ae0fdc233d377d3c1939d1223912 /tests
parent    7238cbd4d3ee7eadb3131c890d0692a49ea844af (diff)
Add subcommand to wait for background zfs activity to complete
Currently, the best way to wait for the completion of a long-running operation in a pool, like a scrub or device removal, is to poll 'zpool status' and parse its output, which is neither efficient nor convenient.

This change adds a 'wait' subcommand to the zpool command. When invoked, 'zpool wait' will block until a specified type of background activity completes. Currently, this subcommand can wait for any of the following:
 - Scrubs or resilvers to complete
 - Devices to be initialized
 - Devices to be replaced
 - Devices to be removed
 - Checkpoints to be discarded
 - Background freeing to complete

For example, a scrub that is in progress could be waited for by running

    zpool wait -t scrub <pool>

This also adds a '-w' flag to the attach, checkpoint, initialize, replace, remove, and scrub subcommands. When used, this flag makes the operations kicked off by these subcommands synchronous instead of asynchronous.

This functionality is implemented using a new ioctl. The type of activity to wait for is provided as input to the ioctl, and the ioctl blocks until all activity of that type has completed. An ioctl was used over other methods of kernel-userspace communication primarily for the sake of portability.

Porting Notes:
This is ported from Delphix OS change DLPX-44432. The following changes were made while porting:
 - Added ZoL-style ioctl input declaration.
 - Reorganized error handling in zpool_initialize in libzfs to integrate better with changes made for TRIM support.
 - Fixed check for whether a checkpoint discard is in progress. Previously it also waited if the pool had a checkpoint, instead of just if a checkpoint was being discarded.
 - Exposed zfs_initialize_chunk_size as a ZoL-style tunable.
 - Updated more existing tests to make use of the new 'zpool wait' functionality; these tests don't exist in Delphix OS.
 - Used the existing ZoL tunable zfs_scan_suspend_progress, together with zinject, in place of a new tunable zfs_scan_max_blks_per_txg.
 - Added support for a non-integral interval argument to zpool wait.

Future work:
ZoL has support for trimming devices, which Delphix OS does not. In the future, 'zpool wait' could be extended to add the ability to wait for trim operations to complete.

Reviewed-by: Matt Ahrens <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: John Gallagher <[email protected]>
Closes #9162
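To illustrate the new subcommand and flag together (a minimal usage sketch; the pool name 'tank' is illustrative):

    # Kick off a scrub, then block until it completes:
    zpool scrub tank
    zpool wait -t scrub tank

    # Equivalent single-command form using the new -w flag:
    zpool scrub -w tank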
Diffstat (limited to 'tests')
-rw-r--r--  tests/runfiles/linux.run | 15
-rw-r--r--  tests/zfs-tests/cmd/libzfs_input_check/libzfs_input_check.c | 18
-rw-r--r--  tests/zfs-tests/include/libtest.shlib | 9
-rw-r--r--  tests/zfs-tests/tests/functional/cli_root/Makefile.am | 3
-rw-r--r--  tests/zfs-tests/tests/functional/cli_root/zfs_destroy/zfs_destroy_common.kshlib | 13
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos.ksh | 6
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_004_pos.ksh | 4
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_005_pos.ksh | 12
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_encrypted_unloaded.ksh | 6
-rw-r--r--  tests/zfs-tests/tests/functional/cli_root/zpool_wait/Makefile.am | 19
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/cleanup.ksh | 20
-rw-r--r--  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/Makefile.am | 10
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/cleanup.ksh | 20
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/setup.ksh | 32
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace.ksh | 71
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace_cancel.ksh | 64
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_resilver.ksh | 64
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_basic.ksh | 49
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_cancel.ksh | 66
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_flag.ksh | 52
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/setup.ksh | 23
-rw-r--r--  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib | 124
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_discard.ksh | 87
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_freeing.ksh | 112
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_basic.ksh | 63
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_cancel.ksh | 77
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_flag.ksh | 88
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_multiple.ksh | 83
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_no_activity.ksh | 52
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove.ksh | 85
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove_cancel.ksh | 62
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_usage.ksh | 47
-rw-r--r--  tests/zfs-tests/tests/functional/cli_user/misc/Makefile.am | 3
-rwxr-xr-x  tests/zfs-tests/tests/functional/cli_user/misc/zpool_wait_privilege.ksh | 35
-rwxr-xr-x  tests/zfs-tests/tests/functional/events/events_002_pos.ksh | 4
-rwxr-xr-x  tests/zfs-tests/tests/functional/online_offline/online_offline_002_neg.ksh | 10
-rw-r--r--  tests/zfs-tests/tests/functional/redundancy/redundancy.kshlib | 9
-rwxr-xr-x  tests/zfs-tests/tests/functional/redundancy/redundancy_004_neg.ksh | 7
-rw-r--r--  tests/zfs-tests/tests/functional/removal/removal.kshlib | 4
-rwxr-xr-x  tests/zfs-tests/tests/functional/removal/removal_with_errors.ksh | 4
40 files changed, 1463 insertions, 69 deletions
diff --git a/tests/runfiles/linux.run b/tests/runfiles/linux.run
index 182e6137a..e87e938ac 100644
--- a/tests/runfiles/linux.run
+++ b/tests/runfiles/linux.run
@@ -488,6 +488,19 @@ tests = ['zpool_upgrade_001_pos', 'zpool_upgrade_002_pos',
'zpool_upgrade_009_neg']
tags = ['functional', 'cli_root', 'zpool_upgrade']
+[tests/functional/cli_root/zpool_wait]
+tests = ['zpool_wait_discard', 'zpool_wait_freeing',
+ 'zpool_wait_initialize_basic', 'zpool_wait_initialize_cancel',
+ 'zpool_wait_initialize_flag', 'zpool_wait_multiple',
+ 'zpool_wait_no_activity', 'zpool_wait_remove', 'zpool_wait_remove_cancel',
+ 'zpool_wait_usage']
+tags = ['functional', 'cli_root', 'zpool_wait']
+
+[tests/functional/cli_root/zpool_wait/scan]
+tests = ['zpool_wait_replace_cancel', 'zpool_wait_resilver', 'zpool_wait_scrub_cancel',
+ 'zpool_wait_replace', 'zpool_wait_scrub_basic', 'zpool_wait_scrub_flag']
+tags = ['functional', 'cli_root', 'zpool_wait']
+
[tests/functional/cli_user/misc]
tests = ['zdb_001_neg', 'zfs_001_neg', 'zfs_allow_001_neg',
'zfs_clone_001_neg', 'zfs_create_001_neg', 'zfs_destroy_001_neg',
@@ -503,7 +516,7 @@ tests = ['zdb_001_neg', 'zfs_001_neg', 'zfs_allow_001_neg',
'zpool_offline_001_neg', 'zpool_online_001_neg', 'zpool_remove_001_neg',
'zpool_replace_001_neg', 'zpool_scrub_001_neg', 'zpool_set_001_neg',
'zpool_status_001_neg', 'zpool_upgrade_001_neg', 'arcstat_001_pos',
- 'arc_summary_001_pos', 'arc_summary_002_neg']
+ 'arc_summary_001_pos', 'arc_summary_002_neg', 'zpool_wait_privilege']
user =
tags = ['functional', 'cli_user', 'misc']
diff --git a/tests/zfs-tests/cmd/libzfs_input_check/libzfs_input_check.c b/tests/zfs-tests/cmd/libzfs_input_check/libzfs_input_check.c
index 38bc379b0..d47954e2b 100644
--- a/tests/zfs-tests/cmd/libzfs_input_check/libzfs_input_check.c
+++ b/tests/zfs-tests/cmd/libzfs_input_check/libzfs_input_check.c
@@ -720,6 +720,21 @@ test_get_bookmark_props(const char *bookmark)
}
static void
+test_wait(const char *pool)
+{
+ nvlist_t *required = fnvlist_alloc();
+ nvlist_t *optional = fnvlist_alloc();
+
+ fnvlist_add_int32(required, "wait_activity", 2);
+ fnvlist_add_uint64(optional, "wait_tag", 0xdeadbeefdeadbeef);
+
+ IOC_INPUT_TEST(ZFS_IOC_WAIT, pool, required, optional, EINVAL);
+
+ nvlist_free(required);
+ nvlist_free(optional);
+}
+
+static void
zfs_ioc_input_tests(const char *pool)
{
char filepath[] = "/tmp/ioc_test_file_XXXXXX";
@@ -805,6 +820,8 @@ zfs_ioc_input_tests(const char *pool)
test_vdev_initialize(pool);
test_vdev_trim(pool);
+ test_wait(pool);
+
/*
* cleanup
*/
@@ -954,6 +971,7 @@ validate_ioc_values(void)
CHECK(ZFS_IOC_BASE + 80 == ZFS_IOC_POOL_TRIM);
CHECK(ZFS_IOC_BASE + 81 == ZFS_IOC_REDACT);
CHECK(ZFS_IOC_BASE + 82 == ZFS_IOC_GET_BOOKMARK_PROPS);
+ CHECK(ZFS_IOC_BASE + 83 == ZFS_IOC_WAIT);
CHECK(LINUX_IOC_BASE + 1 == ZFS_IOC_EVENTS_NEXT);
CHECK(LINUX_IOC_BASE + 2 == ZFS_IOC_EVENTS_CLEAR);
CHECK(LINUX_IOC_BASE + 3 == ZFS_IOC_EVENTS_SEEK);
diff --git a/tests/zfs-tests/include/libtest.shlib b/tests/zfs-tests/include/libtest.shlib
index 8348f8c11..776c953b1 100644
--- a/tests/zfs-tests/include/libtest.shlib
+++ b/tests/zfs-tests/include/libtest.shlib
@@ -2130,7 +2130,7 @@ function check_pool_status # pool token keyword <verbose>
}
#
-# These 6 following functions are instance of check_pool_status()
+# The following functions are instances of check_pool_status()
# is_pool_resilvering - to check if the pool is resilver in progress
# is_pool_resilvered - to check if the pool is resilver completed
# is_pool_scrubbing - to check if the pool is scrub in progress
@@ -2139,6 +2139,7 @@ function check_pool_status # pool token keyword <verbose>
# is_pool_scrub_paused - to check if the pool has scrub paused
# is_pool_removing - to check if the pool is removing a vdev
# is_pool_removed - to check if the pool is remove completed
+# is_pool_discarding - to check if the pool has a checkpoint being discarded
#
function is_pool_resilvering #pool <verbose>
{
@@ -2188,6 +2189,12 @@ function is_pool_removed #pool
return $?
}
+function is_pool_discarding #pool
+{
+ check_pool_status "$1" "checkpoint" "discarding"
+ return $?
+}
+
function wait_for_degraded
{
typeset pool=$1
diff --git a/tests/zfs-tests/tests/functional/cli_root/Makefile.am b/tests/zfs-tests/tests/functional/cli_root/Makefile.am
index 58f789514..01af9d6b9 100644
--- a/tests/zfs-tests/tests/functional/cli_root/Makefile.am
+++ b/tests/zfs-tests/tests/functional/cli_root/Makefile.am
@@ -59,4 +59,5 @@ SUBDIRS = \
zpool_status \
zpool_sync \
zpool_trim \
- zpool_upgrade
+ zpool_upgrade \
+ zpool_wait
diff --git a/tests/zfs-tests/tests/functional/cli_root/zfs_destroy/zfs_destroy_common.kshlib b/tests/zfs-tests/tests/functional/cli_root/zfs_destroy/zfs_destroy_common.kshlib
index 504e3a580..31b880c1d 100644
--- a/tests/zfs-tests/tests/functional/cli_root/zfs_destroy/zfs_destroy_common.kshlib
+++ b/tests/zfs-tests/tests/functional/cli_root/zfs_destroy/zfs_destroy_common.kshlib
@@ -155,21 +155,12 @@ function check_livelist_exists
log_fail "zdb could not find Livelist"
}
-# Wait for the deferred destroy livelists to be removed
-function wait_for_deferred_destroy
-{
- sync
- deleted=$(zdb -vvvvv $TESTPOOL | grep "Deleted Livelist")
- while [[ "$deleted" != "" ]]; do
- deleted=$(zdb -vvvvv $TESTPOOL | grep "Deleted Livelist")
- done
-}
-
# Check that a livelist has been removed, waiting for deferred destroy entries
# to be cleared from zdb.
function check_livelist_gone
{
- wait_for_deferred_destroy
+ log_must zpool wait -t free $TESTPOOL
+ zpool sync
zdb -vvvvv $TESTPOOL | grep "Livelist" && \
log_fail "zdb found Livelist after the clone is deleted."
}
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos.ksh
index 79ceaabd0..98b414072 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos.ksh
@@ -176,11 +176,7 @@ function do_testing #<clear type> <vdevs>
dd if=/dev/zero of=$fbase.$i seek=512 bs=1024 count=$wcount conv=notrunc \
> /dev/null 2>&1
log_must sync
- log_must zpool scrub $TESTPOOL1
- # Wait for the completion of scrub operation
- while is_pool_scrubbing $TESTPOOL1; do
- sleep 1
- done
+ log_must zpool scrub -w $TESTPOOL1
check_err $TESTPOOL1 && \
log_fail "No error generated."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_004_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_004_pos.ksh
index 9b6274cd1..92450d3b9 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_004_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_004_pos.ksh
@@ -73,8 +73,6 @@ log_must is_pool_resilvering $TESTPOOL
log_mustnot zpool scrub $TESTPOOL
log_must set_tunable32 zfs_scan_suspend_progress 0
-while ! is_pool_resilvered $TESTPOOL; do
- sleep 1
-done
+log_must zpool wait -t resilver $TESTPOOL
log_pass "Resilver prevent scrub from starting until the resilver completes"
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_005_pos.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_005_pos.ksh
index 8db6ae980..69a33983d 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_005_pos.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_005_pos.ksh
@@ -48,18 +48,10 @@ log_assert "When scrubbing, detach device should not break system."
log_must zpool scrub $TESTPOOL
log_must zpool detach $TESTPOOL $DISK2
-log_must zpool attach $TESTPOOL $DISK1 $DISK2
-
-while ! is_pool_resilvered $TESTPOOL; do
- sleep 1
-done
+log_must zpool attach -w $TESTPOOL $DISK1 $DISK2
log_must zpool scrub $TESTPOOL
log_must zpool detach $TESTPOOL $DISK1
-log_must zpool attach $TESTPOOL $DISK2 $DISK1
-
-while ! is_pool_resilvered $TESTPOOL; do
- sleep 1
-done
+log_must zpool attach -w $TESTPOOL $DISK2 $DISK1
log_pass "When scrubbing, detach device should not break system."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_encrypted_unloaded.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_encrypted_unloaded.ksh
index 483a683bd..a8c15424d 100755
--- a/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_encrypted_unloaded.ksh
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_scrub/zpool_scrub_encrypted_unloaded.ksh
@@ -58,11 +58,7 @@ done
log_must zfs unmount $TESTPOOL/$TESTFS2
log_must zfs unload-key $TESTPOOL/$TESTFS2
-log_must zpool scrub $TESTPOOL
-
-while ! is_pool_scrubbed $TESTPOOL; do
- sleep 1
-done
+log_must zpool scrub -w $TESTPOOL
log_must check_pool_status $TESTPOOL "scan" "with 0 errors"
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/Makefile.am b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/Makefile.am
new file mode 100644
index 000000000..96e35e2a1
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/Makefile.am
@@ -0,0 +1,19 @@
+pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_wait
+dist_pkgdata_SCRIPTS = \
+ setup.ksh \
+ cleanup.ksh \
+ zpool_wait_discard.ksh \
+ zpool_wait_freeing.ksh \
+ zpool_wait_initialize_basic.ksh \
+ zpool_wait_initialize_cancel.ksh \
+ zpool_wait_initialize_flag.ksh \
+ zpool_wait_multiple.ksh \
+ zpool_wait_no_activity.ksh \
+ zpool_wait_remove.ksh \
+ zpool_wait_remove_cancel.ksh \
+ zpool_wait_usage.ksh
+
+dist_pkgdata_DATA = \
+ zpool_wait.kshlib
+
+SUBDIRS = scan
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/cleanup.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/cleanup.ksh
new file mode 100755
index 000000000..456d2d0c2
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/cleanup.ksh
@@ -0,0 +1,20 @@
+#!/bin/ksh -p
+#
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+
+default_cleanup
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/Makefile.am b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/Makefile.am
new file mode 100644
index 000000000..6a21cac4f
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/Makefile.am
@@ -0,0 +1,10 @@
+pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_wait/scan
+dist_pkgdata_SCRIPTS = \
+ setup.ksh \
+ cleanup.ksh \
+ zpool_wait_replace.ksh \
+ zpool_wait_replace_cancel.ksh \
+ zpool_wait_resilver.ksh \
+ zpool_wait_scrub_basic.ksh \
+ zpool_wait_scrub_cancel.ksh \
+ zpool_wait_scrub_flag.ksh
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/cleanup.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/cleanup.ksh
new file mode 100755
index 000000000..456d2d0c2
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/cleanup.ksh
@@ -0,0 +1,20 @@
+#!/bin/ksh -p
+#
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+
+default_cleanup
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/setup.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/setup.ksh
new file mode 100755
index 000000000..8a6a1a25b
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/setup.ksh
@@ -0,0 +1,32 @@
+#!/bin/ksh -p
+#
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+verify_runnable "global"
+verify_disk_count $DISKS 3
+
+#
+# Set up a pool for use in the tests that do scrubbing and resilvering. Each
+# test leaves the pool in the same state as when it started, so it is safe to
+# share the same setup.
+#
+log_must zpool create -f $TESTPOOL $DISK1
+log_must dd if=/dev/urandom of="/$TESTPOOL/testfile" bs=1k count=256k
+
+log_pass
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace.ksh
new file mode 100755
index 000000000..06df7b51c
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace.ksh
@@ -0,0 +1,71 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when a disk is being replaced.
+#
+# STRATEGY:
+# 1. Attach a disk to the pool to form a two-way mirror.
+# 2. Start a replacement of the new disk.
+# 3. Start 'zpool wait'.
+# 4. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+# 5. Repeat 2-4, except using the '-w' flag with 'zpool replace' instead of
+# using 'zpool wait'.
+#
+
+function cleanup
+{
+ remove_io_delay
+ kill_if_running $pid
+ get_disklist $TESTPOOL | grep $DISK2 >/dev/null && \
+ log_must zpool detach $TESTPOOL $DISK2
+ get_disklist $TESTPOOL | grep $DISK3 >/dev/null && \
+ log_must zpool detach $TESTPOOL $DISK3
+}
+
+function in_progress
+{
+ zpool status $TESTPOOL | grep 'replacing-' >/dev/null
+}
+
+typeset pid
+
+log_onexit cleanup
+
+log_must zpool attach -w $TESTPOOL $DISK1 $DISK2
+
+add_io_delay $TESTPOOL
+
+# Test 'zpool wait -t replace'
+log_must zpool replace $TESTPOOL $DISK2 $DISK3
+log_bkgrnd zpool wait -t replace $TESTPOOL
+pid=$!
+check_while_waiting $pid in_progress
+
+# Test 'zpool replace -w'
+log_bkgrnd zpool replace -w $TESTPOOL $DISK3 $DISK2
+pid=$!
+while ! is_pool_resilvering $TESTPOOL && proc_exists $pid; do
+ log_must sleep .5
+done
+check_while_waiting $pid in_progress
+
+log_pass "'zpool wait -t replace' and 'zpool replace -w' work."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace_cancel.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace_cancel.ksh
new file mode 100755
index 000000000..b6c60b0c5
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace_cancel.ksh
@@ -0,0 +1,64 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when a replacing disk is detached before the replacement
+# completes.
+#
+# STRATEGY:
+# 1. Attach a disk to the pool to form a two-way mirror.
+# 2. Modify tunable so that resilver won't complete while test is running.
+# 3. Start a replacement of the new disk.
+# 4. Start a process that waits for the replace.
+# 5. Wait a few seconds and then check that the wait process is actually
+# waiting.
+# 6. Cancel the replacement by detaching the replacing disk.
+# 7. Check that the wait process returns reasonably promptly.
+#
+
+function cleanup
+{
+ log_must set_tunable32 zfs_scan_suspend_progress 0
+ kill_if_running $pid
+ get_disklist $TESTPOOL | grep $DISK2 >/dev/null && \
+ log_must zpool detach $TESTPOOL $DISK2
+ get_disklist $TESTPOOL | grep $DISK3 >/dev/null && \
+ log_must zpool detach $TESTPOOL $DISK3
+}
+
+typeset pid
+
+log_onexit cleanup
+
+log_must zpool attach -w $TESTPOOL $DISK1 $DISK2
+
+log_must set_tunable32 zfs_scan_suspend_progress 1
+
+log_must zpool replace $TESTPOOL $DISK2 $DISK3
+log_bkgrnd zpool wait -t replace $TESTPOOL
+pid=$!
+
+log_must sleep 3
+proc_must_exist $pid
+
+log_must zpool detach $TESTPOOL $DISK3
+bkgrnd_proc_succeeded $pid
+
+log_pass "'zpool wait -t replace' returns when replacing disk is detached."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_resilver.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_resilver.ksh
new file mode 100755
index 000000000..a938901f7
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_resilver.ksh
@@ -0,0 +1,64 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for resilvering to complete.
+#
+# STRATEGY:
+# 1. Attach a device to the pool so that resilvering starts.
+# 2. Start 'zpool wait'.
+# 3. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+# 4. Repeat 1-3, except using the '-w' flag with 'zpool attach' instead of using
+# 'zpool wait'.
+#
+
+function cleanup
+{
+ remove_io_delay
+ kill_if_running $pid
+ get_disklist $TESTPOOL | grep $DISK2 >/dev/null && \
+ log_must zpool detach $TESTPOOL $DISK2
+}
+
+typeset -r IN_PROGRESS_CHECK="is_pool_resilvering $TESTPOOL"
+typeset pid
+
+log_onexit cleanup
+
+add_io_delay $TESTPOOL
+
+# Test 'zpool wait -t resilver'
+log_must zpool attach $TESTPOOL $DISK1 $DISK2
+log_bkgrnd zpool wait -t resilver $TESTPOOL
+pid=$!
+check_while_waiting $pid "$IN_PROGRESS_CHECK"
+
+log_must zpool detach $TESTPOOL $DISK2
+
+# Test 'zpool attach -w'
+log_bkgrnd zpool attach -w $TESTPOOL $DISK1 $DISK2
+pid=$!
+while ! is_pool_resilvering $TESTPOOL && proc_exists $pid; do
+ log_must sleep .5
+done
+check_while_waiting $pid "$IN_PROGRESS_CHECK"
+
+log_pass "'zpool wait -t resilver' and 'zpool attach -w' work."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_basic.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_basic.ksh
new file mode 100755
index 000000000..d4bb17081
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_basic.ksh
@@ -0,0 +1,49 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for a scrub to complete.
+#
+# STRATEGY:
+# 1. Start a scrub.
+# 2. Start 'zpool wait -t scrub'.
+# 3. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+#
+
+function cleanup
+{
+ remove_io_delay
+ kill_if_running $pid
+}
+
+typeset pid
+
+log_onexit cleanup
+
+# Slow down scrub so that we actually have something to wait for.
+add_io_delay $TESTPOOL
+
+log_must zpool scrub $TESTPOOL
+log_bkgrnd zpool wait -t scrub $TESTPOOL
+pid=$!
+check_while_waiting $pid "is_pool_scrubbing $TESTPOOL"
+
+log_pass "'zpool wait -t scrub' works."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_cancel.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_cancel.ksh
new file mode 100755
index 000000000..1f7f1e42b
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_cancel.ksh
@@ -0,0 +1,66 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when a scrub is paused or canceled.
+#
+# STRATEGY:
+# 1. Modify tunable so that scrubs won't complete while test is running.
+# 2. Start a scrub.
+# 3. Start a process that waits for the scrub.
+# 4. Wait a few seconds and then check that the wait process is actually
+# waiting.
+# 5. Pause the scrub.
+# 6. Check that the wait process returns reasonably promptly.
+# 7. Repeat 2-6, except stop the scrub instead of pausing it.
+#
+
+function cleanup
+{
+ log_must set_tunable32 zfs_scan_suspend_progress 0
+ kill_if_running $pid
+ is_pool_scrubbing $TESTPOOL && log_must zpool scrub -s $TESTPOOL
+}
+
+function do_test
+{
+ typeset stop_cmd=$1
+
+ log_must zpool scrub $TESTPOOL
+ log_bkgrnd zpool wait -t scrub $TESTPOOL
+ pid=$!
+
+ log_must sleep 3
+ proc_must_exist $pid
+
+ log_must eval "$stop_cmd"
+ bkgrnd_proc_succeeded $pid
+}
+
+typeset pid
+
+log_onexit cleanup
+
+log_must set_tunable32 zfs_scan_suspend_progress 1
+
+do_test "zpool scrub -p $TESTPOOL"
+do_test "zpool scrub -s $TESTPOOL"
+
+log_pass "'zpool wait -t scrub' works when scrub is canceled."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_flag.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_flag.ksh
new file mode 100755
index 000000000..9b0da29ad
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_scrub_flag.ksh
@@ -0,0 +1,52 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool scrub -w' waits while scrub is in progress.
+#
+# STRATEGY:
+# 1. Start a scrub with the -w flag.
+# 2. Wait a few seconds and then check that the wait process is actually
+# waiting.
+# 3. Stop the scrub, make sure that the command returns reasonably promptly.
+#
+
+function cleanup
+{
+ log_must set_tunable32 zfs_scan_suspend_progress 0
+ kill_if_running $pid
+}
+
+typeset pid
+
+log_onexit cleanup
+
+log_must set_tunable32 zfs_scan_suspend_progress 1
+
+log_bkgrnd zpool scrub -w $TESTPOOL
+pid=$!
+
+log_must sleep 3
+proc_must_exist $pid
+
+log_must zpool scrub -s $TESTPOOL
+bkgrnd_proc_succeeded $pid
+
+log_pass "'zpool scrub -w' works."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/setup.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/setup.ksh
new file mode 100755
index 000000000..5a9af1846
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/setup.ksh
@@ -0,0 +1,23 @@
+#!/bin/ksh -p
+#
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+verify_runnable "global"
+
+verify_disk_count $DISKS 3
+
+log_pass
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
new file mode 100644
index 000000000..b413f6e9f
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
@@ -0,0 +1,124 @@
+#!/bin/ksh
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+typeset -a disk_array=($(find_disks $DISKS))
+
+typeset -r DISK1=${disk_array[0]}
+typeset -r DISK2=${disk_array[1]}
+typeset -r DISK3=${disk_array[2]}
+
+#
+# When the condition it is waiting for becomes true, 'zpool wait' should return
+# promptly. We want to enforce this, but any check will be racy because it will
+# take some small but indeterminate amount of time for the waiting thread to be
+# woken up and for the process to exit.
+#
+# To deal with this, we provide a grace period after the condition becomes true
+# during which 'zpool wait' can exit. If it hasn't exited by the time the grace
+# period expires we assume something is wrong and fail the test. While there is
+# no value that can really be correct, the idea is we choose something large
+# enough that it shouldn't cause issues in practice.
+#
+typeset -r WAIT_EXIT_GRACE=2.0
+
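+# Slow down I/O to every disk in the pool by injecting a 20ms delay with a
+# single lane per device (zinject -D <delay-ms>:<lanes>), so that background
+# activity lasts long enough to be observed while waiting.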
+function add_io_delay # pool
+{
+ for disk in $(get_disklist $1); do
+ log_must zinject -d $disk -D20:1 $1
+ done
+}
+
+function remove_io_delay
+{
+ log_must zinject -c all
+}
+
+function proc_exists # pid
+{
+ ps -p $1 >/dev/null
+}
+
+function proc_must_exist # pid
+{
+ proc_exists $1 || log_fail "zpool process exited too soon"
+}
+
+function proc_must_not_exist # pid
+{
+ proc_exists $1 && log_fail "zpool process took too long to exit"
+}
+
+function get_time
+{
+ date +'%H:%M:%S'
+}
+
+function kill_if_running
+{
+ typeset pid=$1
+ [[ $pid ]] && proc_exists $pid && log_must kill -s TERM $pid
+}
+
+# Log a command and then start it running in the background
+function log_bkgrnd
+{
+ log_note "$(get_time) Starting cmd in background '$@'"
+ "$@" &
+}
+
+# Check that a background process has completed and exited with a status of 0
+function bkgrnd_proc_succeeded
+{
+ typeset pid=$1
+
+ log_must sleep $WAIT_EXIT_GRACE
+
+ proc_must_not_exist $pid
+ wait $pid || log_fail "zpool process exited with status $?"
+ log_note "$(get_time) wait completed successfully"
+}
+
+#
+# Check that 'zpool wait' returns reasonably promptly after the condition
+# waited for becomes true, and not before.
+#
+function check_while_waiting
+{
+ # The pid of the waiting process
+ typeset wait_proc_pid=$1
+ # A check that should be true while the activity is in progress
+ typeset activity_check=$2
+
+ log_note "$(get_time) waiting for process $wait_proc_pid using" \
+ "activity check '$activity_check'"
+ while proc_exists $wait_proc_pid && eval "$activity_check"; do
+ log_must sleep .5
+ done
+
+ #
+ # If the activity being waited on is still in progress, then zpool wait
+ # exited too soon.
+ #
+ log_mustnot eval "$activity_check"
+
+ bkgrnd_proc_succeeded $wait_proc_pid
+}
+
+# Whether any vdev in the given pool is initializing
+function is_vdev_initializing # pool
+{
+ zpool status -i "$1" | grep 'initialized, started' >/dev/null
+}
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_discard.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_discard.ksh
new file mode 100755
index 000000000..47cf374d9
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_discard.ksh
@@ -0,0 +1,87 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for checkpoint discard to complete.
+#
+# STRATEGY:
+# 1. Create a pool.
+# 2. Add some data to the pool.
+# 3. Checkpoint the pool and delete the data so that the space is unique to the
+# checkpoint.
+# 4. Discard the checkpoint using the '-w' flag.
+# 5. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+# 6. Repeat 2-5, but using 'zpool wait' instead of the '-w' flag.
+#
+
+function cleanup
+{
+ log_must zinject -c all
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+ kill_if_running $pid
+
+ [[ $default_mem_limit ]] && log_must set_tunable64 \
+ zfs_spa_discard_memory_limit $default_mem_limit
+}
+
+function do_test
+{
+ typeset use_wait_flag=$1
+
+ log_must dd if=/dev/urandom of="$TESTFILE" bs=128k count=1k
+ log_must zpool checkpoint $TESTPOOL
+
+ # Make sure bulk of space is unique to checkpoint
+ log_must rm "$TESTFILE"
+
+ log_must zinject -d $DISK1 -D20:1 $TESTPOOL
+
+ if $use_wait_flag; then
+ log_bkgrnd zpool checkpoint -dw $TESTPOOL
+ pid=$!
+
+ while ! is_pool_discarding $TESTPOOL && proc_exists $pid; do
+ log_must sleep .5
+ done
+ else
+ log_must zpool checkpoint -d $TESTPOOL
+ log_bkgrnd zpool wait -t discard $TESTPOOL
+ pid=$!
+ fi
+
+ check_while_waiting $pid "is_pool_discarding $TESTPOOL"
+ log_must zinject -c all
+}
+
+typeset -r TESTFILE="/$TESTPOOL/testfile"
+typeset pid default_mem_limit
+
+log_onexit cleanup
+
+default_mem_limit=$(get_tunable zfs_spa_discard_memory_limit)
+log_must set_tunable64 zfs_spa_discard_memory_limit 32
+
+log_must zpool create $TESTPOOL $DISK1
+
+do_test true
+do_test false
+
+log_pass "'zpool wait -t discard' and 'zpool checkpoint -dw' work."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_freeing.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_freeing.ksh
new file mode 100755
index 000000000..88dbfb8cc
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_freeing.ksh
@@ -0,0 +1,112 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for background freeing to complete.
+#
+# STRATEGY:
+# 1. Create a pool.
+# 2. Modify tunables to make sure freeing is slow enough to observe.
+# 3. Create a file system with some data.
+# 4. Destroy the file system and call 'zpool wait'.
+# 5. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+# 6. Repeat 3-5, except destroy a snapshot instead of a filesystem.
+# 7. Repeat 3-5, except destroy a clone.
+#
+
+function cleanup
+{
+ log_must set_tunable64 zfs_async_block_max_blocks $default_async_block_max_blocks
+ log_must set_tunable64 zfs_livelist_max_entries $default_max_livelist_entries
+ log_must set_tunable64 zfs_livelist_min_percent_shared $default_min_pct_shared
+
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+ kill_if_running $pid
+}
+
+function test_wait
+{
+ log_bkgrnd zpool wait -t free $TESTPOOL
+ pid=$!
+ check_while_waiting $pid '[[ $(get_pool_prop freeing $TESTPOOL) != "0" ]]'
+}
+
+typeset -r FS="$TESTPOOL/$TESTFS1"
+typeset -r SNAP="$FS@snap1"
+typeset -r CLONE="$TESTPOOL/clone"
+typeset pid default_max_livelist_entries default_min_pct_shared
+typeset default_async_block_max_blocks
+
+log_onexit cleanup
+
+log_must zpool create $TESTPOOL $DISK1
+
+#
+# Limit the number of blocks that can be freed in a single txg. This slows down
+# freeing so that we actually have something to wait for.
+#
+default_async_block_max_blocks=$(get_tunable zfs_async_block_max_blocks)
+log_must set_tunable64 zfs_async_block_max_blocks 8
+#
+# Space from clones gets freed one livelist per txg instead of being controlled
+# by zfs_async_block_max_blocks. Limit the rate at which space is freed by
+# limiting the size of livelists so that we end up with a number of them.
+#
+default_max_livelist_entries=$(get_tunable zfs_livelist_max_entries)
+log_must set_tunable64 zfs_livelist_max_entries 16
+# Don't disable livelists, no matter how much clone diverges from snapshot
+default_min_pct_shared=$(get_tunable zfs_livelist_min_percent_shared)
+log_must set_tunable64 zfs_livelist_min_percent_shared -1
+
+#
+# Test waiting for space from destroyed filesystem to be freed
+#
+log_must zfs create "$FS"
+log_must dd if=/dev/zero of="/$FS/testfile" bs=1M count=128
+log_must zfs destroy "$FS"
+test_wait
+
+#
+# Test waiting for space from destroyed snapshot to be freed
+#
+log_must zfs create "$FS"
+log_must dd if=/dev/zero of="/$FS/testfile" bs=1M count=128
+log_must zfs snapshot "$SNAP"
+# Make sure bulk of space is unique to snapshot
+log_must rm "/$FS/testfile"
+log_must zfs destroy "$SNAP"
+test_wait
+
+#
+# Test waiting for space from destroyed clone to be freed
+#
+log_must zfs snapshot "$SNAP"
+log_must zfs clone "$SNAP" "$CLONE"
+# Add some data to the clone
+for i in {1..50}; do
+ log_must dd if=/dev/urandom of="/$CLONE/testfile$i" bs=1k count=512
+ # Force each new file to be tracked by a new livelist
+ log_must zpool sync $TESTPOOL
+done
+log_must zfs destroy "$CLONE"
+test_wait
+
+log_pass "'zpool wait -t freeing' works."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_basic.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_basic.ksh
new file mode 100755
index 000000000..e19360e85
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_basic.ksh
@@ -0,0 +1,63 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for devices to complete initializing
+#
+# STRATEGY:
+# 1. Create a pool.
+# 2. Modify a tunable to make sure initializing is slow enough to observe.
+# 3. Start initializing the vdev in the pool.
+# 4. Start 'zpool wait'.
+# 5. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+#
+
+function cleanup
+{
+ kill_if_running $pid
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+
+ [[ -d "$TESTDIR" ]] && log_must rm -r "$TESTDIR"
+
+ [[ "$default_chunk_sz" ]] && \
+ log_must set_tunable64 zfs_initialize_chunk_size $default_chunk_sz
+}
+
+typeset -r FILE_VDEV="$TESTDIR/file_vdev"
+typeset pid default_chunk_sz
+
+log_onexit cleanup
+
+default_chunk_sz=$(get_tunable zfs_initialize_chunk_size)
+log_must set_tunable64 zfs_initialize_chunk_size 2048
+
+log_must mkdir "$TESTDIR"
+log_must mkfile 256M "$FILE_VDEV"
+log_must zpool create -f $TESTPOOL "$FILE_VDEV"
+
+log_must zpool initialize $TESTPOOL "$FILE_VDEV"
+
+log_bkgrnd zpool wait -t initialize $TESTPOOL
+pid=$!
+
+check_while_waiting $pid "is_vdev_initializing $TESTPOOL"
+
+log_pass "'zpool wait -t initialize' works."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_cancel.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_cancel.ksh
new file mode 100755
index 000000000..ced0a482f
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_cancel.ksh
@@ -0,0 +1,77 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when an initialization operation is canceled.
+#
+# STRATEGY:
+# 1. Create a pool.
+# 2. Modify a tunable to make sure initializing is slow enough that it won't
+# complete before the test finishes.
+# 3. Start initializing the vdev in the pool.
+# 4. Start 'zpool wait'.
+# 5. Wait a few seconds and then check that the wait process is actually
+# waiting.
+# 6. Cancel the initialization of the device.
+# 7. Check that the wait process returns reasonably promptly.
+# 8. Repeat 3-7, except pause the initialization instead of canceling it.
+#
+
+function cleanup
+{
+ kill_if_running $pid
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+
+ [[ "$default_chunk_sz" ]] &&
+ log_must set_tunable64 zfs_initialize_chunk_size $default_chunk_sz
+}
+
+function do_test
+{
+ typeset stop_cmd=$1
+
+ log_must zpool initialize $TESTPOOL $DISK1
+
+ log_bkgrnd zpool wait -t initialize $TESTPOOL
+ pid=$!
+
+ # Make sure that we are really waiting
+ log_must sleep 3
+ proc_must_exist $pid
+
+ # Stop initialization and make sure process returns
+ log_must eval "$stop_cmd"
+ bkgrnd_proc_succeeded $pid
+}
+
+typeset pid default_chunk_sz
+
+log_onexit cleanup
+
+# Make sure the initialization takes a while
+default_chunk_sz=$(get_tunable zfs_initialize_chunk_size)
+log_must set_tunable64 zfs_initialize_chunk_size 512
+
+log_must zpool create $TESTPOOL $DISK1
+
+do_test "zpool initialize -c $TESTPOOL $DISK1"
+do_test "zpool initialize -s $TESTPOOL $DISK1"
+
+log_pass "'zpool wait' works when initialization is stopped before completion."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_flag.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_flag.ksh
new file mode 100755
index 000000000..c95e8661b
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_initialize_flag.ksh
@@ -0,0 +1,88 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# -w flag for 'zpool initialize' waits for the completion of all and only those
+# initializations kicked off by that invocation.
+#
+# STRATEGY:
+# 1. Create a pool with 3 disks.
+# 2. Start initializing disks 1 and 2 with one invocation of
+# 'zpool initialize -w'
+# 3. Start initializing disk 3 with a second invocation of 'zpool initialize -w'
+# 4. Cancel the initialization of disk 1. Check that neither waiting process
+# exits.
+# 5. Cancel the initialization of disk 3. Check that only the second waiting
+# process exits.
+# 6. Cancel the initialization of disk 2. Check that the first waiting process
+# exits.
+#
+
+function cleanup
+{
+ kill_if_running $init12_pid
+ kill_if_running $init3_pid
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+
+ [[ "$default_chunk_sz" ]] &&
+ log_must set_tunable64 zfs_initialize_chunk_size $default_chunk_sz
+}
+
+typeset init12_pid init3_pid default_chunk_sz
+
+log_onexit cleanup
+
+log_must zpool create -f $TESTPOOL $DISK1 $DISK2 $DISK3
+
+# Make sure the initialization takes a while
+default_chunk_sz=$(get_tunable zfs_initialize_chunk_size)
+log_must set_tunable64 zfs_initialize_chunk_size 512
+
+log_bkgrnd zpool initialize -w $TESTPOOL $DISK1 $DISK2
+init12_pid=$!
+log_bkgrnd zpool initialize -w $TESTPOOL $DISK3
+init3_pid=$!
+
+# Make sure that we are really waiting
+log_must sleep 3
+proc_must_exist $init12_pid
+proc_must_exist $init3_pid
+
+#
+# Cancel initialization of one of disks started by init12, make sure neither
+# process exits
+#
+log_must zpool initialize -c $TESTPOOL $DISK1
+proc_must_exist $init12_pid
+proc_must_exist $init3_pid
+
+#
+# Cancel initialization started by init3, make sure that process exits, but
+# init12 doesn't
+#
+log_must zpool initialize -c $TESTPOOL $DISK3
+proc_must_exist $init12_pid
+bkgrnd_proc_succeeded $init3_pid
+
+# Cancel last initialization started by init12, make sure it returns.
+log_must zpool initialize -c $TESTPOOL $DISK2
+bkgrnd_proc_succeeded $init12_pid
+
+log_pass "'zpool initialize -w' works."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_multiple.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_multiple.ksh
new file mode 100755
index 000000000..b17ea7ff5
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_multiple.ksh
@@ -0,0 +1,83 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for multiple activities.
+#
+# STRATEGY:
+# 1. Create a pool with some data.
+# 2. Alternate running two different activities (scrub and initialize),
+# making sure that they overlap such that one of the two is always
+# running.
+# 3. Wait for both activities with a single invocation of zpool wait.
+# 4. Check that zpool wait doesn't return until both activities have
+# stopped.
+#
+
+function cleanup
+{
+ kill_if_running $pid
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+
+ [[ "$default_chunk_sz" ]] && log_must set_tunable64 \
+ zfs_initialize_chunk_size $default_chunk_sz
+ log_must set_tunable32 zfs_scan_suspend_progress 0
+}
+
+typeset pid default_chunk_sz
+
+log_onexit cleanup
+
+log_must zpool create -f $TESTPOOL $DISK1
+log_must dd if=/dev/urandom of="/$TESTPOOL/testfile" bs=64k count=1k
+
+default_chunk_sz=$(get_tunable zfs_initialize_chunk_size)
+log_must set_tunable64 zfs_initialize_chunk_size 512
+log_must set_tunable32 zfs_scan_suspend_progress 1
+
+log_must zpool scrub $TESTPOOL
+
+log_bkgrnd zpool wait -t scrub,initialize $TESTPOOL
+pid=$!
+
+log_must sleep 2
+
+log_must zpool initialize $TESTPOOL $DISK1
+log_must zpool scrub -s $TESTPOOL
+
+log_must sleep 2
+
+log_must zpool scrub $TESTPOOL
+log_must zpool initialize -s $TESTPOOL $DISK1
+
+log_must sleep 2
+
+log_must zpool initialize $TESTPOOL $DISK1
+log_must zpool scrub -s $TESTPOOL
+
+log_must sleep 2
+
+proc_must_exist $pid
+
+# Cancel last activity, zpool wait should return
+log_must zpool initialize -s $TESTPOOL $DISK1
+bkgrnd_proc_succeeded $pid
+
+log_pass "'zpool wait' works when waiting for mutliple activities."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_no_activity.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_no_activity.ksh
new file mode 100755
index 000000000..ebe38b45d
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_no_activity.ksh
@@ -0,0 +1,52 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' returns immediately when there is no activity in progress.
+#
+# STRATEGY:
+# 1. Create an empty pool with no activity
+# 2. Run 'zpool wait' for various activities, making sure it always returns
+#    promptly
+#
+
+function cleanup {
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+}
+
+typeset -r TIMEOUT_SECS=1
+
+log_onexit cleanup
+log_must zpool create $TESTPOOL $DISK1
+
+# Wait for each activity
+typeset activities=(free discard initialize replace remove resilver scrub)
+for activity in ${activities[@]}; do
+ log_must timeout $TIMEOUT_SECS zpool wait -t $activity $TESTPOOL
+done
+
+# Wait for multiple activities at the same time
+log_must timeout $TIMEOUT_SECS zpool wait -t scrub,initialize $TESTPOOL
+log_must timeout $TIMEOUT_SECS zpool wait -t free,remove,discard $TESTPOOL
+
+# Wait for all activities at the same time
+log_must timeout $TIMEOUT_SECS zpool wait $TESTPOOL
+
+log_pass "'zpool wait' returns immediately when no activity is in progress."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove.ksh
new file mode 100755
index 000000000..7d089aee3
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove.ksh
@@ -0,0 +1,85 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when waiting for a device to be removed.
+#
+# STRATEGY:
+# 1. Create a pool with two disks and some data.
+# 2. Modify a tunable to make sure removal doesn't make any progress.
+# 3. Start removing one of the disks.
+# 4. Start 'zpool wait'.
+# 5. Sleep for a few seconds and check that the process is actually waiting.
+# 6. Modify tunable to allow removal to complete.
+# 7. Monitor the waiting process to make sure it returns neither too soon nor
+# too late.
+# 8. Repeat 1-7, except using the '-w' flag for 'zpool remove' instead of using
+# 'zpool wait'.
+#
+
+function cleanup
+{
+ kill_if_running $pid
+ log_must set_tunable32 zfs_removal_suspend_progress 0
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+}
+
+function do_test
+{
+ typeset use_flag=$1
+
+ log_must zpool create -f $TESTPOOL $DISK1 $DISK2
+ log_must dd if=/dev/urandom of="/$TESTPOOL/testfile" bs=1k count=16k
+
+ # Start removal, but don't allow it to make any progress at first
+ log_must set_tunable32 zfs_removal_suspend_progress 1
+
+ if $use_flag; then
+ log_bkgrnd zpool remove -w $TESTPOOL $DISK1
+ pid=$!
+
+ while ! is_pool_removing $TESTPOOL && proc_exists $pid; do
+ log_must sleep .5
+ done
+ else
+ log_must zpool remove $TESTPOOL $DISK1
+ log_bkgrnd zpool wait -t remove $TESTPOOL
+ pid=$!
+ fi
+
+	# Make sure the background process is actually waiting
+ log_must sleep 3
+ proc_must_exist $pid
+
+ # Unpause removal, and wait for it to finish
+ log_must set_tunable32 zfs_removal_suspend_progress 0
+ check_while_waiting $pid "is_pool_removing $TESTPOOL"
+
+ log_must zpool destroy $TESTPOOL
+}
+
+log_onexit cleanup
+
+typeset pid
+
+do_test true
+do_test false
+
+log_pass "'zpool wait -t remove' and 'zpool remove -w' work."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove_cancel.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove_cancel.ksh
new file mode 100755
index 000000000..42bef8b9f
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_remove_cancel.ksh
@@ -0,0 +1,62 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when device removal is canceled.
+#
+# STRATEGY:
+# 1. Create a pool with two disks and some data.
+# 2. Modify a tunable to make sure removal won't complete while the test is
+#    running.
+# 3. Start removing one of the disks.
+# 4. Start 'zpool wait'.
+# 5. Sleep for a few seconds and check that the process is actually waiting.
+# 6. Cancel the removal of the device.
+# 7. Check that the wait process returns reasonably promptly.
+#
+
+function cleanup
+{
+ kill_if_running $pid
+ log_must set_tunable32 zfs_removal_suspend_progress 0
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+}
+
+log_onexit cleanup
+
+typeset pid
+
+log_must zpool create -f $TESTPOOL $DISK1 $DISK2
+
+log_must dd if=/dev/urandom of="/$TESTPOOL/testfile" bs=1k count=16k
+
+# Start removal, but don't allow it to make any progress
+log_must set_tunable32 zfs_removal_suspend_progress 1
+log_must zpool remove $TESTPOOL $DISK1
+
+log_bkgrnd zpool wait -t remove $TESTPOOL
+pid=$!
+
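+# Make sure the 'zpool wait' process is actually waiting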
+log_must sleep 3
+proc_must_exist $pid
+
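+# Cancel the removal; the waiting process should notice and exit successfully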
+log_must zpool remove -s $TESTPOOL
+bkgrnd_proc_succeeded $pid
+
+log_pass "'zpool wait -t remove' works when removal is canceled."
diff --git a/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_usage.ksh b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_usage.ksh
new file mode 100755
index 000000000..2d6f89709
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_root/zpool_wait/zpool_wait_usage.ksh
@@ -0,0 +1,47 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2018 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+. $STF_SUITE/tests/functional/cli_root/zpool_wait/zpool_wait.kshlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' behaves sensibly when invoked incorrectly.
+#
+# STRATEGY:
+# 1. Invoke 'zpool wait' incorrectly and check that it exits with a non-zero
+# status.
+# 2. Invoke 'zpool wait' with missing or bad arguments and check that it prints
+# some sensible error message.
+#
+
+function cleanup {
+ poolexists $TESTPOOL && destroy_pool $TESTPOOL
+}
+
+log_onexit cleanup
+log_must zpool create $TESTPOOL $DISK1
+
+log_mustnot zpool wait
+
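+# Invalid invocations should print a sensible error message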
+zpool wait 2>&1 | grep -i usage || \
+ log_fail "Usage message did not contain the word 'usage'."
+zpool wait -t scrub fakepool 2>&1 | grep -i 'no such pool' || \
+ log_fail "Error message did not contain phrase 'no such pool'."
+zpool wait -t foo $TESTPOOL 2>&1 | grep -i 'invalid activity' || \
+ log_fail "Error message did not contain phrase 'invalid activity'."
+
+log_pass "'zpool wait' behaves sensibly when invoked incorrectly."
diff --git a/tests/zfs-tests/tests/functional/cli_user/misc/Makefile.am b/tests/zfs-tests/tests/functional/cli_user/misc/Makefile.am
index 49138d927..2d38e6577 100644
--- a/tests/zfs-tests/tests/functional/cli_user/misc/Makefile.am
+++ b/tests/zfs-tests/tests/functional/cli_user/misc/Makefile.am
@@ -45,7 +45,8 @@ dist_pkgdata_SCRIPTS = \
zpool_upgrade_001_neg.ksh \
arcstat_001_pos.ksh \
arc_summary_001_pos.ksh \
- arc_summary_002_neg.ksh
+ arc_summary_002_neg.ksh \
+ zpool_wait_privilege.ksh
dist_pkgdata_DATA = \
misc.cfg
diff --git a/tests/zfs-tests/tests/functional/cli_user/misc/zpool_wait_privilege.ksh b/tests/zfs-tests/tests/functional/cli_user/misc/zpool_wait_privilege.ksh
new file mode 100755
index 000000000..42a2dd2c6
--- /dev/null
+++ b/tests/zfs-tests/tests/functional/cli_user/misc/zpool_wait_privilege.ksh
@@ -0,0 +1,35 @@
+#!/bin/ksh -p
+#
+# This file and its contents are supplied under the terms of the
+# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source. A copy of the CDDL is also available via the Internet at
+# http://www.illumos.org/license/CDDL.
+#
+
+#
+# Copyright (c) 2019 by Delphix. All rights reserved.
+#
+
+. $STF_SUITE/include/libtest.shlib
+
+#
+# DESCRIPTION:
+# 'zpool wait' works when run as an unprivileged user.
+#
+
+verify_runnable "global"
+
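+# With no activity in the pool, 'zpool wait' should return immediately even
+# when run without privileges.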
+log_must zpool wait $TESTPOOL
+
+# Make sure printing status works as an unprivileged user.
+output=$(zpool wait -H $TESTPOOL 1) || \
+ log_fail "'zpool wait -H $TESTPOOL 1' failed"
+# There should be at least one line of status output in a pool with no
+# activity.
+log_must eval '[[ $(wc -l <<<$output) -ge 1 ]]'
+
+log_pass "zpool wait works when run as a user"
diff --git a/tests/zfs-tests/tests/functional/events/events_002_pos.ksh b/tests/zfs-tests/tests/functional/events/events_002_pos.ksh
index 495b2bbad..76ad6237f 100755
--- a/tests/zfs-tests/tests/functional/events/events_002_pos.ksh
+++ b/tests/zfs-tests/tests/functional/events/events_002_pos.ksh
@@ -81,9 +81,7 @@ log_must truncate -s 0 $ZED_DEBUG_LOG
# 4. Generate additional events.
log_must zpool offline $MPOOL $VDEV1
log_must zpool online $MPOOL $VDEV1
-while ! is_pool_resilvered $MPOOL; do
- sleep 1
-done
+log_must zpool wait -t resilver $MPOOL
log_must zpool scrub $MPOOL
diff --git a/tests/zfs-tests/tests/functional/online_offline/online_offline_002_neg.ksh b/tests/zfs-tests/tests/functional/online_offline/online_offline_002_neg.ksh
index 99b9d6bf1..19576a821 100755
--- a/tests/zfs-tests/tests/functional/online_offline/online_offline_002_neg.ksh
+++ b/tests/zfs-tests/tests/functional/online_offline/online_offline_002_neg.ksh
@@ -90,10 +90,7 @@ while [[ $i -lt ${#disks[*]} ]]; do
log_must zpool online $TESTPOOL ${disks[$i]}
check_state $TESTPOOL ${disks[$i]} "online" || \
log_fail "Failed to set ${disks[$i]} online"
- # Delay for resilver to complete
- while ! is_pool_resilvered $TESTPOOL; do
- log_must sleep 1
- done
+ log_must zpool wait -t resilver $TESTPOOL
log_must zpool clear $TESTPOOL
while [[ $j -lt ${#disks[*]} ]]; do
if [[ $j -eq $i ]]; then
@@ -125,10 +122,7 @@ while [[ $i -lt ${#disks[*]} ]]; do
log_must zpool online $TESTPOOL ${disks[$i]}
check_state $TESTPOOL ${disks[$i]} "online" || \
log_fail "Failed to set ${disks[$i]} online"
- # Delay for resilver to complete
- while ! is_pool_resilvered $TESTPOOL; do
- log_must sleep 1
- done
+ log_must zpool wait -t resilver $TESTPOOL
log_must zpool clear $TESTPOOL
fi
((i++))
diff --git a/tests/zfs-tests/tests/functional/redundancy/redundancy.kshlib b/tests/zfs-tests/tests/functional/redundancy/redundancy.kshlib
index ab36d00de..9bf2df0d1 100644
--- a/tests/zfs-tests/tests/functional/redundancy/redundancy.kshlib
+++ b/tests/zfs-tests/tests/functional/redundancy/redundancy.kshlib
@@ -229,14 +229,7 @@ function replace_missing_devs
log_must gnudd if=/dev/zero of=$vdev \
bs=1024k count=$(($MINDEVSIZE / (1024 * 1024))) \
oflag=fdatasync
- log_must zpool replace -f $pool $vdev $vdev
- while true; do
- if ! is_pool_resilvered $pool ; then
- log_must sleep 2
- else
- break
- fi
- done
+ log_must zpool replace -wf $pool $vdev $vdev
done
}
diff --git a/tests/zfs-tests/tests/functional/redundancy/redundancy_004_neg.ksh b/tests/zfs-tests/tests/functional/redundancy/redundancy_004_neg.ksh
index 01b819dc6..cb4603af8 100755
--- a/tests/zfs-tests/tests/functional/redundancy/redundancy_004_neg.ksh
+++ b/tests/zfs-tests/tests/functional/redundancy/redundancy_004_neg.ksh
@@ -54,12 +54,7 @@ typeset -i cnt=$(random 2 5)
setup_test_env $TESTPOOL "" $cnt
damage_devs $TESTPOOL 1 "keep_label"
-log_must zpool scrub $TESTPOOL
-
-# Wait for the scrub to wrap, or is_healthy will be wrong.
-while ! is_pool_scrubbed $TESTPOOL; do
- sleep 1
-done
+log_must zpool scrub -w $TESTPOOL
log_mustnot is_healthy $TESTPOOL
diff --git a/tests/zfs-tests/tests/functional/removal/removal.kshlib b/tests/zfs-tests/tests/functional/removal/removal.kshlib
index fa0174db0..360b06d53 100644
--- a/tests/zfs-tests/tests/functional/removal/removal.kshlib
+++ b/tests/zfs-tests/tests/functional/removal/removal.kshlib
@@ -28,9 +28,7 @@ function wait_for_removal # pool
typeset pool=$1
typeset callback=$2
- while is_pool_removing $pool; do
- sleep 1
- done
+ log_must zpool wait -t remove $pool
#
# The pool state changes before the TXG finishes syncing; wait for
diff --git a/tests/zfs-tests/tests/functional/removal/removal_with_errors.ksh b/tests/zfs-tests/tests/functional/removal/removal_with_errors.ksh
index 2ef56706a..cb836e92b 100755
--- a/tests/zfs-tests/tests/functional/removal/removal_with_errors.ksh
+++ b/tests/zfs-tests/tests/functional/removal/removal_with_errors.ksh
@@ -64,9 +64,7 @@ function wait_for_removing_cancel
{
typeset pool=$1
- while is_pool_removing $pool; do
- sleep 1
- done
+ log_must zpool wait -t remove $pool
#
# The pool state changes before the TXG finishes syncing; wait for