author    | Tom Caputi <[email protected]>        | 2017-11-15 20:27:01 -0500
committer | Brian Behlendorf <[email protected]> | 2017-11-15 17:27:01 -0800
commit    | d4a72f23863382bdf6d0ae33196f5b5decbc48fd (patch)
tree      | 1084ea930b9a1ef46e58d1757943ab3ad66c22c4 /include/sys/spa_impl.h
parent    | e301113c17673a290098850830cf2e6d1a1fcbe3 (diff)
Sequential scrub and resilvers
Currently, scrubs and resilvers can take an extremely
long time to complete. This is largely due to the fact
that zfs scans process pools in logical order, as
determined by each block's bookmark. This makes sense
from a simplicity perspective, but blocks in zfs are
often scattered randomly across disks, particularly
due to zfs's copy-on-write mechanisms.
This patch improves performance by splitting scrubs
and resilvers into a metadata scanning phase and an
I/O issuing phase. The metadata scan reads through the
structure of the pool and gathers an in-memory queue
of I/Os, sorted by size and offset on disk. The issuing
phase then issues the scrub I/Os as sequentially as
possible, greatly improving performance.
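The queue-then-issue split can be pictured with a small sketch. This is
illustrative plain C only: the array-plus-qsort() queue and the names below
stand in for the per-vdev sorted queues the real scan code maintains.

/*
 * Illustrative sketch only -- not the dsl_scan.c implementation.
 * Phase 1 collects blocks as the metadata scan finds them (logical order);
 * phase 2 sorts them by on-disk offset and issues them sequentially.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct scan_io {
	uint64_t	sio_offset;	/* on-disk offset of the block */
	uint64_t	sio_size;	/* size of the scrub read */
} scan_io_t;

/* Phase 1: the metadata scan appends each discovered block to the queue. */
static void
scan_enqueue(scan_io_t **q, size_t *n, uint64_t offset, uint64_t size)
{
	*q = realloc(*q, (*n + 1) * sizeof (scan_io_t));
	(*q)[*n].sio_offset = offset;
	(*q)[*n].sio_size = size;
	(*n)++;
}

static int
sio_cmp(const void *a, const void *b)
{
	const scan_io_t *l = a, *r = b;

	if (l->sio_offset < r->sio_offset)
		return (-1);
	return (l->sio_offset > r->sio_offset);
}

/* Phase 2: sort by offset and issue the reads in (mostly) sequential order. */
static void
scan_issue(scan_io_t *q, size_t n)
{
	qsort(q, n, sizeof (scan_io_t), sio_cmp);
	for (size_t i = 0; i < n; i++)
		printf("scrub read: offset=%llu size=%llu\n",
		    (unsigned long long)q[i].sio_offset,
		    (unsigned long long)q[i].sio_size);
}

int
main(void)
{
	scan_io_t *q = NULL;
	size_t n = 0;

	/* Blocks arrive in logical (bookmark) order, scattered on disk. */
	scan_enqueue(&q, &n, 7340032, 131072);
	scan_enqueue(&q, &n, 65536, 131072);
	scan_enqueue(&q, &n, 1048576, 16384);

	scan_issue(q, n);
	free(q);
	return (0);
}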
This patch also updates and cleans up some of the scan
code, which has not been touched in several years.
Reviewed-by: Brian Behlendorf <[email protected]>
Authored-by: Saso Kiselkov <[email protected]>
Authored-by: Alek Pinchuk <[email protected]>
Authored-by: Tom Caputi <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #3625
Closes #6256
Diffstat (limited to 'include/sys/spa_impl.h')
-rw-r--r-- | include/sys/spa_impl.h | 5
1 file changed, 3 insertions, 2 deletions
diff --git a/include/sys/spa_impl.h b/include/sys/spa_impl.h
index 926a0bc24..2fc598016 100644
--- a/include/sys/spa_impl.h
+++ b/include/sys/spa_impl.h
@@ -185,9 +185,9 @@ struct spa {
 	uberblock_t	spa_ubsync;		/* last synced uberblock */
 	uberblock_t	spa_uberblock;		/* current uberblock */
 	boolean_t	spa_extreme_rewind;	/* rewind past deferred frees */
-	uint64_t	spa_last_io;		/* lbolt of last non-scan I/O */
 	kmutex_t	spa_scrub_lock;		/* resilver/scrub lock */
-	uint64_t	spa_scrub_inflight;	/* in-flight scrub I/Os */
+	uint64_t	spa_scrub_inflight;	/* in-flight scrub bytes */
+	uint64_t	spa_load_verify_ios;	/* in-flight verification IOs */
 	kcondvar_t	spa_scrub_io_cv;	/* scrub I/O completion */
 	uint8_t		spa_scrub_active;	/* active or suspended? */
 	uint8_t		spa_scrub_type;		/* type of scrub we're doing */
@@ -198,6 +198,7 @@ struct spa {
 	uint64_t	spa_scan_pass_scrub_pause;	/* scrub pause time */
 	uint64_t	spa_scan_pass_scrub_spent_paused; /* total paused */
 	uint64_t	spa_scan_pass_exam;	/* examined bytes per pass */
+	uint64_t	spa_scan_pass_issued;	/* issued bytes per pass */
 	kmutex_t	spa_async_lock;		/* protect async state */
 	kthread_t	*spa_async_thread;	/* thread doing async task */
 	int		spa_async_suspended;	/* async tasks suspended */
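The spa_impl.h hunk above changes the scrub accounting: spa_scrub_inflight now
counts in-flight scrub bytes rather than I/Os, load-verify I/Os get their own
spa_load_verify_ios counter, and spa_scan_pass_issued records issued bytes per
pass. As a hedged illustration of byte-based in-flight throttling under a lock
and condvar (names such as scrub_state_t and ss_max_inflight_bytes are
hypothetical, not taken from dsl_scan.c):

/*
 * Hypothetical sketch of byte-based scrub throttling; field and limit
 * names are illustrative, not the exact ZFS logic.
 */
#include <pthread.h>
#include <stdint.h>

typedef struct scrub_state {
	pthread_mutex_t	ss_lock;		/* stands in for spa_scrub_lock */
	pthread_cond_t	ss_cv;			/* stands in for spa_scrub_io_cv */
	uint64_t	ss_inflight_bytes;	/* cf. spa_scrub_inflight */
	uint64_t	ss_max_inflight_bytes;	/* issue-phase ceiling */
} scrub_state_t;

/* Block new scrub reads while too many bytes are already in flight. */
static void
scrub_io_start(scrub_state_t *ss, uint64_t size)
{
	pthread_mutex_lock(&ss->ss_lock);
	while (ss->ss_inflight_bytes + size > ss->ss_max_inflight_bytes)
		pthread_cond_wait(&ss->ss_cv, &ss->ss_lock);
	ss->ss_inflight_bytes += size;
	pthread_mutex_unlock(&ss->ss_lock);
}

/* Called from the I/O completion path. */
static void
scrub_io_done(scrub_state_t *ss, uint64_t size)
{
	pthread_mutex_lock(&ss->ss_lock);
	ss->ss_inflight_bytes -= size;
	pthread_cond_broadcast(&ss->ss_cv);
	pthread_mutex_unlock(&ss->ss_lock);
}

In the patch itself, scrub I/O completion decrements spa_scrub_inflight and
signals spa_scrub_io_cv, the same general shape as scrub_io_done() above.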