author    Alexander Motin <[email protected]>  2023-01-25 14:30:24 -0500
committer GitHub <[email protected]>  2023-01-25 11:30:24 -0800
commit    dc5c8006f684b1df3f2d4b6b8c121447d2db0017 (patch)
tree      e0a48245fc28b5d55d3d5266f56a90e359424bac /include/sys
parent    c85ac731a0ec16e4277857b55ebe123c552365b6 (diff)
Prefetch on deadlists merge

During snapshot deletion ZFS may issue several reads for each deadlist to merge them into the next snapshot's or pool's bpobj. The number of deadlists increases with the number of snapshots. On HDD pools this may take significant time, during which the sync thread is blocked. This patch introduces prescient prefetch of the required blocks for up to 128 deadlists ahead. Tests show a reduction of the time required to delete a dataset with 720 snapshots of a randomly overwritten file on a wide HDD pool from 75-85 to 22-28 seconds.

Reviewed-by: Allan Jude <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored by: iXsystems, Inc.
Issue #14276
Closes #14402
Diffstat (limited to 'include/sys')
-rw-r--r--  include/sys/bpobj.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/sys/bpobj.h b/include/sys/bpobj.h
index 84f0ee76c..f3384f526 100644
--- a/include/sys/bpobj.h
+++ b/include/sys/bpobj.h
@@ -87,6 +87,7 @@ int livelist_bpobj_iterate_from_nofree(bpobj_t *bpo, bpobj_itor_t func,
     void *arg, int64_t start);
 void bpobj_enqueue_subobj(bpobj_t *bpo, uint64_t subobj, dmu_tx_t *tx);
+void bpobj_prefetch_subobj(bpobj_t *bpo, uint64_t subobj);
 void bpobj_enqueue(bpobj_t *bpo, const blkptr_t *bp, boolean_t bp_freed,
     dmu_tx_t *tx);