author     Matthew Ahrens <[email protected]>   2020-09-17 10:55:30 -0700
committer  GitHub <[email protected]>           2020-09-17 10:55:30 -0700
commit     a57f954226ce1111a022d1fa615e555c5e735a35 (patch)
tree       61472808641b479165445e89471bb147ce78b70a /cmd
parent     3c7566cb0dce96e612fd14b69b66726b4522da09 (diff)
zdb leak detection fails with in-progress device removal
When a device removal is in progress, there are 2 locations for the data
that's already been moved: the original location, on the device that's
being removed; and the new location, which is pointed to by the indirect
mapping. When doing leak detection, zdb needs to know about both
locations. To determine what's already been copied, we load the
spacemaps of the removing vdev, omit the blocks that are yet to be
copied, and then use the vdev's remap op to find the new location.
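As a rough illustration (the names below are invented for this sketch and are not ZFS interfaces), the two-location accounting can be pictured like this in C: offsets below the copied point are counted at their remapped location, while everything else is counted where it still sits on the removing vdev.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the vdev's remap op: pretend copied data moved up by 1 MB. */
static uint64_t
toy_remap(uint64_t offset)
{
	return (offset + (1024ULL * 1024));
}

/* Count a block either at its new (remapped) or its original location. */
static void
account_block(uint64_t offset, uint64_t copied_up_to)
{
	if (offset < copied_up_to)
		printf("offset %8llu: counted at remapped offset %llu\n",
		    (unsigned long long)offset,
		    (unsigned long long)toy_remap(offset));
	else
		printf("offset %8llu: counted on the removing vdev\n",
		    (unsigned long long)offset);
}

int
main(void)
{
	const uint64_t copied_up_to = 100 * 1024;	/* 100 KB, as below */

	account_block(64 * 1024, copied_up_to);		/* already copied */
	account_block(128 * 1024, copied_up_to);	/* not yet copied */
	return (0);
}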
The problem is with an optimization to the spacemap-loading code in zdb.
When processing the log spacemaps, we ignore entries that are not
relevant because they are past the point that's been copied. However,
entries which span the point that's been copied (i.e. they are partly
relevant and partly irrelevant) are processed normally. This can lead
to an illegal spacemap operation, for example if offsets up to 100KB
have been copied, and the spacemap log has the following entries:
ALLOC 50KB-150KB (partly relevant)
FREE 50KB-100KB (entirely relevant)
FREE 100KB-150KB (entirely irrelevant - ignored)
ALLOC 50KB-150KB (partly relevant)
Because the entirely irrelevant entry was ignored, its space remains in
the spacemap. When the last entry is processed, we attempt to add it to
the spacemap, but it partially overlaps with the 100KB-150KB space that
was left over.
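The failure can be reproduced in miniature with the following self-contained C sketch (a toy simulation, not zdb code), which replays exactly these four entries against a 1 KB chunk map and skips the entirely-irrelevant FREE the way the old optimization did; the final ALLOC then trips the same kind of no-overlap assertion that range_tree_add() enforces.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define	CHUNKS	200			/* model offsets 0..200 KB as 1 KB chunks */

static bool allocated[CHUNKS];

/* Mimic the "no overlap" invariant that range_tree_add() verifies. */
static void
alloc_add(int start_kb, int end_kb)
{
	for (int i = start_kb; i < end_kb; i++) {
		assert(!allocated[i]);	/* fires on an overlapping ALLOC */
		allocated[i] = true;
	}
}

static void
alloc_free(int start_kb, int end_kb)
{
	for (int i = start_kb; i < end_kb; i++) {
		assert(allocated[i]);
		allocated[i] = false;
	}
}

int
main(void)
{
	const int copied_kb = 100;	/* offsets below 100 KB have been copied */

	alloc_add(50, 150);		/* ALLOC 50KB-150KB (partly relevant) */
	alloc_free(50, 100);		/* FREE 50KB-100KB (entirely relevant) */

	/*
	 * FREE 100KB-150KB starts at the copied point, so the old
	 * optimization skipped it; 100-150 KB stays marked allocated.
	 */
	if (100 < copied_kb)		/* false: entry ignored */
		alloc_free(100, 150);

	alloc_add(50, 150);		/* ALLOC 50KB-150KB: asserts at 100 KB */

	printf("replay completed\n");	/* never reached */
	return (0);
}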
This problem was discovered by ztest/zloop.
One solution would be to also ignore the irrelevant parts of
partially-irrelevant entries (i.e. when processing the ALLOC 50KB-150KB,
to only add 50KB-100KB to the spacemap). However, this commit implements
a simpler solution, which is to remove this optimization entirely, i.e.
to process the entire spacemap log without regard for the point that's
been copied. Once the entire allocatable range tree has been
reconstructed, existing code removes the parts that have not yet been
copied.
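A minimal sketch of the fixed flow, again as a standalone toy rather than the actual zdb code: every log entry is replayed with no filtering, and only afterwards is everything at or past the copied point cleared, corresponding to the trimming zdb already performs against the indirect mapping's maximum offset.

#include <stdbool.h>
#include <stdio.h>

#define	CHUNKS	200			/* 1 KB chunks covering 0..200 KB */

int
main(void)
{
	bool alloc[CHUNKS] = { false };
	const int copied_kb = 100;
	int i, remaining = 0;

	/* Replay the full log, with no entry skipped. */
	for (i = 50; i < 150; i++)	/* ALLOC 50KB-150KB */
		alloc[i] = true;
	for (i = 50; i < 100; i++)	/* FREE 50KB-100KB */
		alloc[i] = false;
	for (i = 100; i < 150; i++)	/* FREE 100KB-150KB: no longer ignored */
		alloc[i] = false;
	for (i = 50; i < 150; i++)	/* ALLOC 50KB-150KB: no overlap now */
		alloc[i] = true;

	/* Existing post-processing: drop the parts not yet copied. */
	for (i = copied_kb; i < CHUNKS; i++)
		alloc[i] = false;

	for (i = 0; i < CHUNKS; i++)
		remaining += alloc[i];
	printf("allocated space below the copied point: %d KB\n", remaining);
	return (0);
}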
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
External-issue: DLPX-71820
Closes #10920
Diffstat (limited to 'cmd')
-rw-r--r--   cmd/zdb/zdb.c   8
1 files changed, 0 insertions, 8 deletions
diff --git a/cmd/zdb/zdb.c b/cmd/zdb/zdb.c
index fcceedfe5..24ce43505 100644
--- a/cmd/zdb/zdb.c
+++ b/cmd/zdb/zdb.c
@@ -5342,11 +5342,6 @@ load_unflushed_svr_segs_cb(spa_t *spa, space_map_entry_t *sme,
 	if (txg < metaslab_unflushed_txg(ms))
 		return (0);
 
-	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
-	ASSERT(vim != NULL);
-	if (offset >= vdev_indirect_mapping_max_offset(vim))
-		return (0);
-
 	if (sme->sme_type == SM_ALLOC)
 		range_tree_add(svr->svr_allocd_segs, offset, size);
 	else
@@ -5409,9 +5404,6 @@ zdb_claim_removing(spa_t *spa, zdb_cb_t *zcb)
 	for (uint64_t msi = 0; msi < vd->vdev_ms_count; msi++) {
 		metaslab_t *msp = vd->vdev_ms[msi];
 
-		if (msp->ms_start >= vdev_indirect_mapping_max_offset(vim))
-			break;
-
 		ASSERT0(range_tree_space(allocs));
 		if (msp->ms_sm != NULL)
 			VERIFY0(space_map_load(msp->ms_sm, allocs, SM_ALLOC));