author:    Matthew Ahrens <[email protected]>  2020-09-17 10:55:30 -0700
committer: Brian Behlendorf <[email protected]>  2020-09-18 12:38:24 -0700
commit:    66ccc9b75f4beb6f5e50d64f0ff42d66f0452b61 (patch)
tree:      45cb58257749bc29656628755eb1c5b2cdc81a05 /cmd
parent:    7b86ad215ec11d300df59a380ad0f7fb21cc2357 (diff)
zdb leak detection fails with in-progress device removal
When a device removal is in progress, there are 2 locations for the data
that's already been moved: the original location, on the device that's
being removed; and the new location, which is pointed to by the indirect
mapping. When doing leak detection, zdb needs to know about both
locations. To determine what's already been copied, we load the
spacemaps of the removing vdev, omit the blocks that are yet to be
copied, and then use the vdev's remap op to find the new location.
The problem is with an optimization to the spacemap-loading code in zdb.
When processing the log spacemaps, we ignore entries that are not
relevant because they are past the point that's been copied. However,
entries which span the point that's been copied (i.e. they are partly
relevant and partly irrelevant) are processed normally. This can lead
to an illegal spacemap operation, for example if offsets up to 100KB
have been copied, and the spacemap log has the following entries:
ALLOC 50KB-150KB (partly relevant)
FREE 50KB-100KB (entirely relevant)
FREE 100KB-150KB (entirely irrelevant - ignored)
ALLOC 50KB-150KB (partly relevant)
Because the entirely irrelevant entry was ignored, its space remains in
the spacemap. When the last entry is processed, we attempt to add it to
the spacemap, but it partially overlaps with the 100-150KB entry that
was left over.
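The failure mode above can be sketched with a toy model. This is a simplified, hypothetical simulation (the RangeTree class and replay() driver are invented for illustration); only the entry sequence and the 100KB copied point come from this message. Adding an overlapping segment stands in for the ASSERT that range_tree_add() would trip in ZFS:

```python
COPIED_MAX = 100  # offsets up to 100KB have been copied

class RangeTree:
    """Toy range tree: a list of disjoint [start, end) segments."""
    def __init__(self):
        self.segs = []

    def add(self, start, end):
        # Mirrors the illegal-operation assertion: an ALLOC must not
        # overlap space already present in the tree.
        for s, e in self.segs:
            if start < e and s < end:
                raise AssertionError(
                    f"illegal add: [{start},{end}) overlaps [{s},{e})")
        self.segs.append((start, end))

    def remove(self, start, end):
        # A FREE trims overlapping segments, keeping the leftover parts.
        new = []
        for s, e in self.segs:
            if start < e and s < end:
                if s < start:
                    new.append((s, start))
                if end < e:
                    new.append((end, e))
            else:
                new.append((s, e))
        self.segs = new

def replay(entries, skip_irrelevant):
    """Replay the spacemap log, optionally with the buggy optimization:
    skip entries entirely past the copied point, but process
    partially-relevant entries whole."""
    rt = RangeTree()
    for op, start, end in entries:
        if skip_irrelevant and start >= COPIED_MAX:
            continue  # "entirely irrelevant" - ignored
        if op == "ALLOC":
            rt.add(start, end)
        else:
            rt.remove(start, end)
    return rt

log = [
    ("ALLOC", 50, 150),   # partly relevant
    ("FREE",  50, 100),   # entirely relevant
    ("FREE", 100, 150),   # entirely irrelevant - skipped when optimizing
    ("ALLOC", 50, 150),   # partly relevant
]
```

With skip_irrelevant=True, the skipped FREE leaves [100,150) in the tree, so the final ALLOC overlaps it and the assertion fires; with skip_irrelevant=False (the fix in this commit: replay everything), the replay completes cleanly.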
This problem was discovered by ztest/zloop.
One solution would be to also ignore the irrelevant parts of
partially-irrelevant entries (i.e. when processing the ALLOC 50-150, to
only add 50-100 to the spacemap). However, this commit implements a
simpler solution: remove the optimization entirely and process the
entire spacemap log, without regard for the point that's been copied.
After reconstructing the entire allocatable range tree, there's already
code to remove the parts that have not yet been copied.
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
External-issue: DLPX-71820
Closes #10920
Diffstat (limited to 'cmd')
 cmd/zdb/zdb.c | 8 --------
 1 file changed, 0 insertions(+), 8 deletions(-)
diff --git a/cmd/zdb/zdb.c b/cmd/zdb/zdb.c
index e7211711a..c070a1f8c 100644
--- a/cmd/zdb/zdb.c
+++ b/cmd/zdb/zdb.c
@@ -5340,11 +5340,6 @@ load_unflushed_svr_segs_cb(spa_t *spa, space_map_entry_t *sme,
 	if (txg < metaslab_unflushed_txg(ms))
 		return (0);
 
-	vdev_indirect_mapping_t *vim = vd->vdev_indirect_mapping;
-	ASSERT(vim != NULL);
-	if (offset >= vdev_indirect_mapping_max_offset(vim))
-		return (0);
-
 	if (sme->sme_type == SM_ALLOC)
 		range_tree_add(svr->svr_allocd_segs, offset, size);
 	else
@@ -5407,9 +5402,6 @@ zdb_claim_removing(spa_t *spa, zdb_cb_t *zcb)
 	for (uint64_t msi = 0; msi < vd->vdev_ms_count; msi++) {
 		metaslab_t *msp = vd->vdev_ms[msi];
 
-		if (msp->ms_start >= vdev_indirect_mapping_max_offset(vim))
-			break;
-
 		ASSERT0(range_tree_space(allocs));
 		if (msp->ms_sm != NULL)
 			VERIFY0(space_map_load(msp->ms_sm, allocs, SM_ALLOC));