author	Paul Dagnelie <[email protected]>	2019-08-30 09:28:31 -0700
committer	Brian Behlendorf <[email protected]>	2019-08-30 09:28:31 -0700
commit	475aa97cab771b3b2b9ddab03f5c14a1d4e985da (patch)
tree	5c98356645033b616a57797736415f93fcc900ab /module/zfs/metaslab.c
parent	e2fcfa70e36a9f7c059ec64d787f37c6bd9ae48c (diff)
Prevent metaslab_sync panic due to spa_final_dirty_txg
If a pool enables the SPACEMAP_HISTOGRAM feature shortly before being exported, we can enter a situation that causes a kernel panic. Any metaslabs that are loaded during the final dirty txg and haven't already been condensed will cause metaslab_sync to proceed after the final dirty txg so that the condense can be performed, which there are assertions to prevent.

Because of the nature of this issue, there are a number of ways we can enter this state. Rather than try to prevent each of them one by one, potentially missing some edge cases, we instead cut it off at the point of intersection: by preventing metaslab_sync from proceeding when it would do so only to perform a condense and we are already past the final dirty txg, we preserve the utility of the existing asserts while preventing this particular issue.

Reviewed-by: Matt Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes #9185
Closes #9186
Closes #9231
Closes #9253
Diffstat (limited to 'module/zfs/metaslab.c')
 module/zfs/metaslab.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/module/zfs/metaslab.c b/module/zfs/metaslab.c
index 00af4a21b..11b9ba8e9 100644
--- a/module/zfs/metaslab.c
+++ b/module/zfs/metaslab.c
@@ -3553,12 +3553,19 @@ metaslab_sync(metaslab_t *msp, uint64_t txg)
/*
* Normally, we don't want to process a metaslab if there are no
* allocations or frees to perform. However, if the metaslab is being
- * forced to condense and it's loaded, we need to let it through.
+ * forced to condense, it's loaded and we're not beyond the final
+ * dirty txg, we need to let it through. Not condensing beyond the
+ * final dirty txg prevents an issue where metaslabs that need to be
+ * condensed but were loaded for other reasons could cause a panic
+ * here. By only checking the txg in that branch of the conditional,
+ * we preserve the utility of the VERIFY statements in all other
+ * cases.
*/
if (range_tree_is_empty(alloctree) &&
range_tree_is_empty(msp->ms_freeing) &&
range_tree_is_empty(msp->ms_checkpointing) &&
- !(msp->ms_loaded && msp->ms_condense_wanted))
+ !(msp->ms_loaded && msp->ms_condense_wanted &&
+ txg <= spa_final_dirty_txg(spa)))
return;