| author | Paul Dagnelie <pcd@delphix.com> | 2019-06-06 19:10:43 -0700 |
|---|---|---|
| committer | Brian Behlendorf <behlendorf1@llnl.gov> | 2019-06-06 19:10:43 -0700 |
| commit | 893a6d62c1895f3e3eeb660b048236571995a564 (patch) | |
| tree | 051154a79d6a6cc07ba4e93ed60a1b6a7f5b0763 /include/sys/metaslab.h | |
| parent | 876d76be3455ba6aa8d1567203847d8c012d05c9 (diff) | |
Allow metaslab to be unloaded even when not freed from
On large systems, the memory used by loaded metaslabs can become
a concern. While range trees are a fairly efficient data structure,
on heavily fragmented pools they can still consume a significant
amount of memory. This problem is amplified when we fail to unload
metaslabs that we aren't using. Currently, we only unload a metaslab
during metaslab_sync_done; in order for that function to be called
on a given metaslab in a given txg, we have to have dirtied that
metaslab in that txg. If the dirtying was the result of an allocation,
we wouldn't be unloading it (since fewer than 8 txgs would have passed
since it was last selected), so in effect we only unload a metaslab in
txgs where space is being freed from it.
We move the unload logic from sync_done to a new function, and
call that function on all metaslabs in a given vdev during
vdev_sync_done().
Reviewed-by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Dagnelie <pcd@delphix.com>
Closes #8837
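For illustration only, here is a minimal sketch of the unload heuristic the message describes: a metaslab that is still loaded, but has not been selected for allocation within the last metaslab_unload_delay (8 by default) txgs, gets unloaded. The field names ms_loaded and ms_selected_txg and the exact guard are assumptions drawn from the description above, not a copy of the actual patch.

```c
/*
 * Hypothetical sketch (not the actual patch): unload a metaslab whose
 * range trees are loaded but which has not been selected for allocation
 * in the last metaslab_unload_delay txgs.
 */
void
metaslab_potentially_unload(metaslab_t *msp, uint64_t txg)
{
	/*
	 * Assumed fields: ms_loaded marks a loaded metaslab and
	 * ms_selected_txg records the txg in which it was last selected
	 * for allocation; metaslab_unload_delay is the 8-txg delay
	 * mentioned above.
	 */
	if (msp->ms_loaded &&
	    msp->ms_selected_txg + metaslab_unload_delay < txg)
		metaslab_unload(msp);
}
```

The point of moving this logic out of metaslab_sync_done() is that the check can now run for every metaslab of a vdev each txg, instead of only for metaslabs that happened to be dirtied by frees.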
Diffstat (limited to 'include/sys/metaslab.h')
-rw-r--r-- | include/sys/metaslab.h | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/include/sys/metaslab.h b/include/sys/metaslab.h
index 2790d06c7..330902529 100644
--- a/include/sys/metaslab.h
+++ b/include/sys/metaslab.h
@@ -50,6 +50,7 @@ int metaslab_init(metaslab_group_t *, uint64_t, uint64_t, uint64_t,
 void metaslab_fini(metaslab_t *);
 
 int metaslab_load(metaslab_t *);
+void metaslab_potentially_unload(metaslab_t *, uint64_t);
 void metaslab_unload(metaslab_t *);
 
 uint64_t metaslab_allocated_space(metaslab_t *);
```
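The header change only adds the prototype. Per the commit message, the new function is called from vdev_sync_done() for every metaslab of the vdev; a hedged sketch of such a caller, assuming the vdev's vd->vdev_ms array and vd->vdev_ms_count fields and a hypothetical helper name, might look like this:

```c
/*
 * Hypothetical caller sketch: at the end of syncing a vdev, give every
 * metaslab a chance to unload, not just the ones dirtied in this txg.
 * The helper name and its exact placement inside vdev_sync_done() are
 * assumptions.
 */
static void
vdev_unload_idle_metaslabs(vdev_t *vd, uint64_t txg)
{
	for (uint64_t m = 0; m < vd->vdev_ms_count; m++) {
		metaslab_t *msp = vd->vdev_ms[m];

		if (msp != NULL)
			metaslab_potentially_unload(msp, txg);
	}
}
```

Because this walks the full metaslab array rather than the txg's dirty list, idle metaslabs become candidates for unloading even in txgs where nothing was freed from them.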