author | Alexander Motin <[email protected]> | 2021-07-26 19:30:20 -0400
---|---|---
committer | GitHub <[email protected]> | 2021-07-26 16:30:20 -0700
commit | dd3bda39cf7a9716c1d45dcaba67da7f64116d63 (patch) |
tree | 77ac9903d4cd874e32096d7d618ecf56495c1135 /module/zfs |
parent | bdd2bfd02c70f42ba99d5d621998cfe6a959cb6b (diff) |
Add comment on metaslab_class_throttle_reserve() locking
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Issue #12314
Closes #12419
Diffstat (limited to 'module/zfs')
-rw-r--r-- | module/zfs/metaslab.c | 7 |
1 file changed, 7 insertions, 0 deletions
```diff
diff --git a/module/zfs/metaslab.c b/module/zfs/metaslab.c
index 93d409ceb..df0d83327 100644
--- a/module/zfs/metaslab.c
+++ b/module/zfs/metaslab.c
@@ -5617,6 +5617,13 @@ metaslab_class_throttle_reserve(metaslab_class_t *mc, int slots, int allocator,
 	if (GANG_ALLOCATION(flags) || (flags & METASLAB_MUST_RESERVE) ||
 	    zfs_refcount_count(&mca->mca_alloc_slots) + slots <= max) {
 		/*
+		 * The potential race between _count() and _add() is covered
+		 * by the allocator lock in most cases, or irrelevant due to
+		 * GANG_ALLOCATION() or METASLAB_MUST_RESERVE set in others.
+		 * But even if we assume some other non-existing scenario, the
+		 * worst that can happen is few more I/Os get to allocation
+		 * earlier, that is not a problem.
+		 *
 		 * We reserve the slots individually so that we can unreserve
 		 * them individually when an I/O completes.
 		 */
```
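To make the benign-race argument in the new comment concrete, here is a minimal standalone C sketch of the same check-then-add pattern. This is not OpenZFS code: `alloc_lock`, `alloc_slots`, `try_reserve`, and `MAX_SLOTS` are hypothetical stand-ins for the per-allocator lock, `mca_alloc_slots`, `metaslab_class_throttle_reserve()`, and the slot limit. It shows why reading the counter and then incrementing it is only atomic as a pair when the caller holds the lock, and why an unlocked racer merely overshoots the limit briefly.

```c
/*
 * Minimal sketch (not OpenZFS code) of a check-then-add slot reservation.
 * Build with: cc -std=c11 -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define	MAX_SLOTS	64

/* Hypothetical per-allocator lock, analogous to the allocator lock. */
static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;
/* Hypothetical slot counter, standing in for mca_alloc_slots. */
static atomic_int alloc_slots;

/*
 * Reserve "slots" if the allocator is under its limit.  The load of
 * alloc_slots and the later adds are separate operations; they behave
 * as one atomic check-then-add only because callers hold alloc_lock.
 * A racing unlocked caller could observe a stale count, but the worst
 * outcome is a brief overshoot of MAX_SLOTS: a few extra I/Os reach
 * allocation early, which is harmless for a throttle.
 */
static bool
try_reserve(int slots, bool must_reserve)
{
	if (must_reserve ||
	    atomic_load(&alloc_slots) + slots <= MAX_SLOTS) {
		/*
		 * Reserve the slots individually so that completion
		 * can unreserve them individually, mirroring the
		 * pattern described in the commit's comment.
		 */
		for (int i = 0; i < slots; i++)
			atomic_fetch_add(&alloc_slots, 1);
		return (true);
	}
	return (false);
}

int
main(void)
{
	/* The usual path: check-then-add under the allocator lock. */
	pthread_mutex_lock(&alloc_lock);
	bool ok = try_reserve(4, false);
	pthread_mutex_unlock(&alloc_lock);

	printf("reserved: %s, slots in use: %d\n",
	    ok ? "yes" : "no", atomic_load(&alloc_slots));
	return (0);
}
```

In the real function the same shape appears as `zfs_refcount_count()` followed by per-slot `zfs_refcount_add()` calls on `mca_alloc_slots`; the commit's comment records that the allocator lock (or the `GANG_ALLOCATION()` / `METASLAB_MUST_RESERVE` cases, where the reservation is unconditional) is what makes that two-step sequence safe.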