author | Serapheim Dimitropoulos <[email protected]> | 2022-12-09 10:48:33 -0800
---|---|---
committer | GitHub <[email protected]> | 2022-12-09 10:48:33 -0800
commit | 7bf4c97a3696663959d1891c5890b2667761dd58 (patch) |
tree | 8ef2996a93a134a1bed2b2f57467ec7dc2e5c9b2 /module/zfs/metaslab.c |
parent | 5f73bbba436748654bf4fd6596c9cdba1a9914f3 (diff) |
Bypass metaslab throttle for removal allocations
Context:
We recently had a scenario where a customer with 2x10TB disks at 95+%
fragmentation and capacity wanted to migrate their disks to a 2x20TB
setup. So they added the 2 new disks and submitted the removal of the
first 10TB disk. The removal took far longer than expected (on the
order of one to two weeks vs a couple of days) and once it was done it
generated a huge indirect mapping table in RAM (~16GB vs expected ~1GB).
Root-Cause:
The removal code calls `metaslab_alloc_dva()` to allocate a new block
for each evacuating block on the removing device, and it tries to batch
them into 16MB segments. If it can't find such a segment it tries for
8MB, 4MB, all the way down to 512 bytes.
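
To make the batching concrete, here is a minimal standalone sketch of
that descending-size retry. `try_alloc()` is a hypothetical stand-in for
the `metaslab_alloc_dva()` call made by the removal thread, not the
actual OpenZFS code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define	MAX_SEG	(16ULL << 20)	/* 16MB batching target */
#define	MIN_SEG	512ULL		/* smallest size attempted */

/*
 * Hypothetical stand-in for metaslab_alloc_dva(): pretend that only
 * small chunks fit, as on a heavily fragmented vdev.
 */
static bool
try_alloc(uint64_t size)
{
	return (size <= 4096);
}

/*
 * Halve the requested segment size until the allocation succeeds,
 * mirroring the 16MB -> 8MB -> ... -> 512B retry described above.
 */
static uint64_t
alloc_removal_segment(void)
{
	for (uint64_t size = MAX_SEG; size >= MIN_SEG; size >>= 1) {
		if (try_alloc(size))
			return (size);
	}
	return (0);
}

int
main(void)
{
	/*
	 * On a fragmented source vdev this lands in the low-KB range,
	 * producing one indirect-mapping entry per tiny segment.
	 */
	printf("allocated %llu bytes\n",
	    (unsigned long long)alloc_removal_segment());
	return (0);
}
```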
In our scenario, `metaslab_alloc_dva()` on the removal thread would
initially pick the new devices but wouldn't allocate from them because
of throttling on their metaslab allocation queue depth (see
`metaslab_group_allocatable()`), as these devices are new and favored
for most types of allocations because of their free space. The removal
thread would then look at the old fragmented disk for allocations,
fail to find any contiguous space, and retry with smaller and smaller
allocation sizes until it got down to the low-KB range. This generated
a lot of small mappings, blowing up the size of the indirect table. It
also wasted a lot of CPU while the removal was active, slowing
everything down.
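
The throttle that starved the removal thread boils down to a per-group
queue-depth comparison. The sketch below is an assumed, simplified
shape of that check (the real `metaslab_group_allocatable()` consults
per-allocator queue depths and vdev state); the point is that free
space never enters the decision, which is why the brand-new, nearly
empty disks could still be rejected:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the relevant metaslab group fields. */
typedef struct {
	uint64_t	mg_queue_depth;	/* allocations in flight */
	uint64_t	mg_max_depth;	/* allowed queue depth */
} mg_sketch_t;

/*
 * Assumed shape of the throttle decision: a group whose allocation
 * queue is "full" is skipped, regardless of how much free space it
 * has. The new disks attract allocations, fill their queues, and get
 * rejected here, pushing the removal thread to the fragmented disk.
 */
static bool
group_allocatable_sketch(const mg_sketch_t *mg)
{
	return (mg->mg_queue_depth <= mg->mg_max_depth);
}
```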
This patch:
Make all allocations coming from the device removal thread bypass the
throttle checks. These allocations are not even counted in the metaslab
allocation queues anyway, so why check them?
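
In code terms, an allocation carrying `METASLAB_DONT_THROTTLE` (as the
removal path's allocations do) now short-circuits that check. A
simplified before/after sketch, with hypothetical names and values
everywhere except the real `METASLAB_DONT_THROTTLE` flag; the actual
change is in the diff below:

```c
#include <stdbool.h>
#include <stdint.h>

#define	METASLAB_DONT_THROTTLE	0x8	/* illustrative value only */

typedef struct {
	uint64_t	mg_queue_depth;
	uint64_t	mg_max_depth;
} mg_sketch_t;

static bool
group_allocatable_sketch(const mg_sketch_t *mg, int flags)
{
	/*
	 * New behavior: unthrottled allocations (e.g., device removal,
	 * which is never accounted in the queues) bypass the depth
	 * check entirely.
	 */
	if (flags & METASLAB_DONT_THROTTLE)
		return (true);

	/* Old behavior applied this to everything, removal included. */
	return (mg->mg_queue_depth <= mg->mg_max_depth);
}
```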
Side-Fix:
Allocations with METASLAB_DONT_THROTTLE in their flags would not be
accounted in the throttle queues, but they'd still abide by the
throttling rules, which seems wrong. This patch fixes this by checking
for that flag in `metaslab_group_allocatable()`. I did a quick check of
where else this flag is used, and it doesn't seem like this change
would cause issues.
Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Mark Maybee <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Serapheim Dimitropoulos <[email protected]>
Closes #14159
Diffstat (limited to 'module/zfs/metaslab.c')
-rw-r--r-- | module/zfs/metaslab.c | 13 |
1 file changed, 11 insertions(+), 2 deletions(-)
```diff
diff --git a/module/zfs/metaslab.c b/module/zfs/metaslab.c
index c624833bc..24d52a749 100644
--- a/module/zfs/metaslab.c
+++ b/module/zfs/metaslab.c
@@ -1223,7 +1223,7 @@ metaslab_group_fragmentation(metaslab_group_t *mg)
  */
 static boolean_t
 metaslab_group_allocatable(metaslab_group_t *mg, metaslab_group_t *rotor,
-    uint64_t psize, int allocator, int d)
+    int flags, uint64_t psize, int allocator, int d)
 {
 	spa_t *spa = mg->mg_vd->vdev_spa;
 	metaslab_class_t *mc = mg->mg_class;
@@ -1268,6 +1268,15 @@ metaslab_group_allocatable(metaslab_group_t *mg, metaslab_group_t *rotor,
 		return (B_FALSE);
 
 	/*
+	 * Some allocations (e.g., those coming from device removal
+	 * where the allocations are not even counted in the
+	 * metaslab allocation queues) are allowed to bypass
+	 * the throttle.
+	 */
+	if (flags & METASLAB_DONT_THROTTLE)
+		return (B_TRUE);
+
+	/*
 	 * Relax allocation throttling for ditto blocks. Due to
 	 * random imbalances in allocation it tends to push copies
 	 * to one vdev, that looks a bit better at the moment.
@@ -5188,7 +5197,7 @@ top:
 	 */
 	if (allocatable && !GANG_ALLOCATION(flags) && !try_hard) {
 		allocatable = metaslab_group_allocatable(mg, rotor,
-		    psize, allocator, d);
+		    flags, psize, allocator, d);
 	}
 
 	if (!allocatable) {
```