author    LOLi <[email protected]>                2017-04-11 00:28:21 +0200
committer Brian Behlendorf <[email protected]> 2017-04-10 15:28:21 -0700
commit    047187c1bd4a893e7a89e8795fa8f4ecc3eb0732
tree      6baa77392873595ceca7cc3f27188ddcad7fd79b /etc/systemd
parent    8542ef852aabf63e8a951aa2a8dfd612b0fea597
Fix size inflation in spa_get_worst_case_asize()
When we try to assign a new transaction to a TXG we must know beforehand
whether there is sufficient free space on disk, so that dmu_tx_assign()
can decide whether to reject the TX with ENOSPC.
We rely on spa_get_worst_case_asize() to inflate the size of our
logical writes by a factor of spa_asize_inflation which is
calculated as:
(VDEV_RAIDZ_MAXPARITY + 1) * SPA_DVAS_PER_BP * 2 == 24
The problem with the current implementation is that we don't take
into account what happens with very small writes on VDEVs with large
physical block sizes.
Consider the case of writes to a dataset with recordsize=512,
copies=3 on a VDEV with ashift=13 (usually an SSD with 8K block size):
every logical I/O will end up allocating 3 * 8K = 24K on disk, that is,
512 bytes multiplied by 48, which is double the size we account for.
If we allow these kinds of writes to be assigned a TX it is possible,
when the pool is almost full, to trigger an allocation failure
(ENOSPC) in the ZIO pipeline, which will in turn result in the whole
pool being suspended.
The bug is fixed by making spa_get_worst_case_asize() use the MAX()
of the logical I/O size from zfs_write() and the maximum physical
block size used among our VDEVs.
Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #5941