author     Matthew Ahrens <[email protected]>      2019-06-10 11:48:42 -0700
committer  Brian Behlendorf <[email protected]>  2019-06-10 11:48:42 -0700
commit     b8738257c2607c73c731ce8e0fd73282b266d6ef
tree       d73e90809b9f413b8894d8ee7fc6ad8a11bff7fe /module/zfs/zvol.c
parent     5a902f5aaa1fbf6f7e459ec29f6d1d988ec78b0a
make zil max block size tunable
We've observed that on some highly fragmented pools, most metaslab
allocations are small (~2-8KB), but there are some large (128KB)
allocations. The large allocations are for ZIL blocks. If there is a
lot of fragmentation, the large allocations can be hard to satisfy.
The most common impact of this is that we need to check (and thus load)
lots of metaslabs from the ZIL allocation code path, causing sync writes
to wait for metaslabs to load, which can take a second or more. In the
worst case, we may not be able to satisfy the allocation, in which case
the ZIL will resort to txg_wait_synced() to ensure the change is on
disk.
As a workaround, this change adds a tunable that can reduce the size of
ZIL blocks.
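The idea behind the tunable can be sketched as follows. This is an illustrative reduction, not the actual patch: the names `zil_maxblocksize` and `zil_max_copied_data_sketch`, and the per-record overhead constant, are assumptions for the example; the real implementation lives in module/zfs/zil.c, which is not part of this hunk.

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the tunable this commit introduces.
 * ZIL_MAX_BLKSZ is the historical 128K upper bound; lowering the
 * tunable makes the ZIL request smaller blocks, which are easier to
 * allocate on a fragmented pool. Names and the overhead accounting
 * are illustrative only.
 */
#define ZIL_MAX_BLKSZ		(128 * 1024)
#define ZIL_RECORD_OVERHEAD	192	/* assumed per-record header cost */

static uint64_t zil_maxblocksize = ZIL_MAX_BLKSZ;	/* the tunable */

/*
 * Largest payload a WR_COPIED log record may embed directly in a ZIL
 * block: the (possibly reduced) maximum block size, clamped to the
 * hard cap, minus the record overhead.
 */
static uint64_t
zil_max_copied_data_sketch(void)
{
	uint64_t cap = zil_maxblocksize;

	if (cap > ZIL_MAX_BLKSZ)
		cap = ZIL_MAX_BLKSZ;
	return (cap - ZIL_RECORD_OVERHEAD);
}
```

With the default setting the limit matches the old 128K behavior; an administrator on a fragmented pool could lower the tunable so that sync writes fall back to smaller, easier-to-satisfy allocations.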
External-issue: DLPX-61719
Reviewed-by: George Wilson <[email protected]>
Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #8865
Diffstat (limited to 'module/zfs/zvol.c')
 module/zfs/zvol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/module/zfs/zvol.c b/module/zfs/zvol.c
index c29f65f67..9aeadb22b 100644
--- a/module/zfs/zvol.c
+++ b/module/zfs/zvol.c
@@ -684,7 +684,7 @@ zvol_log_write(zvol_state_t *zv, dmu_tx_t *tx, uint64_t offset,
 	itx_wr_state_t wr_state = write_state;
 	ssize_t len = size;
 
-	if (wr_state == WR_COPIED && size > ZIL_MAX_COPIED_DATA)
+	if (wr_state == WR_COPIED && size > zil_max_copied_data(zilog))
 		wr_state = WR_NEED_COPY;
 	else if (wr_state == WR_INDIRECT)
 		len = MIN(blocksize - P2PHASE(offset, blocksize), size);
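The one-line change above swaps a compile-time constant for a per-zilog limit. The decision it feeds can be sketched in isolation; `choose_wr_state` is a hypothetical helper for this example, not a function in the ZFS source:

```c
#include <stddef.h>

/*
 * Illustrative reduction of the patched logic in zvol_log_write():
 * a write small enough to fit in a ZIL block is logged WR_COPIED
 * (data embedded in the log record); once it exceeds the now-tunable
 * copied-data limit it falls back to WR_NEED_COPY, which defers the
 * copy to commit time. This helper and enum are a self-contained
 * sketch, not the actual kernel types.
 */
typedef enum { WR_INDIRECT, WR_COPIED, WR_NEED_COPY } itx_wr_state_t;

static itx_wr_state_t
choose_wr_state(itx_wr_state_t wr_state, size_t size, size_t max_copied)
{
	if (wr_state == WR_COPIED && size > max_copied)
		return (WR_NEED_COPY);
	return (wr_state);
}
```

Because the limit is now queried per zilog rather than fixed at build time, lowering the tunable immediately shrinks the threshold at which writes stop being embedded in ZIL blocks.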