author    Giuseppe Di Natale <[email protected]>    2017-06-09 09:15:37 -0700
committer Brian Behlendorf <[email protected]>      2017-06-09 09:15:37 -0700
commit    1b7c1e5ce90ae27d9bb1f6f3616bf079c168005c
tree      c3f9172ab7cd4039ec660f1e34700eae745e6d6a /module/zfs/zio.c
parent    82644107c4e7f3e899ebde18f65cbac7c604583c
OpenZFS 7578 - Fix/improve some aspects of ZIL writing
- After some ZIL changes six years ago, zil_slog_limit was partially broken
because zl_itx_list_sz was no longer updated when async itxs were upgraded
to sync. Due to other changes from around the same time, zl_itx_list_sz is
not actually required to implement the functionality, so this patch removes
the broken code and the now-unneeded variables.
- The original idea of zil_slog_limit was to reduce the chance of a single
heavy logger abusing the SLOG, and thereby increasing latency for other,
more latency-critical loggers, by pushing the heavy log out into the main
pool instead of the SLOG. Besides a huge latency increase for heavy
writers, this implementation caused all data to be written twice, since the
log records were explicitly prepared for the SLOG. Now that we have an I/O
scheduler, it is much more efficient to reduce the priority of heavy-logger
SLOG writes from ZIO_PRIORITY_SYNC_WRITE to ZIO_PRIORITY_ASYNC_WRITE while
still leaving them on the SLOG (see the first sketch after this list).
- The existing ZIL implementation had a space-efficiency problem when it
had to write large chunks of data into log blocks of limited size. In some
cases efficiency dropped to as low as 50%. For a ZIL stored on spinning
rust, that also cut log write speed in half, since the head had to
uselessly fly over allocated but unwritten areas. This change improves the
situation by offloading the problematic operations from z*_log_write() to
zil_lwb_commit(), which knows the real state of log block allocation and
can split large requests into pieces much more efficiently (see the second
sketch after this list). As a side effect, it also removes one of the two
data copy operations done by the ZIL code in the WR_COPIED case.
- While there, untangle and unify the code of the z*_log_write()
functions. Also, zfs_log_write(), like zvol_log_write(), can now handle
writes crossing a block boundary (see the third sketch after this list),
which may further improve efficiency if the ZPL is made to issue such
writes.
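
Below is a minimal, self-contained C sketch of the new priority policy.
The enum and the threshold are illustrative stand-ins (the real values
live in zio_priority_t, and the real tunable may not carry the
zil_slog_limit name used here); this is not the patch's literal code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative stand-ins for the real zio_priority_t values. */
    typedef enum {
            ZIO_PRIORITY_SYNC_WRITE,
            ZIO_PRIORITY_ASYNC_WRITE
    } zio_priority_t;

    /* Illustrative threshold, standing in for the SLOG-abuse tunable. */
    static uint64_t zil_slog_limit = 768 * 1024;

    /*
     * Writes not on the SLOG, or under the threshold, keep sync
     * priority; a heavy logger on the SLOG is merely demoted to async
     * priority instead of being pushed out to the main pool.
     */
    static zio_priority_t
    zil_write_priority(bool on_slog, uint64_t cur_used)
    {
            if (on_slog && cur_used > zil_slog_limit)
                    return (ZIO_PRIORITY_ASYNC_WRITE);
            return (ZIO_PRIORITY_SYNC_WRITE);
    }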
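
The splitting idea in zil_lwb_commit() can be sketched as follows. The
log_block_t type and next_log_block() helper are hypothetical stand-ins
for the real lwb_t machinery; the point is only that the splitter sees
the actual free space of each log block and fills it completely.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical, simplified view of one log block (really lwb_t). */
    typedef struct {
            uint8_t *buf;           /* block buffer */
            size_t size;            /* usable bytes in the block */
            size_t used;            /* bytes already written */
    } log_block_t;

    /* Hypothetical: allocate and return the next empty log block. */
    extern log_block_t *next_log_block(void);

    /*
     * Copy one large record into as many log blocks as needed,
     * filling each block completely before moving to the next, which
     * avoids the ~50% worst-case space efficiency described above.
     */
    static void
    commit_record(log_block_t *lb, const uint8_t *data, size_t len)
    {
            while (len > 0) {
                    size_t room = lb->size - lb->used;

                    if (room == 0) {        /* block full, grab a new one */
                            lb = next_log_block();
                            continue;
                    }
                    size_t chunk = (len < room) ? len : room;
                    memcpy(lb->buf + lb->used, data, chunk);
                    lb->used += chunk;
                    data += chunk;
                    len -= chunk;
            }
    }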
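
Finally, a hedged sketch of how a z*_log_write()-style loop can chop a
write that crosses block boundaries into per-block records;
log_one_record() is a hypothetical stand-in for the real itx creation.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-in for building and assigning one itx. */
    extern void log_one_record(uint64_t off, size_t len);

    /* Split a write at block boundaries, one record per block. */
    static void
    log_write(uint64_t off, size_t resid, size_t blocksize)
    {
            while (resid > 0) {
                    size_t n = blocksize - (size_t)(off % blocksize);

                    if (n > resid)
                            n = resid;
                    log_one_record(off, n);
                    off += n;
                    resid -= n;
            }
    }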
Sponsored by: iXsystems, Inc.
Authored by: Alexander Motin <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: Andriy Gapon <[email protected]>
Reviewed by: Steven Hartland <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: Richard Elling <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Richard Yao <[email protected]>
Ported-by: Giuseppe Di Natale <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/7578
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/aeb13ac
Closes #6191
Diffstat (limited to 'module/zfs/zio.c')
-rw-r--r--    module/zfs/zio.c    17
1 file changed, 8 insertions, 9 deletions
diff --git a/module/zfs/zio.c b/module/zfs/zio.c
index 61eb575ef..acfc49eb5 100644
--- a/module/zfs/zio.c
+++ b/module/zfs/zio.c
@@ -3098,7 +3098,7 @@ zio_dva_unallocate(zio_t *zio, zio_gang_node_t *gn, blkptr_t *bp)
  */
 int
 zio_alloc_zil(spa_t *spa, uint64_t txg, blkptr_t *new_bp, uint64_t size,
-    boolean_t use_slog)
+    boolean_t *slog)
 {
 	int error = 1;
 	zio_alloc_list_t io_alloc_list;
@@ -3106,17 +3106,16 @@ zio_alloc_zil(spa_t *spa, uint64_t txg, blkptr_t *new_bp, uint64_t size,
 	ASSERT(txg > spa_syncing_txg(spa));
 
 	metaslab_trace_init(&io_alloc_list);
-
-	if (use_slog) {
-		error = metaslab_alloc(spa, spa_log_class(spa), size,
-		    new_bp, 1, txg, NULL, METASLAB_FASTWRITE,
-		    &io_alloc_list, NULL);
-	}
-
-	if (error) {
+	error = metaslab_alloc(spa, spa_log_class(spa), size, new_bp, 1,
+	    txg, NULL, METASLAB_FASTWRITE, &io_alloc_list, NULL);
+	if (error == 0) {
+		*slog = TRUE;
+	} else {
 		error = metaslab_alloc(spa, spa_normal_class(spa), size,
 		    new_bp, 1, txg, NULL, METASLAB_FASTWRITE,
 		    &io_alloc_list, NULL);
+		if (error == 0)
+			*slog = FALSE;
 	}
 	metaslab_trace_fini(&io_alloc_list);
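
For context, a hypothetical caller of the changed function: zio_alloc_zil()
now always tries the log class first, and the new boolean_t *slog
out-parameter reports where the block actually landed (in the real patch
this is recorded by the ZIL and later drives the write-priority choice).
This fragment is illustrative only, not compilable in isolation.

    boolean_t slog;

    error = zio_alloc_zil(spa, txg, &new_bp, size, &slog);
    if (error == 0 && !slog) {
            /* Allocation fell back to the normal (main pool) class. */
    }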