| author | Etienne Dechamps <[email protected]> | 2012-06-28 12:30:07 +0200 |
|---|---|---|
| committer | Brian Behlendorf <[email protected]> | 2012-10-17 08:56:46 -0700 |
| commit | 5d7a86d114c2706a8d14d94b71f81ad5cdf066c5 (patch) | |
| tree | 860efd0241842d2b3749c0d2666d055ef152c7cf /module | |
| parent | 920dd524fb2997225d4b1ac180bcbc14b045fda6 (diff) | |
Use the slog even with logbias=throughput.
In the current code, logbias=throughput implies the following:
1) All synchronous writes are logged in indirect mode.
2) The slog is not used.
(1) makes sense because it avoids writing the data twice, which is
obviously a good thing when the user wants maximum pool throughput.
(2), however, is a surprising decision. Since all writes are indirect, the
log record doesn't contain the actual data, only pointers to DMU blocks. As
a result, log records written in logbias=throughput mode are quite small,
and there is little reason to send them to the main pool: slogs are usually
optimized for exactly this kind of small synchronous write.
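To make this concrete, here is a minimal sketch (not the real ZIL on-disk layout, whose actual type is lr_write_t in sys/zil.h) contrasting an indirect write record, which carries only a block pointer to the data dmu_sync() already wrote, with a copied record that embeds the payload. All sketch_* names are invented for illustration.

```c
/*
 * Simplified sketch of why indirect log records are small. These are NOT
 * the real ZIL on-disk structures; the sketch_* names are invented.
 */
#include <stdint.h>

#define SKETCH_BLKPTR_SIZE	128	/* a ZFS block pointer is 128 bytes */

/* Indirect record: references the block dmu_sync() already wrote. */
typedef struct sketch_indirect_record {
	uint64_t lr_txtype;			/* record type, e.g. a write */
	uint64_t lr_offset;			/* offset of the user write */
	uint64_t lr_length;			/* length of the user write */
	uint8_t  lr_blkptr[SKETCH_BLKPTR_SIZE];	/* pointer to the data block */
} sketch_indirect_record_t;

/* Copied record: embeds the write payload in the log itself. */
typedef struct sketch_copied_record {
	uint64_t lr_txtype;
	uint64_t lr_offset;
	uint64_t lr_length;
	uint8_t  lr_data[];			/* lr_length bytes follow */
} sketch_copied_record_t;

/*
 * sizeof(sketch_indirect_record_t) stays at a fixed ~152 bytes no matter
 * how large the original write was, while a copied record grows with
 * lr_length. Small fixed-size records are exactly what a slog is built for.
 */
```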
In fact, the current behavior actively hurts performance, because log blocks
and the data blocks written by dmu_sync() seldom have the same allocation
size and are therefore usually allocated from different metaslabs. This means
that if a spindle has to write both log blocks and DMU blocks (which is
likely under heavy load), it has to seek back and forth between the two.
Allocating the log blocks from the slog instead of the main pool avoids
these unnecessary seeks.
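The policy this commit adopts can be summarized as: try the slog first, and fall back to the main pool only if that allocation fails. Below is a minimal sketch of that decision with invented helper and type names; in ZFS the equivalent choice is made in zio_alloc_zil(), which tries the log metaslab class before the normal class.

```c
/*
 * Minimal sketch of the allocation policy: prefer the dedicated log device
 * (slog) and fall back to the main pool only if that allocation fails.
 * Helper and type names here are invented for illustration.
 */
#include <errno.h>
#include <stdint.h>

struct pool;        /* stand-in for spa_t */
struct block_ptr;   /* stand-in for blkptr_t */

/* Hypothetical allocators standing in for an allocation on each class. */
int alloc_from_log_class(struct pool *p, struct block_ptr *bp, uint64_t size);
int alloc_from_normal_class(struct pool *p, struct block_ptr *bp, uint64_t size);

int
alloc_log_block(struct pool *p, struct block_ptr *bp, uint64_t size,
    int use_slog)
{
	int error = ENOSPC;

	if (use_slog)		/* prefer the slog... */
		error = alloc_from_log_class(p, bp, size);
	if (error != 0)		/* ...but fall back to the main pool */
		error = alloc_from_normal_class(p, bp, size);
	return (error);
}
```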
This commit makes ZFS use the slog on datasets with logbias=throughput.
Real-life performance testing shows a 50% synchronous write performance
increase with some large commit sizes, and no negative effect in other
cases.
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1013
Diffstat (limited to 'module')
| -rw-r--r-- | module/zfs/zil.c | 13 |

1 file changed, 6 insertions, 7 deletions
```diff
diff --git a/module/zfs/zil.c b/module/zfs/zil.c
index 6492dbc1c..220f2d79e 100644
--- a/module/zfs/zil.c
+++ b/module/zfs/zil.c
@@ -520,7 +520,7 @@ zil_create(zilog_t *zilog)
 		}
 
 		error = zio_alloc_zil(zilog->zl_spa, txg, &blk,
-		    ZIL_MIN_BLKSZ, zilog->zl_logbias == ZFS_LOGBIAS_LATENCY);
+		    ZIL_MIN_BLKSZ, B_TRUE);
 		fastwrite = TRUE;
 
 		if (error == 0)
@@ -895,14 +895,13 @@ uint64_t zil_block_buckets[] = {
 };
 
 /*
- * Use the slog as long as the logbias is 'latency' and the current commit size
- * is less than the limit or the total list size is less than 2X the limit.
- * Limit checking is disabled by setting zil_slog_limit to UINT64_MAX.
+ * Use the slog as long as the current commit size is less than the
+ * limit or the total list size is less than 2X the limit. Limit
+ * checking is disabled by setting zil_slog_limit to UINT64_MAX.
  */
 unsigned long zil_slog_limit = 1024 * 1024;
-#define	USE_SLOG(zilog) (((zilog)->zl_logbias == ZFS_LOGBIAS_LATENCY) && \
-	(((zilog)->zl_cur_used < zil_slog_limit) || \
-	((zilog)->zl_itx_list_sz < (zil_slog_limit << 1))))
+#define	USE_SLOG(zilog) (((zilog)->zl_cur_used < zil_slog_limit) || \
+	((zilog)->zl_itx_list_sz < (zil_slog_limit << 1)))
 
 /*
  * Start a log block write and advance to the next log block.
```
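For reference, the new USE_SLOG() test can be exercised on its own. The sketch below restates the macro as a plain function; the struct and function names are invented, and only the two size comparisons and the 1 MiB default of zil_slog_limit come from zil.c.

```c
/*
 * Standalone restatement of the new USE_SLOG() predicate for illustration.
 * The zilog_sketch struct is invented; the checks mirror the macro above.
 */
#include <stdint.h>
#include <stdio.h>

static unsigned long zil_slog_limit = 1024 * 1024;	/* 1 MiB, as in zil.c */

struct zilog_sketch {
	uint64_t zl_cur_used;		/* bytes in the commit being built */
	uint64_t zl_itx_list_sz;	/* total bytes of queued itxs */
};

/* Use the slog when the commit is small or the backlog is below 2x the limit. */
static int
use_slog(const struct zilog_sketch *zl)
{
	return (zl->zl_cur_used < zil_slog_limit ||
	    zl->zl_itx_list_sz < (zil_slog_limit << 1));
}

int
main(void)
{
	struct zilog_sketch small_commit = {
		.zl_cur_used = 512 * 1024, .zl_itx_list_sz = 512 * 1024 };
	struct zilog_sketch large_commit = {
		.zl_cur_used = 4ULL << 20, .zl_itx_list_sz = 3ULL << 20 };

	printf("small commit -> slog: %d\n", use_slog(&small_commit));	/* 1 */
	printf("large commit -> slog: %d\n", use_slog(&large_commit));	/* 0 */
	return (0);
}
```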