authorPrakash Surya <[email protected]>2017-12-05 09:39:16 -0800
committerBrian Behlendorf <[email protected]>2017-12-05 09:39:16 -0800
commit1ce23dcaff6c3d777cb0d9a4a2cf02b43f777d78 (patch)
tree9716b6ef9c90b7060408198cc7eacc1cb2573a98
parent7b3407003fde9eb78ea8ce5ce9165cef7e4795f3 (diff)
OpenZFS 8585 - improve batching done in zil_commit()
Authored by: Prakash Surya <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: George Wilson <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Approved by: Dan McDonald <[email protected]>
Ported-by: Prakash Surya <[email protected]>

Problem
=======

The current implementation of zil_commit() can introduce significant
latency, beyond what is inherent due to the latency of the underlying
storage. The additional latency comes from two main problems:

1. When there are outstanding ZIL blocks being written (i.e. there's
   already a "writer thread" in progress), any new calls to
   zil_commit() will block, waiting for the currently outstanding ZIL
   blocks to complete. The blocks written for each "writer thread" are
   coined a "batch", and there can only ever be a single "batch" being
   written at a time. When a batch is being written, any new ZIL
   transactions will have to wait for the next batch to be written,
   which won't occur until the current batch finishes.

   As a result, the underlying storage may not be used as efficiently
   as possible. While "new" threads enter zil_commit() and are blocked
   waiting for the next batch, it's possible that the underlying
   storage isn't fully utilized by the current batch of ZIL blocks. In
   that case, it'd be better to allow these new threads to generate
   (and issue) a new ZIL block, such that it could be serviced by the
   underlying storage concurrently with the other ZIL blocks that are
   being serviced.

2. Any call to zil_commit() must wait for all ZIL blocks in its "batch"
   to complete, prior to zil_commit() returning. The size of any given
   batch is proportional to the number of ZIL transactions in the queue
   at the time that the batch starts processing the queue; this doesn't
   occur until the previous batch completes. Thus, if there's a lot of
   transactions in the queue, the batch could be composed of many ZIL
   blocks, and each call to zil_commit() will have to wait for all of
   these writes to complete (even if the thread calling zil_commit()
   only cared about one of the transactions in the batch).

To further complicate the situation, these two issues result in the
following side effect:

3. If a given batch takes longer to complete than normal, this results
   in larger batch sizes, which then take longer to complete and
   further drive up the latency of zil_commit(). This can occur for a
   number of reasons, including (but not limited to): transient changes
   in the workload, and storage latency irregularities.

Solution
========

The solution attempted by this change has the following goals:

1. no on-disk changes; maintain current on-disk format.
2. modify the "batch size" to be equal to the "ZIL block size".
3. allow new batches to be generated and issued to disk, while there
   are already batches being serviced by the disk.
4. allow zil_commit() to wait for as few ZIL blocks as possible.
5. use as few ZIL blocks as possible, for the same amount of ZIL
   transactions, without introducing significant latency to any
   individual ZIL transaction. i.e. use fewer, but larger, ZIL blocks.

In theory, with these goals met, the new algorithm will allow the
following improvements:

1. new ZIL blocks can be generated and issued, while there are already
   outstanding ZIL blocks being serviced by the storage.
2. the latency of zil_commit() should be proportional to the underlying
   storage latency, rather than the incoming synchronous workload.
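To make the per-lwb commit waiter model described above concrete, below is a
minimal, hypothetical pthread sketch; the toy_* names are invented for
illustration and are not part of the ZFS code. Each log write block keeps its
own list of waiters, and a committer blocks only until the block holding its
record is stable, rather than until an entire batch of blocks completes:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* One waiter per commit call, analogous in spirit to zil_commit_waiter_t. */
    struct toy_waiter {
        pthread_mutex_t lock;
        pthread_cond_t cv;
        bool done;
        struct toy_waiter *next;    /* linkage in the lwb's waiter list */
    };

    /* A toy "log write block"; it owns the waiters attached to it. */
    struct toy_lwb {
        pthread_mutex_t lock;
        struct toy_waiter *waiters;
    };

    static struct toy_lwb open_lwb = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .waiters = NULL,
    };

    /* Commit path, step 1: link a waiter onto the currently open lwb. */
    static void
    toy_commit_link(struct toy_lwb *lwb, struct toy_waiter *w)
    {
        pthread_mutex_lock(&lwb->lock);
        w->next = lwb->waiters;
        lwb->waiters = w;
        pthread_mutex_unlock(&lwb->lock);
    }

    /* Commit path, step 2: block until this waiter (and only this one) is done. */
    static void
    toy_commit_wait(struct toy_waiter *w)
    {
        pthread_mutex_lock(&w->lock);
        while (!w->done)
            pthread_cond_wait(&w->cv, &w->lock);
        pthread_mutex_unlock(&w->lock);
    }

    /* Write-completion path: wake only the waiters linked to this lwb. */
    static void
    toy_lwb_done(struct toy_lwb *lwb)
    {
        pthread_mutex_lock(&lwb->lock);
        for (struct toy_waiter *w = lwb->waiters; w != NULL; w = w->next) {
            pthread_mutex_lock(&w->lock);
            w->done = true;
            pthread_cond_broadcast(&w->cv);
            pthread_mutex_unlock(&w->lock);
        }
        lwb->waiters = NULL;
        pthread_mutex_unlock(&lwb->lock);
    }

    static void *
    completion_thread(void *arg)
    {
        (void) arg;
        toy_lwb_done(&open_lwb);    /* stand-in for the zio/flush callback */
        return (NULL);
    }

    int
    main(void)
    {
        struct toy_waiter w = {
            .lock = PTHREAD_MUTEX_INITIALIZER,
            .cv = PTHREAD_COND_INITIALIZER,
            .done = false,
            .next = NULL,
        };
        pthread_t tid;

        toy_commit_link(&open_lwb, &w);
        pthread_create(&tid, NULL, completion_thread, NULL);
        toy_commit_wait(&w);        /* returns once *this* lwb is stable */
        pthread_join(tid, NULL);
        printf("commit waiter signalled\n");
        return (0);
    }

In the actual change, the analogous pieces are the lwb_waiters list on each
lwb and the zil_commit_waiter_t structure; waiters are signalled from
zil_lwb_flush_vdevs_done() once the block write and its vdev flushes complete
(see the zil_impl.h and zil.c hunks below).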
Porting Notes
=============

Due to the changes made in commit 119a394ab0, the lifetime of an itx
structure differs from that in OpenZFS. Specifically, the itx structure
is kept around until the data associated with the itx is considered to
be safe on disk; this is so that the itx's callback can be called after
the data is committed to stable storage. Since OpenZFS doesn't have
this itx callback mechanism, it's able to destroy the itx structure
immediately after the itx is committed to an lwb (before the lwb is
written to disk).

To support this difference, and to ensure the itx's callbacks can still
be called after the itx's data is on disk, a few changes had to be
made:

 * A list of itxs was added to the lwb structure. This list contains
   all of the itxs that have been committed to the lwb, such that the
   callbacks for these itxs can be called from
   zil_lwb_flush_vdevs_done(), after the data for the itxs is
   committed to disk.

 * A list of itxs was added on the stack of the
   zil_process_commit_list() function; the "nolwb_itxs" list. In some
   circumstances, an itx may not be committed to an lwb (e.g. if
   allocating the "next" ZIL block on disk fails), so this list is
   used to keep track of which itxs fall into this state, such that
   their callbacks can be called after the ZIL's writer pipeline is
   "stalled".

 * The logic to actually call the itx's callback was moved into the
   zil_itx_destroy() function. Since all consumers of
   zil_itx_destroy() were effectively performing the same logic (i.e.
   if callback is non-null, call the callback), it seemed like useful
   code cleanup to consolidate this logic into a single function.

Additionally, the existing Linux tracepoint infrastructure dealing
with the ZIL's probes and structures had to be updated to reflect
these code changes. Specifically:

 * The "zil__cw1" and "zil__cw2" probes were removed, so they had to
   be removed from "trace_zil.h" as well.

 * Some of the zilog structure's fields were removed, which affected
   the tracepoint definitions of the structure.

 * New tracepoints had to be added for the following 3 new probes:
   * zil__process__commit__itx
   * zil__process__normal__itx
   * zil__commit__io__error

OpenZFS-issue: https://www.illumos.org/issues/8585
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/5d95a3a
Closes #6566
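As a compact illustration of the itx callback consolidation described in the
porting notes, here is a hypothetical sketch; the toy_* names are invented
and this is not the zil.c code. The point is simply that every disposal path,
whether the itx reached an lwb or ended up on the "nolwb" list, funnels
through one destroy routine that runs the callback at most once before
freeing the record:

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*toy_callback_t)(void *);

    /* A stripped-down stand-in for an itx carrying an optional callback. */
    struct toy_itx {
        toy_callback_t cb;
        void *cb_data;
    };

    static struct toy_itx *
    toy_itx_create(toy_callback_t cb, void *cb_data)
    {
        struct toy_itx *itx = malloc(sizeof (*itx));

        if (itx == NULL)
            abort();
        itx->cb = cb;
        itx->cb_data = cb_data;
        return (itx);
    }

    /* Analogous to the consolidated zil_itx_destroy(): run callback, then free. */
    static void
    toy_itx_destroy(struct toy_itx *itx)
    {
        if (itx->cb != NULL)
            itx->cb(itx->cb_data);
        free(itx);
    }

    static void
    notify(void *arg)
    {
        printf("itx \"%s\" is done\n", (const char *)arg);
    }

    int
    main(void)
    {
        /* Path 1: the itx reached an lwb and is destroyed after the flush. */
        struct toy_itx *committed = toy_itx_create(notify, (void *)"committed");

        /* Path 2: the itx never made it into an lwb (the "nolwb" case). */
        struct toy_itx *stalled = toy_itx_create(notify, (void *)"nolwb");

        /* Both paths call the same destroy routine exactly once. */
        toy_itx_destroy(committed);
        toy_itx_destroy(stalled);
        return (0);
    }

The real consolidation is visible in the zil_itx_destroy() hunk further down
in this diff.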
-rw-r--r--  cmd/ztest/ztest.c              |   11
-rw-r--r--  include/sys/dmu.h              |    2
-rw-r--r--  include/sys/trace_zil.h        |  234
-rw-r--r--  include/sys/zil.h              |    8
-rw-r--r--  include/sys/zil_impl.h         |   73
-rw-r--r--  include/sys/zio.h              |    1
-rw-r--r--  lib/libspl/include/assert.h    |    3
-rw-r--r--  lib/libspl/include/sys/time.h  |   14
-rw-r--r--  module/zfs/dmu.c               |    7
-rw-r--r--  module/zfs/txg.c               |   12
-rw-r--r--  module/zfs/zfs_vnops.c         |   12
-rw-r--r--  module/zfs/zil.c               | 1542
-rw-r--r--  module/zfs/zio.c               |   33
-rw-r--r--  module/zfs/zvol.c              |   11
-rwxr-xr-x  scripts/zloop.sh               |    2
15 files changed, 1587 insertions(+), 378 deletions(-)
diff --git a/cmd/ztest/ztest.c b/cmd/ztest/ztest.c
index e1b1ae3ff..46dd903f8 100644
--- a/cmd/ztest/ztest.c
+++ b/cmd/ztest/ztest.c
@@ -2144,14 +2144,15 @@ ztest_get_done(zgd_t *zgd, int error)
ztest_object_unlock(zd, object);
if (error == 0 && zgd->zgd_bp)
- zil_add_block(zgd->zgd_zilog, zgd->zgd_bp);
+ zil_lwb_add_block(zgd->zgd_lwb, zgd->zgd_bp);
umem_free(zgd, sizeof (*zgd));
umem_free(zzp, sizeof (*zzp));
}
static int
-ztest_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
+ztest_get_data(void *arg, lr_write_t *lr, char *buf, struct lwb *lwb,
+ zio_t *zio)
{
ztest_ds_t *zd = arg;
objset_t *os = zd->zd_os;
@@ -2166,6 +2167,10 @@ ztest_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
int error;
ztest_zgd_private_t *zgd_private;
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT3P(zio, !=, NULL);
+ ASSERT3U(size, !=, 0);
+
ztest_object_lock(zd, object, RL_READER);
error = dmu_bonus_hold(os, object, FTAG, &db);
if (error) {
@@ -2186,7 +2191,7 @@ ztest_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
db = NULL;
zgd = umem_zalloc(sizeof (*zgd), UMEM_NOFAIL);
- zgd->zgd_zilog = zd->zd_zilog;
+ zgd->zgd_lwb = lwb;
zgd_private = umem_zalloc(sizeof (ztest_zgd_private_t), UMEM_NOFAIL);
zgd_private->z_zd = zd;
zgd_private->z_object = object;
diff --git a/include/sys/dmu.h b/include/sys/dmu.h
index 8a9291900..f252a9cb6 100644
--- a/include/sys/dmu.h
+++ b/include/sys/dmu.h
@@ -982,7 +982,7 @@ uint64_t dmu_tx_get_txg(dmu_tx_t *tx);
* {zfs,zvol,ztest}_get_done() args
*/
typedef struct zgd {
- struct zilog *zgd_zilog;
+ struct lwb *zgd_lwb;
struct blkptr *zgd_bp;
dmu_buf_t *zgd_db;
struct rl *zgd_rl;
diff --git a/include/sys/trace_zil.h b/include/sys/trace_zil.h
index eaa391270..ff16c8686 100644
--- a/include/sys/trace_zil.h
+++ b/include/sys/trace_zil.h
@@ -33,89 +33,181 @@
#include <linux/tracepoint.h>
#include <sys/types.h>
+#define ZILOG_TP_STRUCT_ENTRY \
+ __field(uint64_t, zl_lr_seq) \
+ __field(uint64_t, zl_commit_lr_seq) \
+ __field(uint64_t, zl_destroy_txg) \
+ __field(uint64_t, zl_replaying_seq) \
+ __field(uint32_t, zl_suspend) \
+ __field(uint8_t, zl_suspending) \
+ __field(uint8_t, zl_keep_first) \
+ __field(uint8_t, zl_replay) \
+ __field(uint8_t, zl_stop_sync) \
+ __field(uint8_t, zl_logbias) \
+ __field(uint8_t, zl_sync) \
+ __field(int, zl_parse_error) \
+ __field(uint64_t, zl_parse_blk_seq) \
+ __field(uint64_t, zl_parse_lr_seq) \
+ __field(uint64_t, zl_parse_blk_count) \
+ __field(uint64_t, zl_parse_lr_count) \
+ __field(uint64_t, zl_cur_used) \
+ __field(clock_t, zl_replay_time) \
+ __field(uint64_t, zl_replay_blks)
+
+#define ZILOG_TP_FAST_ASSIGN \
+ __entry->zl_lr_seq = zilog->zl_lr_seq; \
+ __entry->zl_commit_lr_seq = zilog->zl_commit_lr_seq; \
+ __entry->zl_destroy_txg = zilog->zl_destroy_txg; \
+ __entry->zl_replaying_seq = zilog->zl_replaying_seq; \
+ __entry->zl_suspend = zilog->zl_suspend; \
+ __entry->zl_suspending = zilog->zl_suspending; \
+ __entry->zl_keep_first = zilog->zl_keep_first; \
+ __entry->zl_replay = zilog->zl_replay; \
+ __entry->zl_stop_sync = zilog->zl_stop_sync; \
+ __entry->zl_logbias = zilog->zl_logbias; \
+ __entry->zl_sync = zilog->zl_sync; \
+ __entry->zl_parse_error = zilog->zl_parse_error; \
+ __entry->zl_parse_blk_seq = zilog->zl_parse_blk_seq; \
+ __entry->zl_parse_lr_seq = zilog->zl_parse_lr_seq; \
+ __entry->zl_parse_blk_count = zilog->zl_parse_blk_count;\
+ __entry->zl_parse_lr_count = zilog->zl_parse_lr_count; \
+ __entry->zl_cur_used = zilog->zl_cur_used; \
+ __entry->zl_replay_time = zilog->zl_replay_time; \
+ __entry->zl_replay_blks = zilog->zl_replay_blks;
+
+#define ZILOG_TP_PRINTK_FMT \
+ "zl { lr_seq %llu commit_lr_seq %llu destroy_txg %llu " \
+ "replaying_seq %llu suspend %u suspending %u keep_first %u " \
+ "replay %u stop_sync %u logbias %u sync %u " \
+ "parse_error %u parse_blk_seq %llu parse_lr_seq %llu " \
+ "parse_blk_count %llu parse_lr_count %llu " \
+ "cur_used %llu replay_time %lu replay_blks %llu }"
+
+#define ZILOG_TP_PRINTK_ARGS \
+ __entry->zl_lr_seq, __entry->zl_commit_lr_seq, \
+ __entry->zl_destroy_txg, __entry->zl_replaying_seq, \
+ __entry->zl_suspend, __entry->zl_suspending, \
+ __entry->zl_keep_first, __entry->zl_replay, \
+ __entry->zl_stop_sync, __entry->zl_logbias, __entry->zl_sync, \
+ __entry->zl_parse_error, __entry->zl_parse_blk_seq, \
+ __entry->zl_parse_lr_seq, __entry->zl_parse_blk_count, \
+ __entry->zl_parse_lr_count, __entry->zl_cur_used, \
+ __entry->zl_replay_time, __entry->zl_replay_blks
+
+#define ITX_TP_STRUCT_ENTRY \
+ __field(itx_wr_state_t, itx_wr_state) \
+ __field(uint8_t, itx_sync) \
+ __field(zil_callback_t, itx_callback) \
+ __field(void *, itx_callback_data) \
+ __field(uint64_t, itx_oid) \
+ \
+ __field(uint64_t, lrc_txtype) \
+ __field(uint64_t, lrc_reclen) \
+ __field(uint64_t, lrc_txg) \
+ __field(uint64_t, lrc_seq)
+
+#define ITX_TP_FAST_ASSIGN \
+ __entry->itx_wr_state = itx->itx_wr_state; \
+ __entry->itx_sync = itx->itx_sync; \
+ __entry->itx_callback = itx->itx_callback; \
+ __entry->itx_callback_data = itx->itx_callback_data; \
+ __entry->itx_oid = itx->itx_oid; \
+ \
+ __entry->lrc_txtype = itx->itx_lr.lrc_txtype; \
+ __entry->lrc_reclen = itx->itx_lr.lrc_reclen; \
+ __entry->lrc_txg = itx->itx_lr.lrc_txg; \
+ __entry->lrc_seq = itx->itx_lr.lrc_seq;
+
+#define ITX_TP_PRINTK_FMT \
+ "itx { wr_state %u sync %u callback %p callback_data %p oid %llu" \
+ " { txtype %llu reclen %llu txg %llu seq %llu } }"
+
+#define ITX_TP_PRINTK_ARGS \
+ __entry->itx_wr_state, __entry->itx_sync, __entry->itx_callback,\
+ __entry->itx_callback_data, __entry->itx_oid, \
+ __entry->lrc_txtype, __entry->lrc_reclen, __entry->lrc_txg, \
+ __entry->lrc_seq
+
+#define ZCW_TP_STRUCT_ENTRY \
+ __field(lwb_t *, zcw_lwb) \
+ __field(boolean_t, zcw_done) \
+ __field(int, zcw_zio_error) \
+
+#define ZCW_TP_FAST_ASSIGN \
+ __entry->zcw_lwb = zcw->zcw_lwb; \
+ __entry->zcw_done = zcw->zcw_done; \
+ __entry->zcw_zio_error = zcw->zcw_zio_error;
+
+#define ZCW_TP_PRINTK_FMT \
+ "zcw { lwb %p done %u error %u }"
+
+#define ZCW_TP_PRINTK_ARGS \
+ __entry->zcw_lwb, __entry->zcw_done, __entry->zcw_zio_error
+
/*
- * Generic support for one argument tracepoints of the form:
+ * Generic support for two argument tracepoints of the form:
*
- * DTRACE_PROBE1(...,
- * zilog_t *, ...);
+ * DTRACE_PROBE2(...,
+ * zilog_t *, ...,
+ * itx_t *, ...);
*/
/* BEGIN CSTYLED */
-DECLARE_EVENT_CLASS(zfs_zil_class,
- TP_PROTO(zilog_t *zilog),
- TP_ARGS(zilog),
+DECLARE_EVENT_CLASS(zfs_zil_process_itx_class,
+ TP_PROTO(zilog_t *zilog, itx_t *itx),
+ TP_ARGS(zilog, itx),
TP_STRUCT__entry(
- __field(uint64_t, zl_lr_seq)
- __field(uint64_t, zl_commit_lr_seq)
- __field(uint64_t, zl_destroy_txg)
- __field(uint64_t, zl_replaying_seq)
- __field(uint32_t, zl_suspend)
- __field(uint8_t, zl_suspending)
- __field(uint8_t, zl_keep_first)
- __field(uint8_t, zl_replay)
- __field(uint8_t, zl_stop_sync)
- __field(uint8_t, zl_writer)
- __field(uint8_t, zl_logbias)
- __field(uint8_t, zl_sync)
- __field(int, zl_parse_error)
- __field(uint64_t, zl_parse_blk_seq)
- __field(uint64_t, zl_parse_lr_seq)
- __field(uint64_t, zl_parse_blk_count)
- __field(uint64_t, zl_parse_lr_count)
- __field(uint64_t, zl_next_batch)
- __field(uint64_t, zl_com_batch)
- __field(uint64_t, zl_cur_used)
- __field(clock_t, zl_replay_time)
- __field(uint64_t, zl_replay_blks)
+ ZILOG_TP_STRUCT_ENTRY
+ ITX_TP_STRUCT_ENTRY
),
TP_fast_assign(
- __entry->zl_lr_seq = zilog->zl_lr_seq;
- __entry->zl_commit_lr_seq = zilog->zl_commit_lr_seq;
- __entry->zl_destroy_txg = zilog->zl_destroy_txg;
- __entry->zl_replaying_seq = zilog->zl_replaying_seq;
- __entry->zl_suspend = zilog->zl_suspend;
- __entry->zl_suspending = zilog->zl_suspending;
- __entry->zl_keep_first = zilog->zl_keep_first;
- __entry->zl_replay = zilog->zl_replay;
- __entry->zl_stop_sync = zilog->zl_stop_sync;
- __entry->zl_writer = zilog->zl_writer;
- __entry->zl_logbias = zilog->zl_logbias;
- __entry->zl_sync = zilog->zl_sync;
- __entry->zl_parse_error = zilog->zl_parse_error;
- __entry->zl_parse_blk_seq = zilog->zl_parse_blk_seq;
- __entry->zl_parse_lr_seq = zilog->zl_parse_lr_seq;
- __entry->zl_parse_blk_count = zilog->zl_parse_blk_count;
- __entry->zl_parse_lr_count = zilog->zl_parse_lr_count;
- __entry->zl_next_batch = zilog->zl_next_batch;
- __entry->zl_com_batch = zilog->zl_com_batch;
- __entry->zl_cur_used = zilog->zl_cur_used;
- __entry->zl_replay_time = zilog->zl_replay_time;
- __entry->zl_replay_blks = zilog->zl_replay_blks;
+ ZILOG_TP_FAST_ASSIGN
+ ITX_TP_FAST_ASSIGN
),
- TP_printk("zl { lr_seq %llu commit_lr_seq %llu destroy_txg %llu "
- "replaying_seq %llu suspend %u suspending %u keep_first %u "
- "replay %u stop_sync %u writer %u logbias %u sync %u "
- "parse_error %u parse_blk_seq %llu parse_lr_seq %llu "
- "parse_blk_count %llu parse_lr_count %llu next_batch %llu "
- "com_batch %llu cur_used %llu replay_time %lu replay_blks %llu }",
- __entry->zl_lr_seq, __entry->zl_commit_lr_seq,
- __entry->zl_destroy_txg, __entry->zl_replaying_seq,
- __entry->zl_suspend, __entry->zl_suspending, __entry->zl_keep_first,
- __entry->zl_replay, __entry->zl_stop_sync, __entry->zl_writer,
- __entry->zl_logbias, __entry->zl_sync, __entry->zl_parse_error,
- __entry->zl_parse_blk_seq, __entry->zl_parse_lr_seq,
- __entry->zl_parse_blk_count, __entry->zl_parse_lr_count,
- __entry->zl_next_batch, __entry->zl_com_batch, __entry->zl_cur_used,
- __entry->zl_replay_time, __entry->zl_replay_blks)
+ TP_printk(
+ ZILOG_TP_PRINTK_FMT " " ITX_TP_PRINTK_FMT,
+ ZILOG_TP_PRINTK_ARGS, ITX_TP_PRINTK_ARGS)
);
/* END CSTYLED */
/* BEGIN CSTYLED */
-#define DEFINE_ZIL_EVENT(name) \
-DEFINE_EVENT(zfs_zil_class, name, \
- TP_PROTO(zilog_t *zilog), \
- TP_ARGS(zilog))
-DEFINE_ZIL_EVENT(zfs_zil__cw1);
-DEFINE_ZIL_EVENT(zfs_zil__cw2);
+#define DEFINE_ZIL_PROCESS_ITX_EVENT(name) \
+DEFINE_EVENT(zfs_zil_process_itx_class, name, \
+ TP_PROTO(zilog_t *zilog, itx_t *itx), \
+ TP_ARGS(zilog, itx))
+DEFINE_ZIL_PROCESS_ITX_EVENT(zfs_zil__process__commit__itx);
+DEFINE_ZIL_PROCESS_ITX_EVENT(zfs_zil__process__normal__itx);
+/* END CSTYLED */
+
+/*
+ * Generic support for two argument tracepoints of the form:
+ *
+ * DTRACE_PROBE2(...,
+ * zilog_t *, ...,
+ * zil_commit_waiter_t *, ...);
+ */
+/* BEGIN CSTYLED */
+DECLARE_EVENT_CLASS(zfs_zil_commit_io_error_class,
+ TP_PROTO(zilog_t *zilog, zil_commit_waiter_t *zcw),
+ TP_ARGS(zilog, zcw),
+ TP_STRUCT__entry(
+ ZILOG_TP_STRUCT_ENTRY
+ ZCW_TP_STRUCT_ENTRY
+ ),
+ TP_fast_assign(
+ ZILOG_TP_FAST_ASSIGN
+ ZCW_TP_FAST_ASSIGN
+ ),
+ TP_printk(
+ ZILOG_TP_PRINTK_FMT " " ZCW_TP_PRINTK_FMT,
+ ZILOG_TP_PRINTK_ARGS, ZCW_TP_PRINTK_ARGS)
+);
+
+/* BEGIN CSTYLED */
+#define DEFINE_ZIL_COMMIT_IO_ERROR_EVENT(name) \
+DEFINE_EVENT(zfs_zil_commit_io_error_class, name, \
+ TP_PROTO(zilog_t *zilog, zil_commit_waiter_t *zcw), \
+ TP_ARGS(zilog, zcw))
+DEFINE_ZIL_COMMIT_IO_ERROR_EVENT(zfs_zil__commit__io__error);
/* END CSTYLED */
#endif /* _TRACE_ZIL_H */
diff --git a/include/sys/zil.h b/include/sys/zil.h
index 442fee4a1..5513c7ce8 100644
--- a/include/sys/zil.h
+++ b/include/sys/zil.h
@@ -40,6 +40,7 @@ extern "C" {
struct dsl_pool;
struct dsl_dataset;
+struct lwb;
/*
* Intent log format:
@@ -140,6 +141,7 @@ typedef enum zil_create {
/*
* Intent log transaction types and record structures
*/
+#define TX_COMMIT 0 /* Commit marker (no on-disk state) */
#define TX_CREATE 1 /* Create file */
#define TX_MKDIR 2 /* Make directory */
#define TX_MKXATTR 3 /* Make XATTR directory */
@@ -465,7 +467,8 @@ typedef int zil_parse_blk_func_t(zilog_t *zilog, blkptr_t *bp, void *arg,
typedef int zil_parse_lr_func_t(zilog_t *zilog, lr_t *lr, void *arg,
uint64_t txg);
typedef int zil_replay_func_t(void *arg1, void *arg2, boolean_t byteswap);
-typedef int zil_get_data_t(void *arg, lr_write_t *lr, char *dbuf, zio_t *zio);
+typedef int zil_get_data_t(void *arg, lr_write_t *lr, char *dbuf,
+ struct lwb *lwb, zio_t *zio);
extern int zil_parse(zilog_t *zilog, zil_parse_blk_func_t *parse_blk_func,
zil_parse_lr_func_t *parse_lr_func, void *arg, uint64_t txg,
@@ -503,7 +506,8 @@ extern void zil_clean(zilog_t *zilog, uint64_t synced_txg);
extern int zil_suspend(const char *osname, void **cookiep);
extern void zil_resume(void *cookie);
-extern void zil_add_block(zilog_t *zilog, const blkptr_t *bp);
+extern void zil_lwb_add_block(struct lwb *lwb, const blkptr_t *bp);
+extern void zil_lwb_add_txg(struct lwb *lwb, uint64_t txg);
extern int zil_bp_tree_add(zilog_t *zilog, const blkptr_t *bp);
extern void zil_set_sync(zilog_t *zilog, uint64_t syncval);
diff --git a/include/sys/zil_impl.h b/include/sys/zil_impl.h
index dd5304b79..3f69b3265 100644
--- a/include/sys/zil_impl.h
+++ b/include/sys/zil_impl.h
@@ -20,7 +20,7 @@
*/
/*
* Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2012 by Delphix. All rights reserved.
+ * Copyright (c) 2012, 2017 by Delphix. All rights reserved.
*/
/* Portions Copyright 2010 Robert Milkowski */
@@ -36,7 +36,30 @@ extern "C" {
#endif
/*
- * Log write buffer.
+ * Possible states for a given lwb structure. An lwb will start out in
+ * the "closed" state, and then transition to the "opened" state via a
+ * call to zil_lwb_write_open(). After the lwb is "open", it can
+ * transition into the "issued" state via zil_lwb_write_issue(). After
+ * the lwb's zio completes, and the vdevs are flushed, the lwb will
+ * transition into the "done" state via zil_lwb_write_done(), and the
+ * structure eventually freed.
+ */
+typedef enum {
+ LWB_STATE_CLOSED,
+ LWB_STATE_OPENED,
+ LWB_STATE_ISSUED,
+ LWB_STATE_DONE,
+ LWB_NUM_STATES
+} lwb_state_t;
+
+/*
+ * Log write block (lwb)
+ *
+ * Prior to an lwb being issued to disk via zil_lwb_write_issue(), it
+ * will be protected by the zilog's "zl_writer_lock". Basically, prior
+ * to it being issued, it will only be accessed by the thread that's
+ * holding the "zl_writer_lock". After the lwb is issued, the zilog's
+ * "zl_lock" is used to protect the lwb against concurrent access.
*/
typedef struct lwb {
zilog_t *lwb_zilog; /* back pointer to log struct */
@@ -45,14 +68,46 @@ typedef struct lwb {
boolean_t lwb_slog; /* lwb_blk is on SLOG device */
int lwb_nused; /* # used bytes in buffer */
int lwb_sz; /* size of block and buffer */
+ lwb_state_t lwb_state; /* the state of this lwb */
char *lwb_buf; /* log write buffer */
- zio_t *lwb_zio; /* zio for this buffer */
+ zio_t *lwb_write_zio; /* zio for the lwb buffer */
+ zio_t *lwb_root_zio; /* root zio for lwb write and flushes */
dmu_tx_t *lwb_tx; /* tx for log block allocation */
uint64_t lwb_max_txg; /* highest txg in this lwb */
list_node_t lwb_node; /* zilog->zl_lwb_list linkage */
+ list_t lwb_itxs; /* list of itx's */
+ list_t lwb_waiters; /* list of zil_commit_waiter's */
+ avl_tree_t lwb_vdev_tree; /* vdevs to flush after lwb write */
+ kmutex_t lwb_vdev_lock; /* protects lwb_vdev_tree */
+ hrtime_t lwb_issued_timestamp; /* when was the lwb issued? */
} lwb_t;
/*
+ * ZIL commit waiter.
+ *
+ * This structure is allocated each time zil_commit() is called, and is
+ * used by zil_commit() to communicate with other parts of the ZIL, such
+ * that zil_commit() can know when it is safe for it to return. For more
+ * details, see the comment above zil_commit().
+ *
+ * The "zcw_lock" field is used to protect the commit waiter against
+ * concurrent access. This lock is often acquired while already holding
+ * the zilog's "zl_writer_lock" or "zl_lock"; see the functions
+ * zil_process_commit_list() and zil_lwb_flush_vdevs_done() as examples
+ * of this. Thus, one must be careful not to acquire the
+ * "zl_writer_lock" or "zl_lock" when already holding the "zcw_lock";
+ * e.g. see the zil_commit_waiter_timeout() function.
+ */
+typedef struct zil_commit_waiter {
+ kcondvar_t zcw_cv; /* signalled when "done" */
+ kmutex_t zcw_lock; /* protects fields of this struct */
+ list_node_t zcw_node; /* linkage in lwb_t:lwb_waiter list */
+ lwb_t *zcw_lwb; /* back pointer to lwb when linked */
+ boolean_t zcw_done; /* B_TRUE when "done", else B_FALSE */
+ int zcw_zio_error; /* contains the zio io_error value */
+} zil_commit_waiter_t;
+
+/*
* Intent log transaction lists
*/
typedef struct itxs {
@@ -94,20 +149,20 @@ struct zilog {
const zil_header_t *zl_header; /* log header buffer */
objset_t *zl_os; /* object set we're logging */
zil_get_data_t *zl_get_data; /* callback to get object content */
- zio_t *zl_root_zio; /* log writer root zio */
+ lwb_t *zl_last_lwb_opened; /* most recent lwb opened */
+ hrtime_t zl_last_lwb_latency; /* zio latency of last lwb done */
uint64_t zl_lr_seq; /* on-disk log record sequence number */
uint64_t zl_commit_lr_seq; /* last committed on-disk lr seq */
uint64_t zl_destroy_txg; /* txg of last zil_destroy() */
uint64_t zl_replayed_seq[TXG_SIZE]; /* last replayed rec seq */
uint64_t zl_replaying_seq; /* current replay seq number */
uint32_t zl_suspend; /* log suspend count */
- kcondvar_t zl_cv_writer; /* log writer thread completion */
kcondvar_t zl_cv_suspend; /* log suspend completion */
uint8_t zl_suspending; /* log is currently suspending */
uint8_t zl_keep_first; /* keep first log block in destroy */
uint8_t zl_replay; /* replaying records while set */
uint8_t zl_stop_sync; /* for debugging */
- uint8_t zl_writer; /* boolean: write setup in progress */
+ kmutex_t zl_writer_lock; /* single writer, per ZIL, at a time */
uint8_t zl_logbias; /* latency or throughput */
uint8_t zl_sync; /* synchronous or asynchronous */
int zl_parse_error; /* last zil_parse() error */
@@ -115,15 +170,10 @@ struct zilog {
uint64_t zl_parse_lr_seq; /* highest lr seq on last parse */
uint64_t zl_parse_blk_count; /* number of blocks parsed */
uint64_t zl_parse_lr_count; /* number of log records parsed */
- uint64_t zl_next_batch; /* next batch number */
- uint64_t zl_com_batch; /* committed batch number */
- kcondvar_t zl_cv_batch[2]; /* batch condition variables */
itxg_t zl_itxg[TXG_SIZE]; /* intent log txg chains */
list_t zl_itx_commit_list; /* itx list to be committed */
uint64_t zl_cur_used; /* current commit log size used */
list_t zl_lwb_list; /* in-flight log write list */
- kmutex_t zl_vdev_lock; /* protects zl_vdev_tree */
- avl_tree_t zl_vdev_tree; /* vdevs to flush in zil_commit() */
avl_tree_t zl_bp_tree; /* track bps during log parse */
clock_t zl_replay_time; /* lbolt of when replay started */
uint64_t zl_replay_blks; /* number of log blocks replayed */
@@ -131,6 +181,7 @@ struct zilog {
uint_t zl_prev_blks[ZIL_PREV_BLKS]; /* size - sector rounded */
uint_t zl_prev_rotor; /* rotor for zl_prev[] */
txg_node_t zl_dirty_link; /* protected by dp_dirty_zilogs list */
+ uint64_t zl_dirty_max_txg; /* highest txg used to dirty zilog */
};
typedef struct zil_bp_node {
diff --git a/include/sys/zio.h b/include/sys/zio.h
index 30c4ee3ce..1f9f48268 100644
--- a/include/sys/zio.h
+++ b/include/sys/zio.h
@@ -596,6 +596,7 @@ extern enum zio_checksum zio_checksum_dedup_select(spa_t *spa,
extern enum zio_compress zio_compress_select(spa_t *spa,
enum zio_compress child, enum zio_compress parent);
+extern void zio_cancel(zio_t *zio);
extern void zio_suspend(spa_t *spa, zio_t *zio);
extern int zio_resume(spa_t *spa);
extern void zio_resume_wait(spa_t *spa);
diff --git a/lib/libspl/include/assert.h b/lib/libspl/include/assert.h
index 026d6fef9..c080bc8aa 100644
--- a/lib/libspl/include/assert.h
+++ b/lib/libspl/include/assert.h
@@ -76,6 +76,7 @@ do { \
(u_longlong_t)__left, #OP, (u_longlong_t)__right); \
} while (0)
+#define VERIFY3B(x, y, z) VERIFY3_IMPL(x, y, z, boolean_t)
#define VERIFY3S(x, y, z) VERIFY3_IMPL(x, y, z, int64_t)
#define VERIFY3U(x, y, z) VERIFY3_IMPL(x, y, z, uint64_t)
#define VERIFY3P(x, y, z) VERIFY3_IMPL(x, y, z, uintptr_t)
@@ -94,6 +95,7 @@ do { \
__compile_time_assertion__ ## y[(x) ? 1 : -1]
#ifdef NDEBUG
+#define ASSERT3B(x, y, z) ((void)0)
#define ASSERT3S(x, y, z) ((void)0)
#define ASSERT3U(x, y, z) ((void)0)
#define ASSERT3P(x, y, z) ((void)0)
@@ -104,6 +106,7 @@ do { \
#define IMPLY(A, B) ((void)0)
#define EQUIV(A, B) ((void)0)
#else
+#define ASSERT3B(x, y, z) VERIFY3B(x, y, z)
#define ASSERT3S(x, y, z) VERIFY3S(x, y, z)
#define ASSERT3U(x, y, z) VERIFY3U(x, y, z)
#define ASSERT3P(x, y, z) VERIFY3P(x, y, z)
diff --git a/lib/libspl/include/sys/time.h b/lib/libspl/include/sys/time.h
index f05fcaa1c..b3dd74dd5 100644
--- a/lib/libspl/include/sys/time.h
+++ b/lib/libspl/include/sys/time.h
@@ -51,14 +51,22 @@
#endif
#ifndef MSEC2NSEC
-#define MSEC2NSEC(m) ((hrtime_t)(m) * (NANOSEC / MILLISEC))
+#define MSEC2NSEC(m) ((hrtime_t)(m) * (NANOSEC / MILLISEC))
#endif
#ifndef NSEC2MSEC
-#define NSEC2MSEC(n) ((n) / (NANOSEC / MILLISEC))
+#define NSEC2MSEC(n) ((n) / (NANOSEC / MILLISEC))
#endif
-#ifndef NSEC2SEC
+#ifndef USEC2NSEC
+#define USEC2NSEC(m) ((hrtime_t)(m) * (NANOSEC / MICROSEC))
+#endif
+
+#ifndef NSEC2USEC
+#define NSEC2USEC(n) ((n) / (NANOSEC / MICROSEC))
+#endif
+
+#ifndef NSEC2SEC
#define NSEC2SEC(n) ((n) / (NANOSEC / SEC))
#endif
diff --git a/module/zfs/dmu.c b/module/zfs/dmu.c
index 108dfe157..56740ae37 100644
--- a/module/zfs/dmu.c
+++ b/module/zfs/dmu.c
@@ -1783,6 +1783,13 @@ dmu_sync_late_arrival(zio_t *pio, objset_t *os, dmu_sync_cb_t *done, zgd_t *zgd,
return (SET_ERROR(EIO));
}
+ /*
+ * In order to prevent the zgd's lwb from being free'd prior to
+ * dmu_sync_late_arrival_done() being called, we have to ensure
+ * the lwb's "max txg" takes this tx's txg into account.
+ */
+ zil_lwb_add_txg(zgd->zgd_lwb, dmu_tx_get_txg(tx));
+
dsa = kmem_alloc(sizeof (dmu_sync_arg_t), KM_SLEEP);
dsa->dsa_dr = NULL;
dsa->dsa_done = done;
diff --git a/module/zfs/txg.c b/module/zfs/txg.c
index 078839a32..bf8544507 100644
--- a/module/zfs/txg.c
+++ b/module/zfs/txg.c
@@ -160,7 +160,7 @@ txg_fini(dsl_pool_t *dp)
tx_state_t *tx = &dp->dp_tx;
int c;
- ASSERT(tx->tx_threads == 0);
+ ASSERT0(tx->tx_threads);
mutex_destroy(&tx->tx_sync_lock);
@@ -201,7 +201,7 @@ txg_sync_start(dsl_pool_t *dp)
dprintf("pool %p\n", dp);
- ASSERT(tx->tx_threads == 0);
+ ASSERT0(tx->tx_threads);
tx->tx_threads = 2;
@@ -263,7 +263,7 @@ txg_sync_stop(dsl_pool_t *dp)
/*
* Finish off any work in progress.
*/
- ASSERT(tx->tx_threads == 2);
+ ASSERT3U(tx->tx_threads, ==, 2);
/*
* We need to ensure that we've vacated the deferred space_maps.
@@ -275,7 +275,7 @@ txg_sync_stop(dsl_pool_t *dp)
*/
mutex_enter(&tx->tx_sync_lock);
- ASSERT(tx->tx_threads == 2);
+ ASSERT3U(tx->tx_threads, ==, 2);
tx->tx_exiting = 1;
@@ -648,7 +648,7 @@ txg_wait_synced(dsl_pool_t *dp, uint64_t txg)
ASSERT(!dsl_pool_config_held(dp));
mutex_enter(&tx->tx_sync_lock);
- ASSERT(tx->tx_threads == 2);
+ ASSERT3U(tx->tx_threads, ==, 2);
if (txg == 0)
txg = tx->tx_open_txg + TXG_DEFER_SIZE;
if (tx->tx_sync_txg_waiting < txg)
@@ -673,7 +673,7 @@ txg_wait_open(dsl_pool_t *dp, uint64_t txg)
ASSERT(!dsl_pool_config_held(dp));
mutex_enter(&tx->tx_sync_lock);
- ASSERT(tx->tx_threads == 2);
+ ASSERT3U(tx->tx_threads, ==, 2);
if (txg == 0)
txg = tx->tx_open_txg + 1;
if (tx->tx_quiesce_txg_waiting < txg)
diff --git a/module/zfs/zfs_vnops.c b/module/zfs/zfs_vnops.c
index a60cd6198..977035fd9 100644
--- a/module/zfs/zfs_vnops.c
+++ b/module/zfs/zfs_vnops.c
@@ -78,6 +78,7 @@
#include <sys/cred.h>
#include <sys/attr.h>
#include <sys/zpl.h>
+#include <sys/zil.h>
/*
* Programming rules.
@@ -983,7 +984,7 @@ zfs_get_done(zgd_t *zgd, int error)
zfs_iput_async(ZTOI(zp));
if (error == 0 && zgd->zgd_bp)
- zil_add_block(zgd->zgd_zilog, zgd->zgd_bp);
+ zil_lwb_add_block(zgd->zgd_lwb, zgd->zgd_bp);
kmem_free(zgd, sizeof (zgd_t));
}
@@ -996,7 +997,7 @@ static int zil_fault_io = 0;
* Get data to generate a TX_WRITE intent log record.
*/
int
-zfs_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
+zfs_get_data(void *arg, lr_write_t *lr, char *buf, struct lwb *lwb, zio_t *zio)
{
zfsvfs_t *zfsvfs = arg;
objset_t *os = zfsvfs->z_os;
@@ -1008,8 +1009,9 @@ zfs_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
zgd_t *zgd;
int error = 0;
- ASSERT(zio != NULL);
- ASSERT(size != 0);
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT3P(zio, !=, NULL);
+ ASSERT3U(size, !=, 0);
/*
* Nothing to do if the file has been removed
@@ -1026,7 +1028,7 @@ zfs_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
}
zgd = (zgd_t *)kmem_zalloc(sizeof (zgd_t), KM_SLEEP);
- zgd->zgd_zilog = zfsvfs->z_log;
+ zgd->zgd_lwb = lwb;
zgd->zgd_private = zp;
/*
diff --git a/module/zfs/zil.c b/module/zfs/zil.c
index 9f4312a5a..5300b3408 100644
--- a/module/zfs/zil.c
+++ b/module/zfs/zil.c
@@ -44,32 +44,53 @@
#include <sys/abd.h>
/*
- * The zfs intent log (ZIL) saves transaction records of system calls
- * that change the file system in memory with enough information
- * to be able to replay them. These are stored in memory until
- * either the DMU transaction group (txg) commits them to the stable pool
- * and they can be discarded, or they are flushed to the stable log
- * (also in the pool) due to a fsync, O_DSYNC or other synchronous
- * requirement. In the event of a panic or power fail then those log
- * records (transactions) are replayed.
+ * The ZFS Intent Log (ZIL) saves "transaction records" (itxs) of system
+ * calls that change the file system. Each itx has enough information to
+ * be able to replay them after a system crash, power loss, or
+ * equivalent failure mode. These are stored in memory until either:
*
- * There is one ZIL per file system. Its on-disk (pool) format consists
- * of 3 parts:
+ * 1. they are committed to the pool by the DMU transaction group
+ * (txg), at which point they can be discarded; or
+ * 2. they are committed to the on-disk ZIL for the dataset being
+ * modified (e.g. due to an fsync, O_DSYNC, or other synchronous
+ * requirement).
*
- * - ZIL header
- * - ZIL blocks
- * - ZIL records
+ * In the event of a crash or power loss, the itxs contained by each
+ * dataset's on-disk ZIL will be replayed when that dataset is first
+ * instantiated (e.g. if the dataset is a normal filesystem, when it is
+ * first mounted).
*
- * A log record holds a system call transaction. Log blocks can
- * hold many log records and the blocks are chained together.
- * Each ZIL block contains a block pointer (blkptr_t) to the next
- * ZIL block in the chain. The ZIL header points to the first
- * block in the chain. Note there is not a fixed place in the pool
- * to hold blocks. They are dynamically allocated and freed as
- * needed from the blocks available. Figure X shows the ZIL structure:
+ * As hinted at above, there is one ZIL per dataset (both the in-memory
+ * representation, and the on-disk representation). The on-disk format
+ * consists of 3 parts:
+ *
+ * - a single, per-dataset, ZIL header; which points to a chain of
+ * - zero or more ZIL blocks; each of which contains
+ * - zero or more ZIL records
+ *
+ * A ZIL record holds the information necessary to replay a single
+ * system call transaction. A ZIL block can hold many ZIL records, and
+ * the blocks are chained together, similarly to a singly linked list.
+ *
+ * Each ZIL block contains a block pointer (blkptr_t) to the next ZIL
+ * block in the chain, and the ZIL header points to the first block in
+ * the chain.
+ *
+ * Note, there is not a fixed place in the pool to hold these ZIL
+ * blocks; they are dynamically allocated and freed as needed from the
+ * blocks available on the pool, though they can be preferentially
+ * allocated from a dedicated "log" vdev.
*/
/*
+ * This controls the amount of time that a ZIL block (lwb) will remain
+ * "open" when it isn't "full", and it has a thread waiting for it to be
+ * committed to stable storage. Please refer to the zil_commit_waiter()
+ * function (and the comments within it) for more details.
+ */
+int zfs_commit_timeout_pct = 5;
+
+/*
* See zil.h for more information about these fields.
*/
zil_stats_t zil_stats = {
@@ -110,6 +131,7 @@ int zfs_nocacheflush = 0;
unsigned long zil_slog_bulk = 768 * 1024;
static kmem_cache_t *zil_lwb_cache;
+static kmem_cache_t *zil_zcw_cache;
static void zil_async_to_sync(zilog_t *zilog, uint64_t foid);
@@ -476,6 +498,15 @@ zil_free_log_record(zilog_t *zilog, lr_t *lrc, void *tx, uint64_t claim_txg)
return (0);
}
+static int
+zil_lwb_vdev_compare(const void *x1, const void *x2)
+{
+ const uint64_t v1 = ((zil_vdev_node_t *)x1)->zv_vdev;
+ const uint64_t v2 = ((zil_vdev_node_t *)x2)->zv_vdev;
+
+ return (AVL_CMP(v1, v2));
+}
+
static lwb_t *
zil_alloc_lwb(zilog_t *zilog, blkptr_t *bp, boolean_t slog, uint64_t txg,
boolean_t fastwrite)
@@ -487,10 +518,13 @@ zil_alloc_lwb(zilog_t *zilog, blkptr_t *bp, boolean_t slog, uint64_t txg,
lwb->lwb_blk = *bp;
lwb->lwb_fastwrite = fastwrite;
lwb->lwb_slog = slog;
+ lwb->lwb_state = LWB_STATE_CLOSED;
lwb->lwb_buf = zio_buf_alloc(BP_GET_LSIZE(bp));
lwb->lwb_max_txg = txg;
- lwb->lwb_zio = NULL;
+ lwb->lwb_write_zio = NULL;
+ lwb->lwb_root_zio = NULL;
lwb->lwb_tx = NULL;
+ lwb->lwb_issued_timestamp = 0;
if (BP_GET_CHECKSUM(bp) == ZIO_CHECKSUM_ZILOG2) {
lwb->lwb_nused = sizeof (zil_chain_t);
lwb->lwb_sz = BP_GET_LSIZE(bp);
@@ -503,9 +537,64 @@ zil_alloc_lwb(zilog_t *zilog, blkptr_t *bp, boolean_t slog, uint64_t txg,
list_insert_tail(&zilog->zl_lwb_list, lwb);
mutex_exit(&zilog->zl_lock);
+ ASSERT(!MUTEX_HELD(&lwb->lwb_vdev_lock));
+ ASSERT(avl_is_empty(&lwb->lwb_vdev_tree));
+ ASSERT(list_is_empty(&lwb->lwb_waiters));
+ ASSERT(list_is_empty(&lwb->lwb_itxs));
+
return (lwb);
}
+static void
+zil_free_lwb(zilog_t *zilog, lwb_t *lwb)
+{
+ ASSERT(MUTEX_HELD(&zilog->zl_lock));
+ ASSERT(!MUTEX_HELD(&lwb->lwb_vdev_lock));
+ ASSERT(list_is_empty(&lwb->lwb_waiters));
+
+ if (lwb->lwb_state == LWB_STATE_OPENED) {
+ avl_tree_t *t = &lwb->lwb_vdev_tree;
+ void *cookie = NULL;
+ zil_vdev_node_t *zv;
+ itx_t *itx;
+
+ while ((zv = avl_destroy_nodes(t, &cookie)) != NULL)
+ kmem_free(zv, sizeof (*zv));
+
+ while ((itx = list_head(&lwb->lwb_itxs)) != NULL) {
+ if (itx->itx_callback != NULL)
+ itx->itx_callback(itx->itx_callback_data);
+ list_remove(&lwb->lwb_itxs, itx);
+ zil_itx_destroy(itx);
+ }
+
+ ASSERT3P(lwb->lwb_root_zio, !=, NULL);
+ ASSERT3P(lwb->lwb_write_zio, !=, NULL);
+
+ zio_cancel(lwb->lwb_root_zio);
+ zio_cancel(lwb->lwb_write_zio);
+
+ lwb->lwb_root_zio = NULL;
+ lwb->lwb_write_zio = NULL;
+ } else {
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_ISSUED);
+ }
+
+ ASSERT(avl_is_empty(&lwb->lwb_vdev_tree));
+ ASSERT(list_is_empty(&lwb->lwb_itxs));
+ ASSERT3P(lwb->lwb_write_zio, ==, NULL);
+ ASSERT3P(lwb->lwb_root_zio, ==, NULL);
+
+ /*
+ * Clear the zilog's field to indicate this lwb is no longer
+ * valid, and prevent use-after-free errors.
+ */
+ if (zilog->zl_last_lwb_opened == lwb)
+ zilog->zl_last_lwb_opened = NULL;
+
+ kmem_cache_free(zil_lwb_cache, lwb);
+}
+
/*
* Called when we create in-memory log transactions so that we know
* to cleanup the itxs at the end of spa_sync().
@@ -516,12 +605,16 @@ zilog_dirty(zilog_t *zilog, uint64_t txg)
dsl_pool_t *dp = zilog->zl_dmu_pool;
dsl_dataset_t *ds = dmu_objset_ds(zilog->zl_os);
+ ASSERT(spa_writeable(zilog->zl_spa));
+
if (ds->ds_is_snapshot)
panic("dirtying snapshot!");
if (txg_list_add(&dp->dp_dirty_zilogs, zilog, txg)) {
/* up the hold count until we can be written out */
dmu_buf_add_ref(ds->ds_dbuf, zilog);
+
+ zilog->zl_dirty_max_txg = MAX(txg, zilog->zl_dirty_max_txg);
}
}
@@ -590,7 +683,7 @@ zil_create(zilog_t *zilog)
*/
if (BP_IS_HOLE(&blk) || BP_SHOULD_BYTESWAP(&blk)) {
tx = dmu_tx_create(zilog->zl_os);
- VERIFY(dmu_tx_assign(tx, TXG_WAIT) == 0);
+ VERIFY0(dmu_tx_assign(tx, TXG_WAIT));
dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx);
txg = dmu_tx_get_txg(tx);
@@ -608,7 +701,7 @@ zil_create(zilog_t *zilog)
}
/*
- * Allocate a log write buffer (lwb) for the first log block.
+ * Allocate a log write block (lwb) for the first log block.
*/
if (error == 0)
lwb = zil_alloc_lwb(zilog, &blk, slog, txg, fastwrite);
@@ -629,13 +722,13 @@ zil_create(zilog_t *zilog)
}
/*
- * In one tx, free all log blocks and clear the log header.
- * If keep_first is set, then we're replaying a log with no content.
- * We want to keep the first block, however, so that the first
- * synchronous transaction doesn't require a txg_wait_synced()
- * in zil_create(). We don't need to txg_wait_synced() here either
- * when keep_first is set, because both zil_create() and zil_destroy()
- * will wait for any in-progress destroys to complete.
+ * In one tx, free all log blocks and clear the log header. If keep_first
+ * is set, then we're replaying a log with no content. We want to keep the
+ * first block, however, so that the first synchronous transaction doesn't
+ * require a txg_wait_synced() in zil_create(). We don't need to
+ * txg_wait_synced() here either when keep_first is set, because both
+ * zil_create() and zil_destroy() will wait for any in-progress destroys
+ * to complete.
*/
void
zil_destroy(zilog_t *zilog, boolean_t keep_first)
@@ -656,7 +749,7 @@ zil_destroy(zilog_t *zilog, boolean_t keep_first)
return;
tx = dmu_tx_create(zilog->zl_os);
- VERIFY(dmu_tx_assign(tx, TXG_WAIT) == 0);
+ VERIFY0(dmu_tx_assign(tx, TXG_WAIT));
dsl_dataset_dirty(dmu_objset_ds(zilog->zl_os), tx);
txg = dmu_tx_get_txg(tx);
@@ -670,15 +763,15 @@ zil_destroy(zilog_t *zilog, boolean_t keep_first)
ASSERT(zh->zh_claim_txg == 0);
VERIFY(!keep_first);
while ((lwb = list_head(&zilog->zl_lwb_list)) != NULL) {
- ASSERT(lwb->lwb_zio == NULL);
if (lwb->lwb_fastwrite)
metaslab_fastwrite_unmark(zilog->zl_spa,
&lwb->lwb_blk);
+
list_remove(&zilog->zl_lwb_list, lwb);
if (lwb->lwb_buf != NULL)
zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
- zio_free_zil(zilog->zl_spa, txg, &lwb->lwb_blk);
- kmem_cache_free(zil_lwb_cache, lwb);
+ zio_free(zilog->zl_spa, txg, &lwb->lwb_blk);
+ zil_free_lwb(zilog, lwb);
}
} else if (!keep_first) {
zil_destroy_sync(zilog, tx);
@@ -820,19 +913,64 @@ zil_check_log_chain(dsl_pool_t *dp, dsl_dataset_t *ds, void *tx)
return ((error == ECKSUM || error == ENOENT) ? 0 : error);
}
-static int
-zil_vdev_compare(const void *x1, const void *x2)
+/*
+ * When an itx is "skipped", this function is used to properly mark the
+ * waiter as "done", and signal any thread(s) waiting on it. An itx can
+ * be skipped (and not committed to an lwb) for a variety of reasons,
+ * one of them being that the itx was committed via spa_sync(), prior to
+ * it being committed to an lwb; this can happen if a thread calling
+ * zil_commit() is racing with spa_sync().
+ */
+static void
+zil_commit_waiter_skip(zil_commit_waiter_t *zcw)
{
- const uint64_t v1 = ((zil_vdev_node_t *)x1)->zv_vdev;
- const uint64_t v2 = ((zil_vdev_node_t *)x2)->zv_vdev;
+ mutex_enter(&zcw->zcw_lock);
+ ASSERT3B(zcw->zcw_done, ==, B_FALSE);
+ zcw->zcw_done = B_TRUE;
+ cv_broadcast(&zcw->zcw_cv);
+ mutex_exit(&zcw->zcw_lock);
+}
- return (AVL_CMP(v1, v2));
+/*
+ * This function is used when the given waiter is to be linked into an
+ * lwb's "lwb_waiter" list; i.e. when the itx is committed to the lwb.
+ * At this point, the waiter will no longer be referenced by the itx,
+ * and instead, will be referenced by the lwb.
+ */
+static void
+zil_commit_waiter_link_lwb(zil_commit_waiter_t *zcw, lwb_t *lwb)
+{
+ mutex_enter(&zcw->zcw_lock);
+ ASSERT(!list_link_active(&zcw->zcw_node));
+ ASSERT3P(zcw->zcw_lwb, ==, NULL);
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT(lwb->lwb_state == LWB_STATE_OPENED ||
+ lwb->lwb_state == LWB_STATE_ISSUED);
+
+ list_insert_tail(&lwb->lwb_waiters, zcw);
+ zcw->zcw_lwb = lwb;
+ mutex_exit(&zcw->zcw_lock);
+}
+
+/*
+ * This function is used when zio_alloc_zil() fails to allocate a ZIL
+ * block, and the given waiter must be linked to the "nolwb waiters"
+ * list inside of zil_process_commit_list().
+ */
+static void
+zil_commit_waiter_link_nolwb(zil_commit_waiter_t *zcw, list_t *nolwb)
+{
+ mutex_enter(&zcw->zcw_lock);
+ ASSERT(!list_link_active(&zcw->zcw_node));
+ ASSERT3P(zcw->zcw_lwb, ==, NULL);
+ list_insert_tail(nolwb, zcw);
+ mutex_exit(&zcw->zcw_lock);
}
void
-zil_add_block(zilog_t *zilog, const blkptr_t *bp)
+zil_lwb_add_block(lwb_t *lwb, const blkptr_t *bp)
{
- avl_tree_t *t = &zilog->zl_vdev_tree;
+ avl_tree_t *t = &lwb->lwb_vdev_tree;
avl_index_t where;
zil_vdev_node_t *zv, zvsearch;
int ndvas = BP_GET_NDVAS(bp);
@@ -841,14 +979,7 @@ zil_add_block(zilog_t *zilog, const blkptr_t *bp)
if (zfs_nocacheflush)
return;
- ASSERT(zilog->zl_writer);
-
- /*
- * Even though we're zl_writer, we still need a lock because the
- * zl_get_data() callbacks may have dmu_sync() done callbacks
- * that will run concurrently.
- */
- mutex_enter(&zilog->zl_vdev_lock);
+ mutex_enter(&lwb->lwb_vdev_lock);
for (i = 0; i < ndvas; i++) {
zvsearch.zv_vdev = DVA_GET_VDEV(&bp->blk_dva[i]);
if (avl_find(t, &zvsearch, &where) == NULL) {
@@ -857,56 +988,122 @@ zil_add_block(zilog_t *zilog, const blkptr_t *bp)
avl_insert(t, zv, where);
}
}
- mutex_exit(&zilog->zl_vdev_lock);
+ mutex_exit(&lwb->lwb_vdev_lock);
}
+void
+zil_lwb_add_txg(lwb_t *lwb, uint64_t txg)
+{
+ lwb->lwb_max_txg = MAX(lwb->lwb_max_txg, txg);
+}
+
+/*
+ * This function is called after all VDEVs associated with a given lwb
+ * write have completed their DKIOCFLUSHWRITECACHE command; or as soon
+ * as the lwb write completes, if "zfs_nocacheflush" is set.
+ *
+ * The intention is for this function to be called as soon as the
+ * contents of an lwb are considered "stable" on disk, and will survive
+ * any sudden loss of power. At this point, any threads waiting for the
+ * lwb to reach this state are signalled, and the "waiter" structures
+ * are marked "done".
+ */
static void
-zil_flush_vdevs(zilog_t *zilog)
+zil_lwb_flush_vdevs_done(zio_t *zio)
{
- spa_t *spa = zilog->zl_spa;
- avl_tree_t *t = &zilog->zl_vdev_tree;
- void *cookie = NULL;
- zil_vdev_node_t *zv;
- zio_t *zio;
+ lwb_t *lwb = zio->io_private;
+ zilog_t *zilog = lwb->lwb_zilog;
+ dmu_tx_t *tx = lwb->lwb_tx;
+ zil_commit_waiter_t *zcw;
+ itx_t *itx;
+
+ spa_config_exit(zilog->zl_spa, SCL_STATE, lwb);
+
+ zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
- ASSERT(zilog->zl_writer);
+ mutex_enter(&zilog->zl_lock);
/*
- * We don't need zl_vdev_lock here because we're the zl_writer,
- * and all zl_get_data() callbacks are done.
+ * Ensure the lwb buffer pointer is cleared before releasing the
+ * txg. If we have had an allocation failure and the txg is
+ * waiting to sync then we want zil_sync() to remove the lwb so
+ * that it's not picked up as the next new one in
+ * zil_process_commit_list(). zil_sync() will only remove the
+ * lwb if lwb_buf is null.
*/
- if (avl_numnodes(t) == 0)
- return;
+ lwb->lwb_buf = NULL;
+ lwb->lwb_tx = NULL;
- spa_config_enter(spa, SCL_STATE, FTAG, RW_READER);
+ ASSERT3U(lwb->lwb_issued_timestamp, >, 0);
+ zilog->zl_last_lwb_latency = gethrtime() - lwb->lwb_issued_timestamp;
- zio = zio_root(spa, NULL, NULL, ZIO_FLAG_CANFAIL);
+ lwb->lwb_root_zio = NULL;
+ lwb->lwb_state = LWB_STATE_DONE;
- while ((zv = avl_destroy_nodes(t, &cookie)) != NULL) {
- vdev_t *vd = vdev_lookup_top(spa, zv->zv_vdev);
- if (vd != NULL)
- zio_flush(zio, vd);
- kmem_free(zv, sizeof (*zv));
+ if (zilog->zl_last_lwb_opened == lwb) {
+ /*
+ * Remember the highest committed log sequence number
+ * for ztest. We only update this value when all the log
+ * writes succeeded, because ztest wants to ASSERT that
+ * it got the whole log chain.
+ */
+ zilog->zl_commit_lr_seq = zilog->zl_lr_seq;
+ }
+
+ while ((itx = list_head(&lwb->lwb_itxs)) != NULL) {
+ list_remove(&lwb->lwb_itxs, itx);
+ zil_itx_destroy(itx);
+ }
+
+ while ((zcw = list_head(&lwb->lwb_waiters)) != NULL) {
+ mutex_enter(&zcw->zcw_lock);
+
+ ASSERT(list_link_active(&zcw->zcw_node));
+ list_remove(&lwb->lwb_waiters, zcw);
+
+ ASSERT3P(zcw->zcw_lwb, ==, lwb);
+ zcw->zcw_lwb = NULL;
+
+ zcw->zcw_zio_error = zio->io_error;
+
+ ASSERT3B(zcw->zcw_done, ==, B_FALSE);
+ zcw->zcw_done = B_TRUE;
+ cv_broadcast(&zcw->zcw_cv);
+
+ mutex_exit(&zcw->zcw_lock);
}
+ mutex_exit(&zilog->zl_lock);
+
/*
- * Wait for all the flushes to complete. Not all devices actually
- * support the DKIOCFLUSHWRITECACHE ioctl, so it's OK if it fails.
+ * Now that we've written this log block, we have a stable pointer
+ * to the next block in the chain, so it's OK to let the txg in
+ * which we allocated the next block sync.
*/
- (void) zio_wait(zio);
-
- spa_config_exit(spa, SCL_STATE, FTAG);
+ dmu_tx_commit(tx);
}
/*
- * Function called when a log block write completes
+ * This is called when an lwb write completes. This means this specific
+ * lwb was written to disk, and all dependent lwbs have also been
+ * written to disk.
+ *
+ * At this point, a DKIOCFLUSHWRITECACHE command hasn't been issued to
+ * the VDEVs involved in writing out this specific lwb. The lwb will be
+ * "done" once zil_lwb_flush_vdevs_done() is called, which occurs in the
+ * zio completion callback for the lwb's root zio.
*/
static void
zil_lwb_write_done(zio_t *zio)
{
lwb_t *lwb = zio->io_private;
+ spa_t *spa = zio->io_spa;
zilog_t *zilog = lwb->lwb_zilog;
- dmu_tx_t *tx = lwb->lwb_tx;
+ avl_tree_t *t = &lwb->lwb_vdev_tree;
+ void *cookie = NULL;
+ zil_vdev_node_t *zv;
+
+ ASSERT3S(spa_config_held(spa, SCL_STATE, RW_READER), !=, 0);
ASSERT(BP_GET_COMPRESS(zio->io_bp) == ZIO_COMPRESS_OFF);
ASSERT(BP_GET_TYPE(zio->io_bp) == DMU_OT_INTENT_LOG);
@@ -916,69 +1113,121 @@ zil_lwb_write_done(zio_t *zio)
ASSERT(!BP_IS_HOLE(zio->io_bp));
ASSERT(BP_GET_FILL(zio->io_bp) == 0);
- /*
- * Ensure the lwb buffer pointer is cleared before releasing
- * the txg. If we have had an allocation failure and
- * the txg is waiting to sync then we want want zil_sync()
- * to remove the lwb so that it's not picked up as the next new
- * one in zil_commit_writer(). zil_sync() will only remove
- * the lwb if lwb_buf is null.
- */
abd_put(zio->io_abd);
- zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
+
+ ASSERT3S(lwb->lwb_state, ==, LWB_STATE_ISSUED);
+
mutex_enter(&zilog->zl_lock);
- lwb->lwb_zio = NULL;
+ lwb->lwb_write_zio = NULL;
lwb->lwb_fastwrite = FALSE;
- lwb->lwb_buf = NULL;
- lwb->lwb_tx = NULL;
mutex_exit(&zilog->zl_lock);
+ if (avl_numnodes(t) == 0)
+ return;
+
/*
- * Now that we've written this log block, we have a stable pointer
- * to the next block in the chain, so it's OK to let the txg in
- * which we allocated the next block sync.
+ * If there was an IO error, we're not going to call zio_flush()
+ * on these vdevs, so we simply empty the tree and free the
+ * nodes. We avoid calling zio_flush() since there isn't any
+ * good reason for doing so, after the lwb block failed to be
+ * written out.
*/
- dmu_tx_commit(tx);
+ if (zio->io_error != 0) {
+ while ((zv = avl_destroy_nodes(t, &cookie)) != NULL)
+ kmem_free(zv, sizeof (*zv));
+ return;
+ }
+
+ while ((zv = avl_destroy_nodes(t, &cookie)) != NULL) {
+ vdev_t *vd = vdev_lookup_top(spa, zv->zv_vdev);
+ if (vd != NULL)
+ zio_flush(lwb->lwb_root_zio, vd);
+ kmem_free(zv, sizeof (*zv));
+ }
}
/*
- * Initialize the io for a log block.
+ * This function's purpose is to "open" an lwb such that it is ready to
+ * accept new itxs being committed to it. To do this, the lwb's zio
+ * structures are created, and linked to the lwb. This function is
+ * idempotent; if the passed in lwb has already been opened, this
+ * function is essentially a no-op.
*/
static void
-zil_lwb_write_init(zilog_t *zilog, lwb_t *lwb)
+zil_lwb_write_open(zilog_t *zilog, lwb_t *lwb)
{
zbookmark_phys_t zb;
zio_priority_t prio;
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
+ ASSERT3P(lwb, !=, NULL);
+ EQUIV(lwb->lwb_root_zio == NULL, lwb->lwb_state == LWB_STATE_CLOSED);
+ EQUIV(lwb->lwb_root_zio != NULL, lwb->lwb_state == LWB_STATE_OPENED);
+
SET_BOOKMARK(&zb, lwb->lwb_blk.blk_cksum.zc_word[ZIL_ZC_OBJSET],
ZB_ZIL_OBJECT, ZB_ZIL_LEVEL,
lwb->lwb_blk.blk_cksum.zc_word[ZIL_ZC_SEQ]);
- if (zilog->zl_root_zio == NULL) {
- zilog->zl_root_zio = zio_root(zilog->zl_spa, NULL, NULL,
- ZIO_FLAG_CANFAIL);
- }
-
/* Lock so zil_sync() doesn't fastwrite_unmark after zio is created */
mutex_enter(&zilog->zl_lock);
- if (lwb->lwb_zio == NULL) {
+ if (lwb->lwb_root_zio == NULL) {
abd_t *lwb_abd = abd_get_from_buf(lwb->lwb_buf,
BP_GET_LSIZE(&lwb->lwb_blk));
+
if (!lwb->lwb_fastwrite) {
metaslab_fastwrite_mark(zilog->zl_spa, &lwb->lwb_blk);
lwb->lwb_fastwrite = 1;
}
+
if (!lwb->lwb_slog || zilog->zl_cur_used <= zil_slog_bulk)
prio = ZIO_PRIORITY_SYNC_WRITE;
else
prio = ZIO_PRIORITY_ASYNC_WRITE;
- lwb->lwb_zio = zio_rewrite(zilog->zl_root_zio, zilog->zl_spa,
- 0, &lwb->lwb_blk, lwb_abd, BP_GET_LSIZE(&lwb->lwb_blk),
- zil_lwb_write_done, lwb, prio,
- ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE |
+
+ lwb->lwb_root_zio = zio_root(zilog->zl_spa,
+ zil_lwb_flush_vdevs_done, lwb, ZIO_FLAG_CANFAIL);
+ ASSERT3P(lwb->lwb_root_zio, !=, NULL);
+
+ lwb->lwb_write_zio = zio_rewrite(lwb->lwb_root_zio,
+ zilog->zl_spa, 0, &lwb->lwb_blk, lwb_abd,
+ BP_GET_LSIZE(&lwb->lwb_blk), zil_lwb_write_done, lwb,
+ prio, ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE |
ZIO_FLAG_FASTWRITE, &zb);
+ ASSERT3P(lwb->lwb_write_zio, !=, NULL);
+
+ lwb->lwb_state = LWB_STATE_OPENED;
+
+ /*
+ * The zilog's "zl_last_lwb_opened" field is used to
+ * build the lwb/zio dependency chain, which is used to
+ * preserve the ordering of lwb completions that is
+ * required by the semantics of the ZIL. Each new lwb
+ * zio becomes a parent of the "previous" lwb zio, such
+ * that the new lwb's zio cannot complete until the
+ * "previous" lwb's zio completes.
+ *
+ * This is required by the semantics of zil_commit();
+ * the commit waiters attached to the lwbs will be woken
+ * in the lwb zio's completion callback, so this zio
+ * dependency graph ensures the waiters are woken in the
+ * correct order (the same order the lwbs were created).
+ */
+ lwb_t *last_lwb_opened = zilog->zl_last_lwb_opened;
+ if (last_lwb_opened != NULL &&
+ last_lwb_opened->lwb_state != LWB_STATE_DONE) {
+ ASSERT(last_lwb_opened->lwb_state == LWB_STATE_OPENED ||
+ last_lwb_opened->lwb_state == LWB_STATE_ISSUED);
+ ASSERT3P(last_lwb_opened->lwb_root_zio, !=, NULL);
+ zio_add_child(lwb->lwb_root_zio,
+ last_lwb_opened->lwb_root_zio);
+ }
+ zilog->zl_last_lwb_opened = lwb;
}
mutex_exit(&zilog->zl_lock);
+
+ ASSERT3P(lwb->lwb_root_zio, !=, NULL);
+ ASSERT3P(lwb->lwb_write_zio, !=, NULL);
+ ASSERT3S(lwb->lwb_state, ==, LWB_STATE_OPENED);
}
/*
@@ -1000,7 +1249,7 @@ uint64_t zil_block_buckets[] = {
* Calls are serialized.
*/
static lwb_t *
-zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
+zil_lwb_write_issue(zilog_t *zilog, lwb_t *lwb)
{
lwb_t *nlwb = NULL;
zil_chain_t *zilc;
@@ -1012,6 +1261,11 @@ zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
int i, error;
boolean_t slog;
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
+ ASSERT3P(lwb->lwb_root_zio, !=, NULL);
+ ASSERT3P(lwb->lwb_write_zio, !=, NULL);
+ ASSERT3S(lwb->lwb_state, ==, LWB_STATE_OPENED);
+
if (BP_GET_CHECKSUM(&lwb->lwb_blk) == ZIO_CHECKSUM_ZILOG2) {
zilc = (zil_chain_t *)lwb->lwb_buf;
bp = &zilc->zc_next_blk;
@@ -1031,6 +1285,7 @@ zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
* We dirty the dataset to ensure that zil_sync() will be called
* to clean up in the event of allocation failure or I/O failure.
*/
+
tx = dmu_tx_create(zilog->zl_os);
/*
@@ -1046,7 +1301,7 @@ zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
*/
error = dmu_tx_assign(tx, TXG_WAITED);
if (error != 0) {
- ASSERT3S(error, ==, EIO);
+ ASSERT(error == EIO || error == ERESTART);
dmu_tx_abort(tx);
return (NULL);
}
@@ -1097,19 +1352,16 @@ zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
bp->blk_cksum.zc_word[ZIL_ZC_SEQ]++;
/*
- * Allocate a new log write buffer (lwb).
+ * Allocate a new log write block (lwb).
*/
nlwb = zil_alloc_lwb(zilog, bp, slog, txg, TRUE);
-
- /* Record the block for later vdev flushing */
- zil_add_block(zilog, &lwb->lwb_blk);
}
if (BP_GET_CHECKSUM(&lwb->lwb_blk) == ZIO_CHECKSUM_ZILOG2) {
/* For Slim ZIL only write what is used. */
wsz = P2ROUNDUP_TYPED(lwb->lwb_nused, ZIL_MIN_BLKSZ, uint64_t);
ASSERT3U(wsz, <=, lwb->lwb_sz);
- zio_shrink(lwb->lwb_zio, wsz);
+ zio_shrink(lwb->lwb_write_zio, wsz);
} else {
wsz = lwb->lwb_sz;
@@ -1124,7 +1376,14 @@ zil_lwb_write_start(zilog_t *zilog, lwb_t *lwb)
*/
bzero(lwb->lwb_buf + lwb->lwb_nused, wsz - lwb->lwb_nused);
- zio_nowait(lwb->lwb_zio); /* Kick off the write for the old log block */
+ spa_config_enter(zilog->zl_spa, SCL_STATE, lwb, RW_READER);
+
+ zil_lwb_add_block(lwb, &lwb->lwb_blk);
+ lwb->lwb_issued_timestamp = gethrtime();
+ lwb->lwb_state = LWB_STATE_ISSUED;
+
+ zio_nowait(lwb->lwb_root_zio);
+ zio_nowait(lwb->lwb_write_zio);
/*
* If there was an allocation failure then nlwb will be null which
@@ -1141,13 +1400,33 @@ zil_lwb_commit(zilog_t *zilog, itx_t *itx, lwb_t *lwb)
char *lr_buf;
uint64_t dlen, dnow, lwb_sp, reclen, txg;
- if (lwb == NULL)
- return (NULL);
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT3P(lwb->lwb_buf, !=, NULL);
+
+ zil_lwb_write_open(zilog, lwb);
- ASSERT(lwb->lwb_buf != NULL);
+ lrc = &itx->itx_lr;
+ lrw = (lr_write_t *)lrc;
+
+ /*
+ * A commit itx doesn't represent any on-disk state; instead
+ * it's simply used as a place holder on the commit list, and
+ * provides a mechanism for attaching a "commit waiter" onto the
+ * correct lwb (such that the waiter can be signalled upon
+ * completion of that lwb). Thus, we don't process this itx's
+ * log record if it's a commit itx (these itx's don't have log
+ * records), and instead link the itx's waiter onto the lwb's
+ * list of waiters.
+ *
+ * For more details, see the comment above zil_commit().
+ */
+ if (lrc->lrc_txtype == TX_COMMIT) {
+ zil_commit_waiter_link_lwb(itx->itx_private, lwb);
+ itx->itx_private = NULL;
+ return (lwb);
+ }
- lrc = &itx->itx_lr; /* Common log record inside itx. */
- lrw = (lr_write_t *)lrc; /* Write log record inside itx. */
if (lrc->lrc_txtype == TX_WRITE && itx->itx_wr_state == WR_NEED_COPY) {
dlen = P2ROUNDUP_TYPED(
lrw->lr_length, sizeof (uint64_t), uint64_t);
@@ -1158,7 +1437,7 @@ zil_lwb_commit(zilog_t *zilog, itx_t *itx, lwb_t *lwb)
zilog->zl_cur_used += (reclen + dlen);
txg = lrc->lrc_txg;
- zil_lwb_write_init(zilog, lwb);
+ ASSERT3U(zilog->zl_cur_used, <, UINT64_MAX - (reclen + dlen));
cont:
/*
@@ -1169,10 +1448,10 @@ cont:
if (reclen > lwb_sp || (reclen + dlen > lwb_sp &&
lwb_sp < ZIL_MAX_WASTE_SPACE && (dlen % ZIL_MAX_LOG_DATA == 0 ||
lwb_sp < reclen + dlen % ZIL_MAX_LOG_DATA))) {
- lwb = zil_lwb_write_start(zilog, lwb);
+ lwb = zil_lwb_write_issue(zilog, lwb);
if (lwb == NULL)
return (NULL);
- zil_lwb_write_init(zilog, lwb);
+ zil_lwb_write_open(zilog, lwb);
ASSERT(LWB_EMPTY(lwb));
lwb_sp = lwb->lwb_sz - lwb->lwb_nused;
ASSERT3U(reclen + MIN(dlen, sizeof (uint64_t)), <=, lwb_sp);
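The record-fit test in the hunk above decides whether a record (its header plus any WR_NEED_COPY data) still fits in the open lwb, or whether the lwb must first be issued and a new one opened. Below is a minimal user-space sketch of that predicate; the two constants are illustrative placeholders, not the kernel's actual ZIL_MAX_WASTE_SPACE / ZIL_MAX_LOG_DATA definitions.

/*
 * Stand-alone model of the "does this record fit in the open lwb"
 * test used by zil_lwb_commit(). The constants are placeholders; the
 * real values are derived from SPA block sizes.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define	MAX_LOG_DATA	(64 * 1024)		/* placeholder */
#define	MAX_WASTE_SPACE	(MAX_LOG_DATA / 8)	/* placeholder */

static bool
must_issue_lwb(uint64_t reclen, uint64_t dlen, uint64_t lwb_sp)
{
	/*
	 * Issue the current lwb (and open a new one) when either the
	 * record header itself doesn't fit, or the header plus data
	 * doesn't fit and splitting the record isn't worth the wasted
	 * tail space left in the current block.
	 */
	return (reclen > lwb_sp || (reclen + dlen > lwb_sp &&
	    lwb_sp < MAX_WASTE_SPACE && (dlen % MAX_LOG_DATA == 0 ||
	    lwb_sp < reclen + dlen % MAX_LOG_DATA)));
}

int
main(void)
{
	/* A 192-byte record with 8K of WR_NEED_COPY data, 4K left in lwb. */
	printf("issue? %d\n", must_issue_lwb(192, 8192, 4096));
	/* Same record with 16K left: it fits, keep filling the open lwb. */
	printf("issue? %d\n", must_issue_lwb(192, 8192, 16384));
	return (0);
}

With 4K of space left, the 8K record forces the lwb to be issued; with 16K left, the record is simply appended to the open lwb.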
@@ -1210,14 +1489,31 @@ cont:
ZIL_STAT_INCR(zil_itx_needcopy_bytes,
lrw->lr_length);
} else {
- ASSERT(itx->itx_wr_state == WR_INDIRECT);
+ ASSERT3S(itx->itx_wr_state, ==, WR_INDIRECT);
dbuf = NULL;
ZIL_STAT_BUMP(zil_itx_indirect_count);
ZIL_STAT_INCR(zil_itx_indirect_bytes,
lrw->lr_length);
}
- error = zilog->zl_get_data(
- itx->itx_private, lrwb, dbuf, lwb->lwb_zio);
+
+ /*
+ * We pass in the "lwb_write_zio" rather than
+ * "lwb_root_zio" so that the "lwb_write_zio"
+ * becomes the parent of any zio's created by
+ * the "zl_get_data" callback. The vdevs are
+ * flushed after the "lwb_write_zio" completes,
+ * so we want to make sure that the completion
+ * callback waits for these additional zio's,
+ * such that the vdevs used by those zio's will
+ * be included in the lwb's vdev tree, and those
+ * vdevs will be properly flushed. If we passed
+ * in "lwb_root_zio" here, then these additional
+ * vdevs may not be flushed; e.g. if these zio's
+ * completed after "lwb_write_zio" completed.
+ */
+ error = zilog->zl_get_data(itx->itx_private,
+ lrwb, dbuf, lwb, lwb->lwb_write_zio);
+
if (error == EIO) {
txg_wait_synced(zilog->zl_dmu_pool, txg);
return (lwb);
@@ -1236,9 +1532,11 @@ cont:
* equal to the itx sequence number because not all transactions
* are synchronous, and sometimes spa_sync() gets there first.
*/
- lrcb->lrc_seq = ++zilog->zl_lr_seq; /* we are single threaded */
+ lrcb->lrc_seq = ++zilog->zl_lr_seq;
lwb->lwb_nused += reclen + dnow;
- lwb->lwb_max_txg = MAX(lwb->lwb_max_txg, txg);
+
+ zil_lwb_add_txg(lwb, txg);
+
ASSERT3U(lwb->lwb_nused, <=, lwb->lwb_sz);
ASSERT0(P2PHASE(lwb->lwb_nused, sizeof (uint64_t)));
@@ -1275,6 +1573,12 @@ zil_itx_create(uint64_t txtype, size_t lrsize)
void
zil_itx_destroy(itx_t *itx)
{
+ IMPLY(itx->itx_lr.lrc_txtype == TX_COMMIT, itx->itx_callback == NULL);
+ IMPLY(itx->itx_callback != NULL, itx->itx_lr.lrc_txtype != TX_COMMIT);
+
+ if (itx->itx_callback != NULL)
+ itx->itx_callback(itx->itx_callback_data);
+
zio_data_buf_free(itx, itx->itx_size);
}
@@ -1293,8 +1597,28 @@ zil_itxg_clean(itxs_t *itxs)
list = &itxs->i_sync_list;
while ((itx = list_head(list)) != NULL) {
- if (itx->itx_callback != NULL)
- itx->itx_callback(itx->itx_callback_data);
+ /*
+ * In the general case, commit itxs will not be found
+ * here, as they'll be committed to an lwb via
+ * zil_lwb_commit(), and free'd in that function. Having
+ * said that, it is still possible for commit itxs to be
+ * found here, due to the following race:
+ *
+ * - a thread calls zil_commit() which assigns the
+ * commit itx to a per-txg i_sync_list
+ * - zil_itxg_clean() is called (e.g. via spa_sync())
+ * while the waiter is still on the i_sync_list
+ *
+ * There's nothing to prevent syncing the txg while the
+ * waiter is on the i_sync_list. This normally doesn't
+ * happen because spa_sync() is slower than zil_commit(),
+ * but if zil_commit() calls txg_wait_synced() (e.g.
+ * because zil_create() or zil_commit_writer_stall() is
+ * called) we will hit this case.
+ */
+ if (itx->itx_lr.lrc_txtype == TX_COMMIT)
+ zil_commit_waiter_skip(itx->itx_private);
+
list_remove(list, itx);
zil_itx_destroy(itx);
}
@@ -1304,9 +1628,9 @@ zil_itxg_clean(itxs_t *itxs)
while ((ian = avl_destroy_nodes(t, &cookie)) != NULL) {
list = &ian->ia_list;
while ((itx = list_head(list)) != NULL) {
- if (itx->itx_callback != NULL)
- itx->itx_callback(itx->itx_callback_data);
list_remove(list, itx);
+ /* commit itxs should never be on the async lists. */
+ ASSERT3U(itx->itx_lr.lrc_txtype, !=, TX_COMMIT);
zil_itx_destroy(itx);
}
list_destroy(list);
@@ -1366,9 +1690,9 @@ zil_remove_async(zilog_t *zilog, uint64_t oid)
mutex_exit(&itxg->itxg_lock);
}
while ((itx = list_head(&clean_list)) != NULL) {
- if (itx->itx_callback != NULL)
- itx->itx_callback(itx->itx_callback_data);
list_remove(&clean_list, itx);
+ /* commit itxs should never be on the async lists. */
+ ASSERT3U(itx->itx_lr.lrc_txtype, !=, TX_COMMIT);
zil_itx_destroy(itx);
}
list_destroy(&clean_list);
@@ -1448,7 +1772,14 @@ zil_itx_assign(zilog_t *zilog, itx_t *itx, dmu_tx_t *tx)
}
itx->itx_lr.lrc_txg = dmu_tx_get_txg(tx);
- zilog_dirty(zilog, txg);
+
+ /*
+ * We don't want to dirty the ZIL using ZILTEST_TXG, because
+ * zil_clean() will never be called using ZILTEST_TXG. Thus, we
+ * need to be careful to always dirty the ZIL using the "real"
+ * TXG (not itxg_txg) even when the SPA is frozen.
+ */
+ zilog_dirty(zilog, dmu_tx_get_txg(tx));
mutex_exit(&itxg->itxg_lock);
/* Release the old itxs now we've dropped the lock */
@@ -1469,6 +1800,8 @@ zil_clean(zilog_t *zilog, uint64_t synced_txg)
itxg_t *itxg = &zilog->zl_itxg[synced_txg & TXG_MASK];
itxs_t *clean_me;
+ ASSERT3U(synced_txg, <, ZILTEST_TXG);
+
mutex_enter(&itxg->itxg_lock);
if (itxg->itxg_itxs == NULL || itxg->itxg_txg == ZILTEST_TXG) {
mutex_exit(&itxg->itxg_lock);
@@ -1495,7 +1828,8 @@ zil_clean(zilog_t *zilog, uint64_t synced_txg)
}
/*
- * Get the list of itxs to commit into zl_itx_commit_list.
+ * This function will traverse the queue of itxs that need to be
+ * committed, and move them onto the ZIL's zl_itx_commit_list.
*/
static void
zil_get_commit_list(zilog_t *zilog)
@@ -1503,6 +1837,8 @@ zil_get_commit_list(zilog_t *zilog)
uint64_t otxg, txg;
list_t *commit_list = &zilog->zl_itx_commit_list;
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
+
if (spa_freeze_txg(zilog->zl_spa) != UINT64_MAX) /* ziltest support */
otxg = ZILTEST_TXG;
else
@@ -1594,154 +1930,809 @@ zil_async_to_sync(zilog_t *zilog, uint64_t foid)
}
}
+/*
+ * This function will prune commit itxs that are at the head of the
+ * commit list (it won't prune past the first non-commit itx), and
+ * either: a) attach them to the last lwb that's still pending
+ * completion, or b) skip them altogether.
+ *
+ * This is used as a performance optimization to prevent commit itxs
+ * from generating new lwbs when it's unnecessary to do so.
+ */
static void
-zil_commit_writer(zilog_t *zilog)
+zil_prune_commit_list(zilog_t *zilog)
{
- uint64_t txg;
itx_t *itx;
- lwb_t *lwb;
- spa_t *spa = zilog->zl_spa;
- int error = 0;
- ASSERT(zilog->zl_root_zio == NULL);
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
- mutex_exit(&zilog->zl_lock);
+ while ((itx = list_head(&zilog->zl_itx_commit_list)) != NULL) {
+ lr_t *lrc = &itx->itx_lr;
+ if (lrc->lrc_txtype != TX_COMMIT)
+ break;
- zil_get_commit_list(zilog);
+ mutex_enter(&zilog->zl_lock);
+
+ lwb_t *last_lwb = zilog->zl_last_lwb_opened;
+ if (last_lwb == NULL || last_lwb->lwb_state == LWB_STATE_DONE) {
+ /*
+ * All of the itxs this waiter was waiting on
+ * must have already completed (or there were
+ * never any itx's for it to wait on), so it's
+ * safe to skip this waiter and mark it done.
+ */
+ zil_commit_waiter_skip(itx->itx_private);
+ } else {
+ zil_commit_waiter_link_lwb(itx->itx_private, last_lwb);
+ itx->itx_private = NULL;
+ }
+
+ mutex_exit(&zilog->zl_lock);
+
+ list_remove(&zilog->zl_itx_commit_list, itx);
+ zil_itx_destroy(itx);
+ }
+
+ IMPLY(itx != NULL, itx->itx_lr.lrc_txtype != TX_COMMIT);
+}
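As a rough model of the pruning just described, here is a small stand-alone sketch; the names and types are invented for illustration and are not the ZIL's data structures. Leading TX_COMMIT entries are either linked to the last still-pending lwb or skipped, and the scan stops as soon as a normal itx is reached.

/*
 * User-space model (not ZFS code) of the pruning pass: leading
 * TX_COMMIT entries are attached to the last still-pending lwb or
 * skipped outright; the scan stops at the first normal itx.
 */
#include <stdbool.h>
#include <stdio.h>

enum itx_type { ITX_COMMIT, ITX_WRITE };

static void
prune_commit_list(const enum itx_type *list, int n, bool last_lwb_pending)
{
	for (int i = 0; i < n; i++) {
		if (list[i] != ITX_COMMIT)
			break;	/* never prune past a normal itx */
		if (last_lwb_pending)
			printf("itx %d: waiter linked to last open lwb\n", i);
		else
			printf("itx %d: waiter skipped (nothing to wait on)\n", i);
	}
}

int
main(void)
{
	enum itx_type list[] =
	    { ITX_COMMIT, ITX_COMMIT, ITX_WRITE, ITX_COMMIT };

	/*
	 * Only the first two commit itxs are pruned; the scan stops at
	 * the write itx, so the trailing commit itx is left in place.
	 */
	prune_commit_list(list, 4, true);
	return (0);
}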
+
+static void
+zil_commit_writer_stall(zilog_t *zilog)
+{
+ /*
+ * When zio_alloc_zil() fails to allocate the next lwb block on
+ * disk, we must call txg_wait_synced() to ensure all of the
+ * lwbs in the zilog's zl_lwb_list are synced and then freed (in
+ * zil_sync()), such that any subsequent ZIL writer (i.e. a call
+ * to zil_process_commit_list()) will have to call zil_create(),
+ * and start a new ZIL chain.
+ *
+ * Since zio_alloc_zil() failed, the lwb that was previously
+ * issued does not have a pointer to the "next" lwb on disk.
+ * Thus, if another ZIL writer thread was to allocate the "next"
+ * on-disk lwb, that block could be leaked in the event of a
+ * crash (because the previous lwb on-disk would not point to
+ * it).
+ *
+ * We must hold the zilog's zl_writer_lock while we do this, to
+ * ensure no new threads enter zil_process_commit_list() until
+ * all lwb's in the zl_lwb_list have been synced and freed
+ * (which is achieved via the txg_wait_synced() call).
+ */
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
+ txg_wait_synced(zilog->zl_dmu_pool, 0);
+ ASSERT3P(list_tail(&zilog->zl_lwb_list), ==, NULL);
+}
+
+/*
+ * This function will traverse the commit list, creating new lwbs as
+ * needed, and committing the itxs from the commit list to these newly
+ * created lwbs. Additionally, as a new lwb is created, the previous
+ * lwb will be issued to the zio layer to be written to disk.
+ */
+static void
+zil_process_commit_list(zilog_t *zilog)
+{
+ spa_t *spa = zilog->zl_spa;
+ list_t nolwb_itxs;
+ list_t nolwb_waiters;
+ lwb_t *lwb;
+ itx_t *itx;
+
+ ASSERT(MUTEX_HELD(&zilog->zl_writer_lock));
/*
* Return if there's nothing to commit before we dirty the fs by
* calling zil_create().
*/
- if (list_head(&zilog->zl_itx_commit_list) == NULL) {
- mutex_enter(&zilog->zl_lock);
+ if (list_head(&zilog->zl_itx_commit_list) == NULL)
return;
- }
- if (zilog->zl_suspend) {
- lwb = NULL;
+ list_create(&nolwb_itxs, sizeof (itx_t), offsetof(itx_t, itx_node));
+ list_create(&nolwb_waiters, sizeof (zil_commit_waiter_t),
+ offsetof(zil_commit_waiter_t, zcw_node));
+
+ lwb = list_tail(&zilog->zl_lwb_list);
+ if (lwb == NULL) {
+ lwb = zil_create(zilog);
} else {
- lwb = list_tail(&zilog->zl_lwb_list);
- if (lwb == NULL)
- lwb = zil_create(zilog);
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_ISSUED);
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_DONE);
}
- DTRACE_PROBE1(zil__cw1, zilog_t *, zilog);
- for (itx = list_head(&zilog->zl_itx_commit_list); itx != NULL;
- itx = list_next(&zilog->zl_itx_commit_list, itx)) {
- txg = itx->itx_lr.lrc_txg;
+ while ((itx = list_head(&zilog->zl_itx_commit_list)) != NULL) {
+ lr_t *lrc = &itx->itx_lr;
+ uint64_t txg = lrc->lrc_txg;
+
ASSERT3U(txg, !=, 0);
+ if (lrc->lrc_txtype == TX_COMMIT) {
+ DTRACE_PROBE2(zil__process__commit__itx,
+ zilog_t *, zilog, itx_t *, itx);
+ } else {
+ DTRACE_PROBE2(zil__process__normal__itx,
+ zilog_t *, zilog, itx_t *, itx);
+ }
+
+ list_remove(&zilog->zl_itx_commit_list, itx);
+
/*
* This is inherently racy and may result in us writing
- * out a log block for a txg that was just synced. This is
- * ok since we'll end cleaning up that log block the next
- * time we call zil_sync().
+ * out a log block for a txg that was just synced. This
+ * is ok since we'll end up cleaning up that log block the
+ * next time we call zil_sync().
*/
- if (txg > spa_last_synced_txg(spa) || txg > spa_freeze_txg(spa))
- lwb = zil_lwb_commit(zilog, itx, lwb);
+ boolean_t synced = txg <= spa_last_synced_txg(spa);
+ boolean_t frozen = txg > spa_freeze_txg(spa);
+
+ if (!synced || frozen) {
+ if (lwb != NULL) {
+ lwb = zil_lwb_commit(zilog, itx, lwb);
+
+ if (lwb == NULL)
+ list_insert_tail(&nolwb_itxs, itx);
+ else
+ list_insert_tail(&lwb->lwb_itxs, itx);
+ } else {
+ if (lrc->lrc_txtype == TX_COMMIT) {
+ zil_commit_waiter_link_nolwb(
+ itx->itx_private, &nolwb_waiters);
+ }
+
+ list_insert_tail(&nolwb_itxs, itx);
+ }
+ } else {
+ /*
+ * If this is a commit itx, then there will be a
+ * thread that is either already waiting for
+ * it, or soon will be waiting.
+ *
+ * This itx has already been committed to disk
+ * via spa_sync() so we don't bother committing
+ * it to an lwb. As a result, we cannot use the
+ * lwb zio callback to signal the waiter and
+ * mark it as done, so we must do that here.
+ */
+ if (lrc->lrc_txtype == TX_COMMIT)
+ zil_commit_waiter_skip(itx->itx_private);
+
+ zil_itx_destroy(itx);
+ }
}
- DTRACE_PROBE1(zil__cw2, zilog_t *, zilog);
- /* write the last block out */
- if (lwb != NULL && lwb->lwb_zio != NULL)
- lwb = zil_lwb_write_start(zilog, lwb);
+ if (lwb == NULL) {
+ /*
+ * This indicates zio_alloc_zil() failed to allocate the
+ * "next" lwb on-disk. When this happens, we must stall
+ * the ZIL write pipeline; see the comment within
+ * zil_commit_writer_stall() for more details.
+ */
+ zil_commit_writer_stall(zilog);
- zilog->zl_cur_used = 0;
+ /*
+ * Additionally, we have to signal and mark the "nolwb"
+ * waiters as "done" here, since without an lwb, we
+ * can't do this via zil_lwb_flush_vdevs_done() like
+ * normal.
+ */
+ zil_commit_waiter_t *zcw;
+ while ((zcw = list_head(&nolwb_waiters)) != NULL) {
+ zil_commit_waiter_skip(zcw);
+ list_remove(&nolwb_waiters, zcw);
+ }
+
+ /*
+ * And finally, we have to destroy the itx's that
+ * couldn't be committed to an lwb; this will also call
+ * the itx's callback if one exists for the itx.
+ */
+ while ((itx = list_head(&nolwb_itxs)) != NULL) {
+ list_remove(&nolwb_itxs, itx);
+ zil_itx_destroy(itx);
+ }
+ } else {
+ ASSERT(list_is_empty(&nolwb_waiters));
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_ISSUED);
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_DONE);
+
+ /*
+ * At this point, the ZIL block pointed at by the "lwb"
+ * variable is in one of the following states: "closed"
+ * or "open".
+ *
+ * If its "closed", then no itxs have been committed to
+ * it, so there's no point in issueing its zio (i.e.
+ * it's "empty").
+ *
+ * If its "open" state, then it contains one or more
+ * itxs that eventually need to be committed to stable
+ * storage. In this case we intentionally do not issue
+ * the lwb's zio to disk yet, and instead rely on one of
+ * the following two mechanisms for issuing the zio:
+ *
+ * 1. Ideally, there will be more ZIL activity occurring
+ * on the system, such that this function will be
+ * immediately called again (not necessarily by the same
+ * thread) and this lwb's zio will be issued via
+ * zil_lwb_commit(). This way, the lwb is guaranteed to
+ * be "full" when it is issued to disk, and we'll make
+ * use of the lwb's size the best we can.
+ *
+ * 2. If there isn't sufficient ZIL activity occurring on
+ * the system, such that this lwb's zio isn't issued via
+ * zil_lwb_commit(), zil_commit_waiter() will issue the
+ * lwb's zio. If this occurs, the lwb is not guaranteed
+ * to be "full" by the time its zio is issued, which means
+ * the size of the lwb was "too large" given the amount
+ * of ZIL activity occurring on the system at that time.
+ *
+ * We do this for a couple of reasons:
+ *
+ * 1. To try and reduce the number of IOPs needed to
+ * write the same number of itxs. If an lwb has space
+ * available in its buffer for more itxs, and more itxs
+ * will be committed relatively soon (relative to the
+ * latency of performing a write), then it's beneficial
+ * to wait for these "next" itxs. This way, more itxs
+ * can be committed to stable storage with fewer writes.
+ *
+ * 2. To try and use the largest lwb block size that the
+ * incoming rate of itxs can support. Again, this is to
+ * try and pack as many itxs into as few lwbs as
+ * possible, without significantly impacting the latency
+ * of each individual itx.
+ */
+ }
+}
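Mechanism (2) above, an lwb left open in the hope that more itxs arrive, with the waiter issuing it itself once a latency-scaled timeout expires, can be modeled with a short user-space pthread sketch. Everything here is illustrative (the 500us latency, the 5% scaling, and the helper names are assumptions), not the kernel implementation.

/*
 * Minimal pthread sketch (user-space, not ZFS code) of "delayed
 * issue": an open buffer is not written out immediately, hoping more
 * records arrive to fill it; a waiter issues it once a timeout,
 * scaled from the previous write latency, expires.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool issued = false;		/* has the open buffer been written? */

static const long last_write_ns = 500 * 1000;	/* 500us previous latency */
static const int timeout_pct = 5;	/* analogous to zfs_commit_timeout_pct */

static void
issue_buffer(const char *who)
{
	/* Stand-in for issuing the lwb's zio to disk. */
	printf("%s issued the open buffer\n", who);
	issued = true;
	pthread_cond_broadcast(&cv);
}

static void *
waiter(void *arg)
{
	struct timespec deadline;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_nsec += (last_write_ns * timeout_pct) / 100;
	if (deadline.tv_nsec >= 1000000000L) {
		deadline.tv_sec++;
		deadline.tv_nsec -= 1000000000L;
	}

	pthread_mutex_lock(&lock);
	while (!issued) {
		if (pthread_cond_timedwait(&cv, &lock, &deadline) != 0) {
			/* Timed out and nobody else issued it: do it here. */
			issue_buffer("waiter");
			break;
		}
	}
	pthread_mutex_unlock(&lock);
	return (NULL);
}

int
main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waiter, NULL);
	/*
	 * Simulate a quiet system: no further records arrive, so the
	 * writer path never fills and issues the buffer itself.
	 */
	sleep(1);
	pthread_join(t, NULL);
	return (0);
}

On a busy system the writer path would fill and issue the buffer first, and the waiter's timed wait would simply be signalled before the deadline.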
+
+/*
+ * This function is responsible for ensuring the passed in commit waiter
+ * (and associated commit itx) is committed to an lwb. If the waiter is
+ * not already committed to an lwb, all itxs in the zilog's queue of
+ * itxs will be processed. The assumption is the passed in waiter's
+ * commit itx will found in the queue just like the other non-commit
+ * itxs, such that when the entire queue is processed, the waiter will
+ * have been commited to an lwb.
+ *
+ * The lwb associated with the passed in waiter is not guaranteed to
+ * have been issued by the time this function completes. If the lwb is
+ * not issued, we rely on future calls to zil_commit_writer() to issue
+ * the lwb, or the timeout mechanism found in zil_commit_waiter().
+ */
+static void
+zil_commit_writer(zilog_t *zilog, zil_commit_waiter_t *zcw)
+{
+ ASSERT(!MUTEX_HELD(&zilog->zl_lock));
+ ASSERT(spa_writeable(zilog->zl_spa));
+ ASSERT0(zilog->zl_suspend);
+
+ mutex_enter(&zilog->zl_writer_lock);
+
+ if (zcw->zcw_lwb != NULL || zcw->zcw_done) {
+ /*
+ * It's possible that, while we were waiting to acquire
+ * the "zl_writer_lock", another thread committed this
+ * waiter to an lwb. If that occurs, we bail out early,
+ * without processing any of the zilog's queue of itxs.
+ *
+ * On certain workloads and system configurations, the
+ * "zl_writer_lock" can become highly contended. In an
+ * attempt to reduce this contention, we immediately drop
+ * the lock if the waiter has already been processed.
+ *
+ * We've measured this optimization to reduce CPU spent
+ * contending on this lock by up to 5%, using a system
+ * with 32 CPUs, low latency storage (~50 usec writes),
+ * and 1024 threads performing sync writes.
+ */
+ goto out;
+ }
+
+ ZIL_STAT_BUMP(zil_commit_writer_count);
+
+ zil_get_commit_list(zilog);
+ zil_prune_commit_list(zilog);
+ zil_process_commit_list(zilog);
+
+out:
+ mutex_exit(&zilog->zl_writer_lock);
+}
+
+static void
+zil_commit_waiter_timeout(zilog_t *zilog, zil_commit_waiter_t *zcw)
+{
+ ASSERT(!MUTEX_HELD(&zilog->zl_writer_lock));
+ ASSERT(MUTEX_HELD(&zcw->zcw_lock));
+ ASSERT3B(zcw->zcw_done, ==, B_FALSE);
+
+ lwb_t *lwb = zcw->zcw_lwb;
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_CLOSED);
/*
- * Wait if necessary for the log blocks to be on stable storage.
+ * If the lwb has already been issued by another thread, we can
+ * immediately return since there's no work to be done (the
+ * point of this function is to issue the lwb). Additionally, we
+ * do this prior to acquiring the zl_writer_lock, to avoid
+ * acquiring it when it's not necessary to do so.
*/
- if (zilog->zl_root_zio) {
- error = zio_wait(zilog->zl_root_zio);
- zilog->zl_root_zio = NULL;
- zil_flush_vdevs(zilog);
- }
+ if (lwb->lwb_state == LWB_STATE_ISSUED ||
+ lwb->lwb_state == LWB_STATE_DONE)
+ return;
- if (error || lwb == NULL)
- txg_wait_synced(zilog->zl_dmu_pool, 0);
+ /*
+ * In order to call zil_lwb_write_issue() we must hold the
+ * zilog's "zl_writer_lock". We can't simply acquire that lock,
+ * since we're already holding the commit waiter's "zcw_lock",
+ * and those two locks are acquired in the opposite order
+ * elsewhere.
+ */
+ mutex_exit(&zcw->zcw_lock);
+ mutex_enter(&zilog->zl_writer_lock);
+ mutex_enter(&zcw->zcw_lock);
- while ((itx = list_head(&zilog->zl_itx_commit_list))) {
- txg = itx->itx_lr.lrc_txg;
- ASSERT(txg);
+ /*
+ * Since we just dropped and re-acquired the commit waiter's
+ * lock, we have to re-check to see if the waiter was marked
+ * "done" during that process. If the waiter was marked "done",
+ * the "lwb" pointer is no longer valid (it can be free'd after
+ * the waiter is marked "done"), so without this check we could
+ * wind up with a use-after-free error below.
+ */
+ if (zcw->zcw_done)
+ goto out;
- if (itx->itx_callback != NULL)
- itx->itx_callback(itx->itx_callback_data);
- list_remove(&zilog->zl_itx_commit_list, itx);
- zil_itx_destroy(itx);
+ ASSERT3P(lwb, ==, zcw->zcw_lwb);
+
+ /*
+ * We've already checked this above, but since we hadn't
+ * acquired the zilog's zl_writer_lock, we have to perform this
+ * check a second time while holding the lock. We can't call
+ * zil_lwb_write_issue() if the lwb had already been issued.
+ */
+ if (lwb->lwb_state == LWB_STATE_ISSUED ||
+ lwb->lwb_state == LWB_STATE_DONE)
+ goto out;
+
+ ASSERT3S(lwb->lwb_state, ==, LWB_STATE_OPENED);
+
+ /*
+ * As described in the comments above zil_commit_waiter() and
+ * zil_process_commit_list(), we need to issue this lwb's zio
+ * since we've reached the commit waiter's timeout and it still
+ * hasn't been issued.
+ */
+ lwb_t *nlwb = zil_lwb_write_issue(zilog, lwb);
+
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_OPENED);
+
+ /*
+ * Since the lwb's zio hadn't been issued by the time this thread
+ * reached its timeout, we reset the zilog's "zl_cur_used" field
+ * to influence the zil block size selection algorithm.
+ *
+ * By having to issue the lwb's zio here, it means the size of the
+ * lwb was too large, given the incoming throughput of itxs. By
+ * setting "zl_cur_used" to zero, we communicate this fact to the
+ * block size selection algorithm, so it can take this information
+ * into account, and potentially select a smaller size for the
+ * next lwb block that is allocated.
+ */
+ zilog->zl_cur_used = 0;
+
+ if (nlwb == NULL) {
+ /*
+ * When zil_lwb_write_issue() returns NULL, this
+ * indicates zio_alloc_zil() failed to allocate the
+ * "next" lwb on-disk. When this occurs, the ZIL write
+ * pipeline must be stalled; see the comment within the
+ * zil_commit_writer_stall() function for more details.
+ *
+ * We must drop the commit waiter's lock prior to
+ * calling zil_commit_writer_stall() or else we can wind
+ * up with the following deadlock:
+ *
+ * - This thread is waiting for the txg to sync while
+ * holding the waiter's lock; txg_wait_synced() is
+ * used within zil_commit_writer_stall().
+ *
+ * - The txg can't sync because it is waiting for this
+ * lwb's zio callback to call dmu_tx_commit().
+ *
+ * - The lwb's zio callback can't call dmu_tx_commit()
+ * because it's blocked trying to acquire the waiter's
+ * lock, which occurs prior to calling dmu_tx_commit().
+ */
+ mutex_exit(&zcw->zcw_lock);
+ zil_commit_writer_stall(zilog);
+ mutex_enter(&zcw->zcw_lock);
}
- mutex_enter(&zilog->zl_lock);
+out:
+ mutex_exit(&zilog->zl_writer_lock);
+ ASSERT(MUTEX_HELD(&zcw->zcw_lock));
+}
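The lock-ordering dance above (drop zcw_lock, take zl_writer_lock, retake zcw_lock, then re-validate the waiter before touching it) is a common pattern for avoiding lock-order inversion. A minimal user-space sketch of that pattern, with invented names, follows.

/*
 * User-space sketch (not ZFS code): the caller holds the waiter's
 * lock but needs the writer lock, and the established order elsewhere
 * is writer lock first. So the waiter lock is dropped, both locks are
 * taken in the correct order, and the waiter's state is re-validated
 * before it is used again.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t writer_lock = PTHREAD_MUTEX_INITIALIZER;

struct waiter {
	pthread_mutex_t w_lock;
	bool w_done;	/* set by another thread once the waiter is finished */
};

/* Called with w->w_lock held; returns with it held. */
static void
issue_with_writer_lock(struct waiter *w)
{
	pthread_mutex_unlock(&w->w_lock);	/* avoid lock-order inversion */
	pthread_mutex_lock(&writer_lock);
	pthread_mutex_lock(&w->w_lock);

	if (w->w_done) {
		/*
		 * The waiter was completed while the locks were being
		 * juggled; any pointers cached before the drop may now
		 * be stale, so bail out without touching them.
		 */
		printf("waiter already done, skipping issue\n");
	} else {
		printf("safe to issue while holding both locks\n");
	}

	pthread_mutex_unlock(&writer_lock);
}

int
main(void)
{
	struct waiter w = { PTHREAD_MUTEX_INITIALIZER, false };

	pthread_mutex_lock(&w.w_lock);
	issue_with_writer_lock(&w);
	pthread_mutex_unlock(&w.w_lock);
	return (0);
}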
+
+/*
+ * This function is responsible for performing the following two tasks:
+ *
+ * 1. its primary responsibility is to block until the given "commit
+ * waiter" is considered "done".
+ *
+ * 2. its secondary responsibility is to issue the zio for the lwb that
+ * the given "commit waiter" is waiting on, if this function has
+ * waited "long enough" and the lwb is still in the "open" state.
+ *
+ * Given a sufficient amount of itxs being generated and written using
+ * the ZIL, the lwb's zio will be issued via the zil_lwb_commit()
+ * function. If this does not occur, this secondary responsibility will
+ * ensure the lwb is issued even if there is no other synchronous
+ * activity on the system.
+ *
+ * For more details, see zil_process_commit_list(); more specifically,
+ * the comment at the bottom of that function.
+ */
+static void
+zil_commit_waiter(zilog_t *zilog, zil_commit_waiter_t *zcw)
+{
+ ASSERT(!MUTEX_HELD(&zilog->zl_lock));
+ ASSERT(!MUTEX_HELD(&zilog->zl_writer_lock));
+ ASSERT(spa_writeable(zilog->zl_spa));
+ ASSERT0(zilog->zl_suspend);
+
+ mutex_enter(&zcw->zcw_lock);
/*
- * Remember the highest committed log sequence number for ztest.
- * We only update this value when all the log writes succeeded,
- * because ztest wants to ASSERT that it got the whole log chain.
+ * The timeout is scaled based on the lwb latency to avoid
+ * significantly impacting the latency of each individual itx.
+ * For more details, see the comment at the bottom of the
+ * zil_process_commit_list() function.
*/
- if (error == 0 && lwb != NULL)
- zilog->zl_commit_lr_seq = zilog->zl_lr_seq;
+ int pct = MAX(zfs_commit_timeout_pct, 1);
+ hrtime_t sleep = (zilog->zl_last_lwb_latency * pct) / 100;
+ hrtime_t wakeup = gethrtime() + sleep;
+ boolean_t timedout = B_FALSE;
+
+ while (!zcw->zcw_done) {
+ ASSERT(MUTEX_HELD(&zcw->zcw_lock));
+
+ lwb_t *lwb = zcw->zcw_lwb;
+
+ /*
+ * Usually, the waiter will have a non-NULL lwb field here,
+ * but it's possible for it to be NULL as a result of
+ * zil_commit() racing with spa_sync().
+ *
+ * When zil_clean() is called, it's possible for the itxg
+ * list (which may be cleaned via a taskq) to contain
+ * commit itxs. When this occurs, the commit waiters linked
+ * off of these commit itxs will not be committed to an
+ * lwb. Additionally, these commit waiters will not be
+ * marked done until zil_commit_waiter_skip() is called via
+ * zil_itxg_clean().
+ *
+ * Thus, it's possible for this commit waiter (i.e. the
+ * "zcw" variable) to be found in this "in between" state;
+ * where it's "zcw_lwb" field is NULL, and it hasn't yet
+ * been skipped, so it's "zcw_done" field is still B_FALSE.
+ */
+ IMPLY(lwb != NULL, lwb->lwb_state != LWB_STATE_CLOSED);
+
+ if (lwb != NULL && lwb->lwb_state == LWB_STATE_OPENED) {
+ ASSERT3B(timedout, ==, B_FALSE);
+
+ /*
+ * If the lwb hasn't been issued yet, then we
+ * need to wait with a timeout, in case this
+ * function needs to issue the lwb after the
+ * timeout is reached; responsibility (2) from
+ * the comment above this function.
+ */
+ clock_t timeleft = cv_timedwait_hires(&zcw->zcw_cv,
+ &zcw->zcw_lock, wakeup, USEC2NSEC(1),
+ CALLOUT_FLAG_ABSOLUTE);
+
+ if (timeleft >= 0 || zcw->zcw_done)
+ continue;
+
+ timedout = B_TRUE;
+ zil_commit_waiter_timeout(zilog, zcw);
+
+ if (!zcw->zcw_done) {
+ /*
+ * If the commit waiter has already been
+ * marked "done", it's possible for the
+ * waiter's lwb structure to have already
+ * been freed. Thus, we can only reliably
+ * make these assertions if the waiter
+ * isn't done.
+ */
+ ASSERT3P(lwb, ==, zcw->zcw_lwb);
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_OPENED);
+ }
+ } else {
+ /*
+ * If the lwb isn't open, then it must have already
+ * been issued. In that case, there's no need to
+ * use a timeout when waiting for the lwb to
+ * complete.
+ *
+ * Additionally, if the lwb is NULL, the waiter
+ * will soon be signalled and marked done via
+ * zil_clean() and zil_itxg_clean(), so no timeout
+ * is required.
+ */
+
+ IMPLY(lwb != NULL,
+ lwb->lwb_state == LWB_STATE_ISSUED ||
+ lwb->lwb_state == LWB_STATE_DONE);
+ cv_wait(&zcw->zcw_cv, &zcw->zcw_lock);
+ }
+ }
+
+ mutex_exit(&zcw->zcw_lock);
+}
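For a concrete feel of the timeout scaling used at the top of zil_commit_waiter(), the sketch below plugs in illustrative numbers; the 5% figure is an assumed value for zfs_commit_timeout_pct, not necessarily the shipped default.

/*
 * Worked example of the timeout scaling: the wait before force-issuing
 * an open lwb is a percentage of the previously observed lwb latency.
 */
#include <stdio.h>
#include <inttypes.h>

int
main(void)
{
	uint64_t last_lwb_latency_ns = 400000;	/* 400us observed last time */
	int commit_timeout_pct = 5;		/* assumed tunable value */

	uint64_t sleep_ns =
	    (last_lwb_latency_ns * commit_timeout_pct) / 100;

	/* 400us * 5% = 20us before the waiter issues the lwb itself. */
	printf("waiter timeout: %" PRIu64 " ns\n", sleep_ns);
	return (0);
}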
+
+static zil_commit_waiter_t *
+zil_alloc_commit_waiter(void)
+{
+ zil_commit_waiter_t *zcw = kmem_cache_alloc(zil_zcw_cache, KM_SLEEP);
+
+ cv_init(&zcw->zcw_cv, NULL, CV_DEFAULT, NULL);
+ mutex_init(&zcw->zcw_lock, NULL, MUTEX_DEFAULT, NULL);
+ list_link_init(&zcw->zcw_node);
+ zcw->zcw_lwb = NULL;
+ zcw->zcw_done = B_FALSE;
+ zcw->zcw_zio_error = 0;
+
+ return (zcw);
+}
+
+static void
+zil_free_commit_waiter(zil_commit_waiter_t *zcw)
+{
+ ASSERT(!list_link_active(&zcw->zcw_node));
+ ASSERT3P(zcw->zcw_lwb, ==, NULL);
+ ASSERT3B(zcw->zcw_done, ==, B_TRUE);
+ mutex_destroy(&zcw->zcw_lock);
+ cv_destroy(&zcw->zcw_cv);
+ kmem_cache_free(zil_zcw_cache, zcw);
}
/*
- * Commit zfs transactions to stable storage.
- * If foid is 0 push out all transactions, otherwise push only those
- * for that object or might reference that object.
+ * This function is used to create a TX_COMMIT itx and assign it. This
+ * way, it will be linked into the ZIL's list of synchronous itxs, and
+ * then later committed to an lwb (or skipped) when
+ * zil_process_commit_list() is called.
+ */
+static void
+zil_commit_itx_assign(zilog_t *zilog, zil_commit_waiter_t *zcw)
+{
+ dmu_tx_t *tx = dmu_tx_create(zilog->zl_os);
+ VERIFY0(dmu_tx_assign(tx, TXG_WAIT));
+
+ itx_t *itx = zil_itx_create(TX_COMMIT, sizeof (lr_t));
+ itx->itx_sync = B_TRUE;
+ itx->itx_private = zcw;
+
+ zil_itx_assign(zilog, itx, tx);
+
+ dmu_tx_commit(tx);
+}
+
+/*
+ * Commit ZFS Intent Log transactions (itxs) to stable storage.
+ *
+ * When writing ZIL transactions to the on-disk representation of the
+ * ZIL, the itxs are committed to a Log Write Block (lwb). Multiple
+ * itxs can be committed to a single lwb. Once a lwb is written and
+ * committed to stable storage (i.e. the lwb is written, and vdevs have
+ * been flushed), each itx that was committed to that lwb is also
+ * considered to be committed to stable storage.
+ *
+ * When an itx is committed to an lwb, the log record (lr_t) contained
+ * by the itx is copied into the lwb's zio buffer, and once this buffer
+ * is written to disk, it becomes an on-disk ZIL block.
+ *
+ * As itxs are generated, they're inserted into the ZIL's queue of
+ * uncommitted itxs. The semantics of zil_commit() are such that it will
+ * block until all itxs that were in the queue when it was called are
+ * committed to stable storage.
+ *
+ * If "foid" is zero, this means all "synchronous" and "asynchronous"
+ * itxs, for all objects in the dataset, will be committed to stable
+ * storage prior to zil_commit() returning. If "foid" is non-zero, all
+ * "synchronous" itxs for all objects, but only "asynchronous" itxs
+ * that correspond to the foid passed in, will be committed to stable
+ * storage prior to zil_commit() returning.
+ *
+ * Generally speaking, when zil_commit() is called, the consumer doesn't
+ * actually care about _all_ of the uncommitted itxs. Instead, they're
+ * simply trying to wait for a specific itx to be committed to disk,
+ * but the interface(s) for interacting with the ZIL don't allow such
+ * fine-grained communication. A better interface would allow a consumer
+ * to create and assign an itx, and then pass a reference to this itx to
+ * zil_commit(); such that zil_commit() would return as soon as that
+ * specific itx was committed to disk (instead of waiting for _all_
+ * itxs to be committed).
+ *
+ * When a thread calls zil_commit() a special "commit itx" will be
+ * generated, along with a corresponding "waiter" for this commit itx.
+ * zil_commit() will wait on this waiter's CV, such that when the waiter
+ * is marked done, and signalled, zil_commit() will return.
+ *
+ * This commit itx is inserted into the queue of uncommitted itxs. This
+ * provides an easy mechanism for determining which itxs were in the
+ * queue prior to zil_commit() having been called, and which itxs were
+ * added after zil_commit() was called.
+ *
+ * The commit itx is special; it doesn't have any on-disk representation.
+ * When a commit itx is "committed" to an lwb, the waiter associated
+ * with it is linked onto the lwb's list of waiters. Then, when that lwb
+ * completes, each waiter on the lwb's list is marked done and signalled
+ * -- allowing the thread waiting on the waiter to return from zil_commit().
+ *
+ * It's important to point out a few critical factors that allow us
+ * to make use of the commit itxs, commit waiters, per-lwb lists of
+ * commit waiters, and zio completion callbacks like we're doing:
*
- * itxs are committed in batches. In a heavily stressed zil there will be
- * a commit writer thread who is writing out a bunch of itxs to the log
- * for a set of committing threads (cthreads) in the same batch as the writer.
- * Those cthreads are all waiting on the same cv for that batch.
+ * 1. The list of waiters for each lwb is traversed, and each commit
+ * waiter is marked "done" and signalled, in the zio completion
+ * callback of the lwb's zio[*].
*
- * There will also be a different and growing batch of threads that are
- * waiting to commit (qthreads). When the committing batch completes
- * a transition occurs such that the cthreads exit and the qthreads become
- * cthreads. One of the new cthreads becomes the writer thread for the
- * batch. Any new threads arriving become new qthreads.
+ * * Actually, the waiters are signalled in the zio completion
+ * callback of the root zio for the DKIOCFLUSHWRITECACHE commands
+ * that are sent to the vdevs upon completion of the lwb zio.
*
- * Only 2 condition variables are needed and there's no transition
- * between the two cvs needed. They just flip-flop between qthreads
- * and cthreads.
+ * 2. When the itxs are inserted into the ZIL's queue of uncommitted
+ * itxs, the order in which they are inserted is preserved[*]; as
+ * itxs are added to the queue, they are added to the tail of
+ * in-memory linked lists.
*
- * Using this scheme we can efficiently wakeup up only those threads
- * that have been committed.
+ * When committing the itxs to lwbs (to be written to disk), they
+ * are committed in the same order in which the itxs were added to
+ * the uncommitted queue's linked list(s); i.e. the linked list of
+ * itxs to commit is traversed from head to tail, and each itx is
+ * committed to an lwb in that order.
+ *
+ * * To clarify:
+ *
+ * - the order of "sync" itxs is preserved w.r.t. other
+ * "sync" itxs, regardless of the corresponding objects.
+ * - the order of "async" itxs is preserved w.r.t. other
+ * "async" itxs corresponding to the same object.
+ * - the order of "async" itxs is *not* preserved w.r.t. other
+ * "async" itxs corresponding to different objects.
+ * - the order of "sync" itxs w.r.t. "async" itxs (or vice
+ * versa) is *not* preserved, even for itxs that correspond
+ * to the same object.
+ *
+ * For more details, see: zil_itx_assign(), zil_async_to_sync(),
+ * zil_get_commit_list(), and zil_process_commit_list().
+ *
+ * 3. The lwbs represent a linked list of blocks on disk. Thus, any
+ * lwb cannot be considered committed to stable storage, until its
+ * "previous" lwb is also committed to stable storage. This fact,
+ * coupled with the fact described above, means that itxs are
+ * committed in (roughly) the order in which they were generated.
+ * This is essential because itxs are dependent on prior itxs.
+ * Thus, we *must not* deem an itx as being committed to stable
+ * storage, until *all* prior itxs have also been committed to
+ * stable storage.
+ *
+ * To enforce this ordering of lwb zio's, while still leveraging as
+ * much of the underlying storage performance as possible, we rely
+ * on two fundamental concepts:
+ *
+ * 1. The creation and issuance of lwb zio's is protected by
+ * the zilog's "zl_writer_lock", which ensures only a single
+ * thread is creating and/or issuing lwb's at a time
+ * 2. The "previous" lwb is a child of the "current" lwb
+ * (leveraging the zio parent-child dependency graph)
+ *
+ * By relying on this parent-child zio relationship, we can have
+ * many lwb zio's concurrently issued to the underlying storage,
+ * but the order in which they complete will be the same order in
+ * which they were created.
*/
void
zil_commit(zilog_t *zilog, uint64_t foid)
{
- uint64_t mybatch;
+ /*
+ * We should never attempt to call zil_commit on a snapshot for
+ * a couple of reasons:
+ *
+ * 1. A snapshot may never be modified, thus it cannot have any
+ * in-flight itxs that would have modified the dataset.
+ *
+ * 2. By design, when zil_commit() is called, a commit itx will
+ * be assigned to this zilog; as a result, the zilog will be
+ * dirtied. We must not dirty the zilog of a snapshot; there are
+ * checks in the code that enforce this invariant, and they will
+ * cause a panic if it's not upheld.
+ */
+ ASSERT3B(dmu_objset_is_snapshot(zilog->zl_os), ==, B_FALSE);
if (zilog->zl_sync == ZFS_SYNC_DISABLED)
return;
+ if (!spa_writeable(zilog->zl_spa)) {
+ /*
+ * If the SPA is not writable, there should never be any
+ * pending itxs waiting to be committed to disk. If that
+ * weren't true, we'd skip writing those itxs out, and
+ * would break the semantics of zil_commit(); thus, we're
+ * verifying that truth before we return to the caller.
+ */
+ ASSERT(list_is_empty(&zilog->zl_lwb_list));
+ ASSERT3P(zilog->zl_last_lwb_opened, ==, NULL);
+ for (int i = 0; i < TXG_SIZE; i++)
+ ASSERT3P(zilog->zl_itxg[i].itxg_itxs, ==, NULL);
+ return;
+ }
+
+ /*
+ * If the ZIL is suspended, we don't want to dirty it by calling
+ * zil_commit_itx_assign() below, nor can we write out
+ * lwbs as would be done in zil_commit_writer(). Thus, we
+ * simply rely on txg_wait_synced() to maintain the necessary
+ * semantics, and avoid calling those functions altogether.
+ */
+ if (zilog->zl_suspend > 0) {
+ txg_wait_synced(zilog->zl_dmu_pool, 0);
+ return;
+ }
+
ZIL_STAT_BUMP(zil_commit_count);
- /* move the async itxs for the foid to the sync queues */
+ /*
+ * Move the "async" itxs for the specified foid to the "sync"
+ * queues, such that they will be later committed (or skipped)
+ * to an lwb when zil_process_commit_list() is called.
+ *
+ * Since these "async" itxs must be committed prior to this
+ * call to zil_commit() returning, we must perform this operation
+ * before we call zil_commit_itx_assign().
+ */
zil_async_to_sync(zilog, foid);
- mutex_enter(&zilog->zl_lock);
- mybatch = zilog->zl_next_batch;
- while (zilog->zl_writer) {
- cv_wait(&zilog->zl_cv_batch[mybatch & 1], &zilog->zl_lock);
- if (mybatch <= zilog->zl_com_batch) {
- mutex_exit(&zilog->zl_lock);
- return;
- }
- }
-
- zilog->zl_next_batch++;
- zilog->zl_writer = B_TRUE;
- ZIL_STAT_BUMP(zil_commit_writer_count);
- zil_commit_writer(zilog);
- zilog->zl_com_batch = mybatch;
- zilog->zl_writer = B_FALSE;
+ /*
+ * We allocate a new "waiter" structure which will initially be
+ * linked to the commit itx using the itx's "itx_private" field.
+ * Since the commit itx doesn't represent any on-disk state,
+ * when it's committed to an lwb, rather than copying its
+ * lr_t into the lwb's buffer, the commit itx's "waiter" will be
+ * added to the lwb's list of waiters. Then, when the lwb is
+ * committed to stable storage, each waiter in the lwb's list of
+ * waiters will be marked "done", and signalled.
+ *
+ * We must create the waiter and assign the commit itx prior to
+ * calling zil_commit_writer(), or else our specific commit itx
+ * is not guaranteed to be committed to an lwb prior to calling
+ * zil_commit_waiter().
+ */
+ zil_commit_waiter_t *zcw = zil_alloc_commit_waiter();
+ zil_commit_itx_assign(zilog, zcw);
- /* wake up one thread to become the next writer */
- cv_signal(&zilog->zl_cv_batch[(mybatch+1) & 1]);
+ zil_commit_writer(zilog, zcw);
+ zil_commit_waiter(zilog, zcw);
- /* wake up all threads waiting for this batch to be committed */
- cv_broadcast(&zilog->zl_cv_batch[mybatch & 1]);
+ if (zcw->zcw_zio_error != 0) {
+ /*
+ * If there was an error writing out the ZIL blocks that
+ * this thread is waiting on, then we fall back to
+ * relying on spa_sync() to write out the data this
+ * thread is waiting on. Obviously this has performance
+ * implications, but this is expected to be an
+ * exceptional case, and shouldn't occur often.
+ */
+ DTRACE_PROBE2(zil__commit__io__error,
+ zilog_t *, zilog, zil_commit_waiter_t *, zcw);
+ txg_wait_synced(zilog->zl_dmu_pool, 0);
+ }
- mutex_exit(&zilog->zl_lock);
+ zil_free_commit_waiter(zcw);
}
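The ordering argument in the block comment above zil_commit() -- lwbs may be serviced concurrently, but their completions are observed in issue order because each lwb is gated on its predecessor -- can be illustrated with a tiny simulation. This is not the zio implementation, just a model of the property it provides.

/*
 * Tiny user-space simulation (not ZFS code): even if the device
 * finishes the individual lwb writes out of order, an lwb's
 * completion callback only runs once its predecessor has completed,
 * so completion order matches issue order.
 */
#include <stdbool.h>
#include <stdio.h>

#define	NLWB	4

static bool write_done[NLWB];		/* device finished the write */
static bool completed[NLWB];		/* completion callback has run */

static void
run_ready_callbacks(void)
{
	/*
	 * Walk the chain in issue order; stop at the first lwb whose
	 * write hasn't finished, since everything after it must wait.
	 */
	for (int i = 0; i < NLWB; i++) {
		if (!write_done[i])
			break;
		if (!completed[i]) {
			completed[i] = true;
			printf("lwb %d completed\n", i);
		}
	}
}

int
main(void)
{
	/* Device finishes writes out of order: 2, 0, 3, 1. */
	int finish_order[NLWB] = { 2, 0, 3, 1 };

	for (int i = 0; i < NLWB; i++) {
		write_done[finish_order[i]] = true;
		run_ready_callbacks();
	}
	/* Output: lwbs 0 through 3 complete strictly in issue order. */
	return (0);
}

Even though the simulated device finishes the writes as 2, 0, 3, 1, the completion callbacks fire strictly as 0, 1, 2, 3.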
/*
@@ -1799,12 +2790,9 @@ zil_sync(zilog_t *zilog, dmu_tx_t *tx)
zh->zh_log = lwb->lwb_blk;
if (lwb->lwb_buf != NULL || lwb->lwb_max_txg > txg)
break;
-
- ASSERT(lwb->lwb_zio == NULL);
-
list_remove(&zilog->zl_lwb_list, lwb);
- zio_free_zil(spa, txg, &lwb->lwb_blk);
- kmem_cache_free(zil_lwb_cache, lwb);
+ zio_free(spa, txg, &lwb->lwb_blk);
+ zil_free_lwb(zilog, lwb);
/*
* If we don't have anything left in the lwb list then
@@ -1822,7 +2810,7 @@ zil_sync(zilog_t *zilog, dmu_tx_t *tx)
* unused, long-lived LWBs.
*/
for (; lwb != NULL; lwb = list_next(&zilog->zl_lwb_list, lwb)) {
- if (lwb->lwb_fastwrite && !lwb->lwb_zio) {
+ if (lwb->lwb_fastwrite && !lwb->lwb_write_zio) {
metaslab_fastwrite_unmark(zilog->zl_spa, &lwb->lwb_blk);
lwb->lwb_fastwrite = 0;
}
@@ -1831,11 +2819,39 @@ zil_sync(zilog_t *zilog, dmu_tx_t *tx)
mutex_exit(&zilog->zl_lock);
}
+/* ARGSUSED */
+static int
+zil_lwb_cons(void *vbuf, void *unused, int kmflag)
+{
+ lwb_t *lwb = vbuf;
+ list_create(&lwb->lwb_itxs, sizeof (itx_t), offsetof(itx_t, itx_node));
+ list_create(&lwb->lwb_waiters, sizeof (zil_commit_waiter_t),
+ offsetof(zil_commit_waiter_t, zcw_node));
+ avl_create(&lwb->lwb_vdev_tree, zil_lwb_vdev_compare,
+ sizeof (zil_vdev_node_t), offsetof(zil_vdev_node_t, zv_node));
+ mutex_init(&lwb->lwb_vdev_lock, NULL, MUTEX_DEFAULT, NULL);
+ return (0);
+}
+
+/* ARGSUSED */
+static void
+zil_lwb_dest(void *vbuf, void *unused)
+{
+ lwb_t *lwb = vbuf;
+ mutex_destroy(&lwb->lwb_vdev_lock);
+ avl_destroy(&lwb->lwb_vdev_tree);
+ list_destroy(&lwb->lwb_waiters);
+ list_destroy(&lwb->lwb_itxs);
+}
+
void
zil_init(void)
{
zil_lwb_cache = kmem_cache_create("zil_lwb_cache",
- sizeof (struct lwb), 0, NULL, NULL, NULL, NULL, NULL, 0);
+ sizeof (lwb_t), 0, zil_lwb_cons, zil_lwb_dest, NULL, NULL, NULL, 0);
+
+ zil_zcw_cache = kmem_cache_create("zil_zcw_cache",
+ sizeof (zil_commit_waiter_t), 0, NULL, NULL, NULL, NULL, NULL, 0);
zil_ksp = kstat_create("zfs", 0, "zil", "misc",
KSTAT_TYPE_NAMED, sizeof (zil_stats) / sizeof (kstat_named_t),
@@ -1850,6 +2866,7 @@ zil_init(void)
void
zil_fini(void)
{
+ kmem_cache_destroy(zil_zcw_cache);
kmem_cache_destroy(zil_lwb_cache);
if (zil_ksp != NULL) {
@@ -1884,9 +2901,12 @@ zil_alloc(objset_t *os, zil_header_t *zh_phys)
zilog->zl_destroy_txg = TXG_INITIAL - 1;
zilog->zl_logbias = dmu_objset_logbias(os);
zilog->zl_sync = dmu_objset_syncprop(os);
- zilog->zl_next_batch = 1;
+ zilog->zl_dirty_max_txg = 0;
+ zilog->zl_last_lwb_opened = NULL;
+ zilog->zl_last_lwb_latency = 0;
mutex_init(&zilog->zl_lock, NULL, MUTEX_DEFAULT, NULL);
+ mutex_init(&zilog->zl_writer_lock, NULL, MUTEX_DEFAULT, NULL);
for (int i = 0; i < TXG_SIZE; i++) {
mutex_init(&zilog->zl_itxg[i].itxg_lock, NULL,
@@ -1899,15 +2919,7 @@ zil_alloc(objset_t *os, zil_header_t *zh_phys)
list_create(&zilog->zl_itx_commit_list, sizeof (itx_t),
offsetof(itx_t, itx_node));
- mutex_init(&zilog->zl_vdev_lock, NULL, MUTEX_DEFAULT, NULL);
-
- avl_create(&zilog->zl_vdev_tree, zil_vdev_compare,
- sizeof (zil_vdev_node_t), offsetof(zil_vdev_node_t, zv_node));
-
- cv_init(&zilog->zl_cv_writer, NULL, CV_DEFAULT, NULL);
cv_init(&zilog->zl_cv_suspend, NULL, CV_DEFAULT, NULL);
- cv_init(&zilog->zl_cv_batch[0], NULL, CV_DEFAULT, NULL);
- cv_init(&zilog->zl_cv_batch[1], NULL, CV_DEFAULT, NULL);
return (zilog);
}
@@ -1925,9 +2937,6 @@ zil_free(zilog_t *zilog)
ASSERT(list_is_empty(&zilog->zl_lwb_list));
list_destroy(&zilog->zl_lwb_list);
- avl_destroy(&zilog->zl_vdev_tree);
- mutex_destroy(&zilog->zl_vdev_lock);
-
ASSERT(list_is_empty(&zilog->zl_itx_commit_list));
list_destroy(&zilog->zl_itx_commit_list);
@@ -1944,12 +2953,10 @@ zil_free(zilog_t *zilog)
mutex_destroy(&zilog->zl_itxg[i].itxg_lock);
}
+ mutex_destroy(&zilog->zl_writer_lock);
mutex_destroy(&zilog->zl_lock);
- cv_destroy(&zilog->zl_cv_writer);
cv_destroy(&zilog->zl_cv_suspend);
- cv_destroy(&zilog->zl_cv_batch[0]);
- cv_destroy(&zilog->zl_cv_batch[1]);
kmem_free(zilog, sizeof (zilog_t));
}
@@ -1962,7 +2969,8 @@ zil_open(objset_t *os, zil_get_data_t *get_data)
{
zilog_t *zilog = dmu_objset_zil(os);
- ASSERT(zilog->zl_get_data == NULL);
+ ASSERT3P(zilog->zl_get_data, ==, NULL);
+ ASSERT3P(zilog->zl_last_lwb_opened, ==, NULL);
ASSERT(list_is_empty(&zilog->zl_lwb_list));
zilog->zl_get_data = get_data;
@@ -1977,22 +2985,30 @@ void
zil_close(zilog_t *zilog)
{
lwb_t *lwb;
- uint64_t txg = 0;
+ uint64_t txg;
- zil_commit(zilog, 0); /* commit all itx */
+ if (!dmu_objset_is_snapshot(zilog->zl_os)) {
+ zil_commit(zilog, 0);
+ } else {
+ ASSERT3P(list_tail(&zilog->zl_lwb_list), ==, NULL);
+ ASSERT0(zilog->zl_dirty_max_txg);
+ ASSERT3B(zilog_is_dirty(zilog), ==, B_FALSE);
+ }
- /*
- * The lwb_max_txg for the stubby lwb will reflect the last activity
- * for the zil. After a txg_wait_synced() on the txg we know all the
- * callbacks have occurred that may clean the zil. Only then can we
- * destroy the zl_clean_taskq.
- */
mutex_enter(&zilog->zl_lock);
lwb = list_tail(&zilog->zl_lwb_list);
- if (lwb != NULL)
- txg = lwb->lwb_max_txg;
+ if (lwb == NULL)
+ txg = zilog->zl_dirty_max_txg;
+ else
+ txg = MAX(zilog->zl_dirty_max_txg, lwb->lwb_max_txg);
mutex_exit(&zilog->zl_lock);
- if (txg)
+
+ /*
+ * We need to use txg_wait_synced() to wait long enough for the
+ * ZIL to be clean, and to wait for all pending lwbs to be
+ * written out.
+ */
+ if (txg != 0)
txg_wait_synced(zilog->zl_dmu_pool, txg);
if (zilog_is_dirty(zilog))
@@ -2003,18 +3019,20 @@ zil_close(zilog_t *zilog)
zilog->zl_get_data = NULL;
/*
- * We should have only one LWB left on the list; remove it now.
+ * We should have only one lwb left on the list; remove it now.
*/
mutex_enter(&zilog->zl_lock);
lwb = list_head(&zilog->zl_lwb_list);
if (lwb != NULL) {
- ASSERT(lwb == list_tail(&zilog->zl_lwb_list));
- ASSERT(lwb->lwb_zio == NULL);
+ ASSERT3P(lwb, ==, list_tail(&zilog->zl_lwb_list));
+ ASSERT3S(lwb->lwb_state, !=, LWB_STATE_ISSUED);
+
if (lwb->lwb_fastwrite)
metaslab_fastwrite_unmark(zilog->zl_spa, &lwb->lwb_blk);
+
list_remove(&zilog->zl_lwb_list, lwb);
zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
- kmem_cache_free(zil_lwb_cache, lwb);
+ zil_free_lwb(zilog, lwb);
}
mutex_exit(&zilog->zl_lock);
}
@@ -2376,7 +3394,7 @@ EXPORT_SYMBOL(zil_sync);
EXPORT_SYMBOL(zil_clean);
EXPORT_SYMBOL(zil_suspend);
EXPORT_SYMBOL(zil_resume);
-EXPORT_SYMBOL(zil_add_block);
+EXPORT_SYMBOL(zil_lwb_add_block);
EXPORT_SYMBOL(zil_bp_tree_add);
EXPORT_SYMBOL(zil_set_sync);
EXPORT_SYMBOL(zil_set_logbias);
diff --git a/module/zfs/zio.c b/module/zfs/zio.c
index 311f79e23..92e5a8dd8 100644
--- a/module/zfs/zio.c
+++ b/module/zfs/zio.c
@@ -568,7 +568,7 @@ zio_add_child(zio_t *pio, zio_t *cio)
* Vdev I/Os can only have vdev children.
* The following ASSERT captures all of these constraints.
*/
- ASSERT(cio->io_child_type <= pio->io_child_type);
+ ASSERT3S(cio->io_child_type, <=, pio->io_child_type);
zl->zl_parent = pio;
zl->zl_child = cio;
@@ -1281,9 +1281,9 @@ zio_flush(zio_t *zio, vdev_t *vd)
void
zio_shrink(zio_t *zio, uint64_t size)
{
- ASSERT(zio->io_executor == NULL);
- ASSERT(zio->io_orig_size == zio->io_size);
- ASSERT(size <= zio->io_size);
+ ASSERT3P(zio->io_executor, ==, NULL);
+ ASSERT3U(zio->io_orig_size, ==, zio->io_size);
+ ASSERT3U(size, <=, zio->io_size);
/*
* We don't shrink for raidz because of problems with the
@@ -1877,8 +1877,8 @@ zio_wait(zio_t *zio)
{
int error;
- ASSERT(zio->io_stage == ZIO_STAGE_OPEN);
- ASSERT(zio->io_executor == NULL);
+ ASSERT3S(zio->io_stage, ==, ZIO_STAGE_OPEN);
+ ASSERT3P(zio->io_executor, ==, NULL);
zio->io_waiter = curthread;
ASSERT0(zio->io_queued_timestamp);
@@ -1900,7 +1900,7 @@ zio_wait(zio_t *zio)
void
zio_nowait(zio_t *zio)
{
- ASSERT(zio->io_executor == NULL);
+ ASSERT3P(zio->io_executor, ==, NULL);
if (zio->io_child_type == ZIO_CHILD_LOGICAL &&
zio_unique_parent(zio) == NULL) {
@@ -1926,7 +1926,7 @@ zio_nowait(zio_t *zio)
/*
* ==========================================================================
- * Reexecute or suspend/resume failed I/O
+ * Reexecute, cancel, or suspend/resume failed I/O
* ==========================================================================
*/
@@ -1984,6 +1984,20 @@ zio_reexecute(zio_t *pio)
}
void
+zio_cancel(zio_t *zio)
+{
+ /*
+ * Disallow cancellation of a zio that's already been issued.
+ */
+ VERIFY3P(zio->io_executor, ==, NULL);
+
+ zio->io_pipeline = ZIO_INTERLOCK_PIPELINE;
+ zio->io_done = NULL;
+
+ zio_nowait(zio);
+}
+
+void
zio_suspend(spa_t *spa, zio_t *zio)
{
if (spa_get_failmode(spa) == ZIO_FAILURE_MODE_PANIC)
@@ -3276,6 +3290,9 @@ zio_alloc_zil(spa_t *spa, objset_t *os, uint64_t txg, blkptr_t *new_bp,
zio_crypt_encode_params_bp(new_bp, salt, iv);
}
+ } else {
+ zfs_dbgmsg("%s: zil block allocation failure: "
+ "size %llu, error %d", spa_name(spa), size, error);
}
return (error);
diff --git a/module/zfs/zvol.c b/module/zfs/zvol.c
index 5cd190c09..6ea822467 100644
--- a/module/zfs/zvol.c
+++ b/module/zfs/zvol.c
@@ -1052,7 +1052,7 @@ zvol_get_done(zgd_t *zgd, int error)
zfs_range_unlock(zgd->zgd_rl);
if (error == 0 && zgd->zgd_bp)
- zil_add_block(zgd->zgd_zilog, zgd->zgd_bp);
+ zil_lwb_add_block(zgd->zgd_lwb, zgd->zgd_bp);
kmem_free(zgd, sizeof (zgd_t));
}
@@ -1061,7 +1061,7 @@ zvol_get_done(zgd_t *zgd, int error)
* Get data to generate a TX_WRITE intent log record.
*/
static int
-zvol_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
+zvol_get_data(void *arg, lr_write_t *lr, char *buf, struct lwb *lwb, zio_t *zio)
{
zvol_state_t *zv = arg;
uint64_t offset = lr->lr_offset;
@@ -1070,11 +1070,12 @@ zvol_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
zgd_t *zgd;
int error;
- ASSERT(zio != NULL);
- ASSERT(size != 0);
+ ASSERT3P(lwb, !=, NULL);
+ ASSERT3P(zio, !=, NULL);
+ ASSERT3U(size, !=, 0);
zgd = (zgd_t *)kmem_zalloc(sizeof (zgd_t), KM_SLEEP);
- zgd->zgd_zilog = zv->zv_zilog;
+ zgd->zgd_lwb = lwb;
/*
* Write records come in two flavors: immediate and indirect.
diff --git a/scripts/zloop.sh b/scripts/zloop.sh
index a45c45024..afcd87f36 100755
--- a/scripts/zloop.sh
+++ b/scripts/zloop.sh
@@ -254,7 +254,7 @@ while [[ $timeout -eq 0 ]] || [[ $curtime -le $((starttime + timeout)) ]]; do
$cmd >>ztest.out 2>&1
ztrc=$?
egrep '===|WARNING' ztest.out >>ztest.history
- $ZDB -U "$workdir/zpool.cache" -DD ztest >>ztest.ddt
+ $ZDB -U "$workdir/zpool.cache" -DD ztest >>ztest.ddt 2>&1
store_core