| author | Tom Caputi <[email protected]> | 2020-01-14 15:25:20 -0500 |
|---|---|---|
| committer | Brian Behlendorf <[email protected]> | 2020-01-14 12:25:20 -0800 |
| commit | 61152d1069595db08f9b53ee518683382caf313e (patch) | |
| tree | 94c02406bced0e296a795506f504361aab935529 /module | |
| parent | f744f36ce583ed27dcfcda93ecd0af1df994a891 (diff) | |
Fix errata #4 handling for resuming streams
Currently, the handling for errata #4 has two issues that allow
its checks to be bypassed using resumable sends.
The first issue is that drc->drc_fromsnapobj is not set in the
resuming code as it is in the non-resuming code. This causes
dsl_crypto_recv_key_check() to skip its checks for the
from_ivset_guid. The second issue is that resumable sends do not
clean up their on-disk state if they fail the checks in
dmu_recv_stream() that happen before any data is received.
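Concretely, the skipped validation looks roughly like the sketch below. The function and type names are stand-ins invented for illustration only; the real check lives in dsl_crypto_recv_key_check().

```c
/*
 * Simplified stand-in for the key-check logic described above.  The
 * names recv_cookie_sketch_t and check_from_ivset_guid() are
 * hypothetical; only the shape of the logic is taken from the commit
 * message.
 */
#include <errno.h>
#include <stdint.h>

typedef struct {
	uint64_t drc_fromsnapobj;	/* stays 0 on the buggy resume path */
} recv_cookie_sketch_t;

static int
check_from_ivset_guid(const recv_cookie_sketch_t *drc, uint64_t from_ivset_guid)
{
	/*
	 * No "from" snapshot recorded: the errata #4 validation below is
	 * never reached, so a stream lacking a from_ivset_guid is accepted.
	 */
	if (drc->drc_fromsnapobj == 0)
		return (0);

	/* Incremental streams must carry the IV-set GUID of their origin. */
	if (from_ivset_guid == 0)
		return (EINVAL);

	return (0);
}

int
main(void)
{
	recv_cookie_sketch_t resume_bug = { .drc_fromsnapobj = 0 };

	/* A stream with no from_ivset_guid is accepted: returns 0. */
	return (check_from_ivset_guid(&resume_bug, 0));
}
```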
As a result of these two bugs, a user can attempt a resumable send
of a dataset without a from_ivset_guid. This will fail the initial
dmu_recv_stream() checks, leaving a valid resume state. The send
can then be resumed, which skips those checks, allowing the receive
to be completed.
This commit fixes these issues by setting drc->drc_fromsnapobj in
the resuming receive path and by ensuring that resumable receives
are properly cleaned up if they fail the initial dmu_recv_stream()
checks.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #9818
Closes #9829
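The heart of the fix is the new drc_should_save gate in dmu_recv_cleanup_ds(). The stand-alone sketch below models that decision with simplified names and types (it is not OpenZFS code) so the behavior change is easy to see:

```c
/*
 * Stand-alone model of the cleanup decision: on-disk resume state is
 * only kept if the receive was resumable AND made it past the initial
 * dmu_recv_stream() checks (should_save).  All names are simplified.
 */
#include <stdbool.h>
#include <stdio.h>

struct recv_cookie {
	bool resumable;		/* receive was started with -s */
	bool should_save;	/* set once the begin checks have passed */
};

static bool
keep_resume_state(const struct recv_cookie *drc, bool ds_is_empty)
{
	/* Before this fix the should_save term was missing, so a stream
	 * that failed the initial checks still left valid resume state. */
	return (drc->resumable && drc->should_save && !ds_is_empty);
}

int
main(void)
{
	struct recv_cookie failed_early = { .resumable = true, .should_save = false };
	struct recv_cookie past_checks = { .resumable = true, .should_save = true };

	/* Fails the initial checks: state is torn down, nothing to resume. */
	printf("early failure keeps state: %d\n",
	    keep_resume_state(&failed_early, false));
	/* Fails after the checks: state is kept for a later resume. */
	printf("late failure keeps state:  %d\n",
	    keep_resume_state(&past_checks, false));
	return (0);
}
```

With the gate in place, a receive that fails the initial checks no longer leaves a receive_resume_token behind, so there is nothing for a later `zfs send -t` resume to target.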
Diffstat (limited to 'module')
| -rw-r--r-- | module/zfs/dmu_recv.c | 14 |
1 file changed, 13 insertions, 1 deletion
```diff
diff --git a/module/zfs/dmu_recv.c b/module/zfs/dmu_recv.c
index 6f3545b7e..46a42197b 100644
--- a/module/zfs/dmu_recv.c
+++ b/module/zfs/dmu_recv.c
@@ -1018,6 +1018,9 @@ dmu_recv_resume_begin_check(void *arg, dmu_tx_t *tx)
 		return (SET_ERROR(EINVAL));
 	}
 
+	if (ds->ds_prev != NULL)
+		drc->drc_fromsnapobj = ds->ds_prev->ds_object;
+
 	/*
 	 * If we're resuming, and the send is redacted, then the original send
 	 * must have been redacted, and must have been redacted with respect to
@@ -1092,6 +1095,7 @@ dmu_recv_resume_begin_sync(void *arg, dmu_tx_t *tx)
 	rrw_exit(&ds->ds_bp_rwlock, FTAG);
 
 	drba->drba_cookie->drc_ds = ds;
+	drba->drba_cookie->drc_should_save = B_TRUE;
 
 	spa_history_log_internal_ds(ds, "resume receive", tx, " ");
 }
@@ -2068,7 +2072,8 @@ dmu_recv_cleanup_ds(dmu_recv_cookie_t *drc)
 	ds->ds_objset->os_raw_receive = B_FALSE;
 
 	rrw_enter(&ds->ds_bp_rwlock, RW_READER, FTAG);
-	if (drc->drc_resumable && !BP_IS_HOLE(dsl_dataset_get_blkptr(ds))) {
+	if (drc->drc_resumable && drc->drc_should_save &&
+	    !BP_IS_HOLE(dsl_dataset_get_blkptr(ds))) {
 		rrw_exit(&ds->ds_bp_rwlock, FTAG);
 		dsl_dataset_disown(ds, dsflags, dmu_recv_tag);
 	} else {
@@ -2738,6 +2743,13 @@ dmu_recv_stream(dmu_recv_cookie_t *drc, int cleanup_fd,
 		goto out;
 	}
 
+	/*
+	 * If we failed before this point we will clean up any new resume
+	 * state that was created. Now that we've gotten past the initial
+	 * checks we are ok to retain that resume state.
+	 */
+	drc->drc_should_save = B_TRUE;
+
 	(void) bqueue_init(&rwa->q, zfs_recv_queue_ff,
 	    MAX(zfs_recv_queue_length, 2 * zfs_max_recordsize),
 	    offsetof(struct receive_record_arg, node));
```
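Note that the diffstat is limited to module/, so the declaration of the new drc_should_save flag does not appear in the hunks above. Presumably the companion header change extends dmu_recv_cookie_t roughly as sketched below; this is an assumption, shown only so the hunks read on their own.

```c
/*
 * Assumed shape of the cookie change (the header diff is not part of
 * the module/ diffstat above).  boolean_t is typedef'd here only so
 * the fragment stands alone; the real struct is dmu_recv_cookie_t.
 */
#include <stdint.h>

typedef int boolean_t;

typedef struct dmu_recv_cookie_sketch {
	boolean_t	drc_resumable;		/* receive started with -s */
	boolean_t	drc_should_save;	/* new: keep resume state on failure */
	uint64_t	drc_fromsnapobj;	/* now also set on the resume path */
} dmu_recv_cookie_sketch_t;
```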