path: root/module
* zfs initialize performance enhancements (George Wilson, 2019-01-07, 6 files, -76/+160)

PROBLEM
========
When invoking "zpool initialize" on a pool the command will create a thread to initialize each disk. Unfortunately, it does this serially across many transaction groups, which can result in commands taking a long time to return to the user and may appear hung. The same thing is true when trying to suspend/cancel the operation.

SOLUTION
=========
This change refactors the way we invoke the initialize interface to ensure we can start or stop the initialization in just a few transaction groups.

When stopping or cancelling a vdev initialization, perform it in two phases. First signal each vdev initialization thread that it should exit, then after all threads have been signaled wait for them to exit. On a pool with 40 leaf vdevs this reduces the vdev initialize stop/cancel time from ~10 minutes to under a second. The reason for this is that spa_vdev_initialize() no longer needs to wait on multiple full TXGs per leaf vdev being stopped.

This commit additionally adds some missing checks for the passed "initialize_vdevs" input nvlist. The contents of the user-provided input "initialize_vdevs" nvlist must be validated to ensure all values are uint64s. This is done in zfs_ioc_pool_initialize() in order to keep all of these checks in a single location.

Updated the innvl and outnvl comments to match the formatting used for all other new style ioctls.

Reviewed by: Matt Ahrens <[email protected]>
Reviewed-by: loli10K <[email protected]>
Reviewed-by: Tim Chase <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Wilson <[email protected]>
Closes #8230
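The two-phase stop described above (signal every worker first, then wait for each one) is a general pattern; a minimal userland sketch using POSIX threads, not the actual spa/vdev interfaces, might look like this:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NWORKERS 4

    typedef struct worker {
        pthread_t   tid;
        atomic_bool stop;    /* set in phase one, checked by the worker */
    } worker_t;

    static void *worker_main(void *arg)
    {
        worker_t *w = arg;

        while (!atomic_load(&w->stop))
            usleep(10000);   /* stand-in for one unit of initialize work */
        return NULL;
    }

    int main(void)
    {
        worker_t workers[NWORKERS];

        for (int i = 0; i < NWORKERS; i++) {
            atomic_init(&workers[i].stop, false);
            pthread_create(&workers[i].tid, NULL, worker_main, &workers[i]);
        }

        /* Phase one: tell every worker to stop; do not wait yet. */
        for (int i = 0; i < NWORKERS; i++)
            atomic_store(&workers[i].stop, true);

        /*
         * Phase two: wait for all of them. The total wait is bounded by
         * the slowest single worker instead of the sum over all workers.
         */
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(workers[i].tid, NULL);

        printf("all %d workers stopped\n", NWORKERS);
        return 0;
    }

Waiting inside the signaling loop would serialize the shutdowns again, which is exactly the behavior the commit removes.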
* OpenZFS 9102 - zfs should be able to initialize storage devices (George Wilson, 2019-01-07, 17 files, -26/+1254)

PROBLEM
========
The first access to a block incurs a performance penalty on some platforms (e.g. AWS's EBS, VMware VMDKs). Therefore we recommend that volumes are "thick provisioned", where supported by the platform (VMware). This can create a large delay in getting a new virtual machine up and running (or adding storage to an existing Engine). If the thick provision step is omitted, write performance will be suboptimal until all blocks on the LUN have been written.

SOLUTION
=========
This feature introduces a way to 'initialize' the disks at install or in the background to make sure we don't incur this first read penalty.

When an entire LUN is added to ZFS, we make all space available immediately, and allow ZFS to find unallocated space and zero it out. This works with concurrent writes to arbitrary offsets, ensuring that we don't zero out something that has been (or is in the middle of being) written. This scheme can also be applied to existing pools (affecting only free regions on the vdev).

Detailed design:
- new subcommand: zpool initialize [-cs] <pool> [<vdev> ...]
  - start, suspend, or cancel initialization
- Creates new open-context thread for each vdev
- Thread iterates through all metaslabs in this vdev
- Each metaslab:
  - select a metaslab
  - load the metaslab
  - mark the metaslab as being zeroed
  - walk all free ranges within that metaslab and translate them to ranges on the leaf vdev
  - issue a "zeroing" I/O on the leaf vdev that corresponds to a free range on the metaslab we're working on
  - continue until all free ranges for this metaslab have been "zeroed"
  - reset/unmark the metaslab being zeroed
  - if more metaslabs exist, then repeat above tasks.
  - if no more metaslabs, then we're done.
- progress for the initialization is stored on-disk in the vdev's leaf zap object. The following information is stored:
  - the last offset that has been initialized
  - the state of the initialization process (i.e. active, suspended, or canceled)
  - the start time for the initialization
- progress is reported via the zpool status command and shows information for each of the vdevs that are initializing

Porting notes:
- Added zfs_initialize_value module parameter to set the pattern written by "zpool initialize".
- Added zfs_vdev_{initializing,removal}_{min,max}_active module options.

Authored by: George Wilson <[email protected]>
Reviewed by: John Wren Kennedy <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed by: Pavel Zakharov <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: loli10K <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Approved by: Richard Lowe <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Ported-by: Tim Chase <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9102
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/c3963210eb
Closes #8230
* Add zfs module feature and property compatibility (Brian Behlendorf, 2019-01-03, 1 file, -6/+30)

This change is required to ease the transition when upgrading from 0.7.x to 0.8.x. It allows 0.8.x user space utilities to remain compatible with 0.7.x and older kernel modules.

Reviewed-by: Don Brady <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #8231
* Fix 'zpool remap' freeing race (Brian Behlendorf, 2019-01-02, 1 file, -10/+24)

The dmu_objset_remap_indirects_impl() logic depends on dnode_hold() returning ENOENT for dnodes which will be freed and should be skipped. This behavior can only be relied upon when taking a new hold and while the caller has an open transaction. This ensures that the open txg cannot advance and that a concurrent free will end up in the same txg (which is critical). Relying on an existing hold will not prevent dnode_free() from succeeding.

The solution is to take an additional dnode_hold() after assigning the transaction. This ensures the remap will never dirty the dnode if it was freed while we were waiting in dmu_tx_assign(..., TXG_WAIT).

Randomly set zfs_object_remap_one_indirect_delay_ms in ztest. This increases the likelihood of an operation racing with the remap. Converted from ticks to milliseconds.

Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: Igor Kozhukhov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #8215
* OpenZFS 9284 - arc_reclaim_thread has 2 jobs (Brad Lewis, 2018-12-26, 3 files, -172/+272)

Following the fix for 9018 (Replace kmem_cache_reap_now() with kmem_cache_reap_soon), the arc_reclaim_thread() no longer blocks while reaping. However, the code is still confusing and error-prone, because this thread has two responsibilities. We should instead separate this into two threads each with their own responsibility:

1. keep `arc_size` under `arc_c`, by calling `arc_adjust()`, which improves `arc_is_overflowing()`

2. keep enough free memory in the system, by calling `arc_kmem_reap_now()` plus `arc_shrink()`, which improves `arc_available_memory()`.

Furthermore, we can use the zthr infrastructure to separate the "should we do something" from "do it" parts of the logic, and normalize the start up / shut down of the threads.

Authored by: Brad Lewis <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Serapheim Dimitropoulos <[email protected]>
Reviewed by: Pavel Zakharov <[email protected]>
Reviewed by: Dan Kimmel <[email protected]>
Reviewed by: Paul Dagnelie <[email protected]>
Reviewed by: Dan McDonald <[email protected]>
Reviewed by: Tim Kordas <[email protected]>
Reviewed by: Tim Chase <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Ported-by: Brad Lewis <[email protected]>
Signed-off-by: Brad Lewis <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9284
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/de753e34f9
Closes #8165
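The zthr-style split between a cheap "should we do something" check and the actual "do it" work can be illustrated outside the kernel; a minimal sketch with POSIX threads and invented names (this is not the zthr API):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
    static int  pending;          /* state examined by the check function */
    static bool exiting;

    /* Cheap predicate: "should we do something?" (called with lock held) */
    static bool work_needed(void) { return pending > 0; }

    /* Expensive part: "do it". Runs without holding the lock. */
    static void do_work(void) { usleep(1000); }

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!exiting) {
            while (!work_needed() && !exiting)
                pthread_cond_wait(&cv, &lock);   /* sleep until nudged */
            if (exiting)
                break;
            pending--;
            pthread_mutex_unlock(&lock);
            do_work();                           /* heavy lifting, unlocked */
            pthread_mutex_lock(&lock);
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        pthread_create(&tid, NULL, worker, NULL);

        pthread_mutex_lock(&lock);
        pending = 3;                             /* make the check fire */
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);

        sleep(1);
        pthread_mutex_lock(&lock);
        exiting = true;                          /* normalized shut down */
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
        pthread_join(tid, NULL);
        printf("worker stopped cleanly\n");
        return 0;
    }

Keeping the predicate separate from the work is what lets two such threads (one for arc_adjust-style sizing, one for memory reclaim) share the same start up / shut down shape.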
* Fix zfs_dirty_data_sync_percent documentation (Tom Caputi, 2018-12-18, 1 file, -2/+3)

In dfbe2675 zfs_dirty_data_sync was changed to a new tunable named zfs_dirty_data_sync_percent. Unfortunately, the module parameter documentation in the code was not updated accordingly. This patch simply corrects that.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8212
* Fix zap_update() ASSERT from ztest (Tom Caputi, 2018-12-14, 1 file, -12/+0)

This patch simply removes an invalid assert from the zap_update() function. The ASSERT is invalid because it does not hold the zap lock from the time it fetches the old value to the time it confirms that it is what it should be.

Reviewed by: Matt Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8209
* OpenZFS 9630 - add lzc_rename and lzc_destroy to libzfs_core (Andriy Gapon, 2018-12-14, 1 file, -3/+19)

Porting Notes:
* Additional changes to recv_rename_impl() were required due to encryption code not being merged in OpenZFS yet.
* libzfs_core python bindings (pyzfs) were updated to fully support both lzc_rename() and lzc_destroy()

Authored by: Andriy Gapon <[email protected]>
Reviewed by: Andy Stormont <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Serapheim Dimitropoulos <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Approved by: Dan McDonald <[email protected]>
Ported-by: loli10K <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9630
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/049ba63
Closes #8207
* Fix resilver writes in vdev_indirect_io_start (Tom Caputi, 2018-12-13, 1 file, -8/+15)

This patch addresses an issue found in ztest where resilver write zios that were passed to an indirect vdev would end up being handled as though they were resilver read zios. This caused issues where the zio->io_abd would be both written to and read from at the same time, causing asserts to fail.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8193
* Seeing negative values for wlentime and rlentime (Richard Elling, 2018-12-11, 1 file, -3/+4)

The Linux kstat IO and TIMER types printed their values as signed, even though the counters only increment. As a result, humans looking at the data can be confused when the counters roll over. Note: the recommended use of these values is to monitor the derivative, which doesn't really care about the sign. See the explanations related to non-negative derivatives in the various time-series databases.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Richard Elling <[email protected]>
Closes #8131
Closes #8198
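The "monitor the derivative" advice works because unsigned subtraction between two samples of a monotonically increasing counter stays correct across rollover. A small, self-contained C sketch of that sampling logic (the names are made up, not the kstat fields):

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Rate between two samples of a monotonically increasing 64-bit
     * counter. Unsigned subtraction wraps modulo 2^64, so the delta is
     * still correct even if the raw counter rolled over between samples.
     */
    static double counter_rate(uint64_t prev, uint64_t curr, double interval_s)
    {
        uint64_t delta = curr - prev;   /* well-defined unsigned wraparound */

        return (double)delta / interval_s;
    }

    int main(void)
    {
        /* A counter sampled just before and just after it wrapped. */
        uint64_t prev = UINT64_MAX - 100;
        uint64_t curr = 400;            /* 501 increments later */

        printf("rate = %.1f events/sec\n", counter_rate(prev, curr, 1.0));
        return 0;
    }

Interpreting the same pair of samples as signed values would instead produce a large negative jump, which is the confusion the commit describes.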
* Rename macro ZFS_MINOR due to Lustre conflict (Olaf Faaland, 2018-12-10, 1 file, -3/+3)

Macro ZFS_MINOR, introduced in commit a6cc9756 to record the chosen static minor number for /dev/zfs, conflicts with an existing macro in Lustre. The lustre macro (along with _MAJOR, _PATCH, _FIX) is used to record the zfsonlinux version Lustre is being built against.

Since the Lustre macro came first, and is used in past versions of lustre at least going back to 2.10, it makes sense to rename the macro in ZFS instead of doing so in Lustre which would require backporting the patch.

Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Olaf Faaland <[email protected]>
Closes #8195
* OpenZFS 9962 - zil_commit should omit cache thrash (Prakash Surya, 2018-12-07, 4 files, -69/+197)

As a result of the changes made in 8585, it's possible for an excessive amount of vdev flush commands to be issued under some workloads. Specifically, when the workload consists of mostly async write activity, interspersed with some sync write and/or fsync activity, we can end up issuing more flush commands to the underlying storage than is actually necessary. As a result of these flush commands, the write latency and overall throughput of the pool can be poorly impacted (latency increases, throughput decreases).

Currently, any time an lwb completes, the vdev(s) written to as a result of that lwb will be issued a flush command. The intention is so the data written to that vdev is on stable storage, prior to communicating to any waiting threads that their data is safe on disk. The problem with this scheme is that sometimes an lwb will not have any threads waiting for it to complete. This can occur when there's async activity that gets "converted" to sync requests, as a result of calling the zil_async_to_sync() function via zil_commit_impl(). When this occurs, the current code may issue many lwbs that don't have waiters associated with them, resulting in many flush commands, potentially to the same vdev(s).

For example, given a pool with a single vdev, and a single fsync() call that results in 10 lwbs being written out (e.g. due to other async writes), that will result in 10 flush commands to that single vdev (a flush issued after each lwb write completes). Ideally, we'd only issue a single flush command to that vdev, after all 10 lwb writes completed.

Further, and most important as it pertains to this change, since the flush commands are often very impactful to the performance of the pool's underlying storage, unnecessarily issuing these flush commands can poorly impact the performance of the lwb writes themselves. Thus, we need to avoid issuing flush commands when possible, in order to achieve the best possible performance out of the pool's underlying storage.

This change attempts to address this problem by changing the ZIL's logic to only issue a vdev flush command when it detects an lwb that has a thread waiting for it to complete. When an lwb does not have threads waiting for it, the responsibility of issuing the flush command to the vdevs involved with that lwb's write is passed on to the "next" lwb. It's only once a write for an lwb with waiters completes that we issue the vdev flush command(s). As a result, now when we issue the flush(s), we will issue them to the vdevs involved with that specific lwb's write, but potentially also to vdevs involved with "previous" lwb writes (i.e. if the previous lwbs did not have waiters associated with them).

Thus, in our prior example with 10 lwbs, it's only once the last lwb completes (which will be the lwb containing the waiter for the thread that called fsync) that we issue the vdev flush command; all of the other lwbs will find they have no waiters, so they'll pass the responsibility of the flush to the "next" lwb (until reaching the last lwb that has the waiter).

Porting Notes:
* Reconciled conflicts with the fastwrite feature.

Authored by: Prakash Surya <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: Patrick Mooney <[email protected]>
Reviewed by: Jerry Jelinek <[email protected]>
Approved by: Joshua M. Clulow <[email protected]>
Ported-by: Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9962
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/545190c6
Closes #8188
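The deferral scheme described above amounts to accumulating the set of devices that still owe a flush and only draining that set when a unit of work actually has a waiter. A toy, self-contained C sketch of that bookkeeping (names invented for illustration; this is not the lwb/zio code):

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_DEVS 8

    typedef struct lwb {
        bool has_waiter;             /* someone is blocked on this lwb */
        int  devs[MAX_DEVS];         /* devices this lwb wrote to */
        int  ndevs;
    } lwb_t;

    static int  pending[MAX_DEVS];   /* devices still owed a flush */
    static int  npending;

    static void note_pending(int dev)
    {
        for (int i = 0; i < npending; i++)
            if (pending[i] == dev)
                return;              /* already queued; one flush will do */
        pending[npending++] = dev;
    }

    /* Called when an lwb's write completes. */
    static void lwb_write_done(const lwb_t *lwb)
    {
        for (int i = 0; i < lwb->ndevs; i++)
            note_pending(lwb->devs[i]);

        if (!lwb->has_waiter)
            return;                  /* defer: the next lwb inherits these */

        for (int i = 0; i < npending; i++)
            printf("flush dev %d\n", pending[i]);
        npending = 0;
    }

    int main(void)
    {
        /* Ten lwbs to one device; only the last has a waiter (the fsync). */
        for (int i = 0; i < 10; i++) {
            lwb_t lwb = { .has_waiter = (i == 9), .devs = { 0 }, .ndevs = 1 };

            lwb_write_done(&lwb);    /* prints a single flush, after lwb 9 */
        }
        return 0;
    }

Under the pre-change behavior every completion would flush immediately, producing ten flushes in this example instead of one.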
* OpenZFS 9963 - Separate tunable for disabling ZIL vdev flush (Prakash Surya, 2018-12-07, 2 files, -8/+19)

Porting Notes:
* Add options to zfs-module-parameters(5) man page.
* zfs_nocacheflush move to vdev.c instead of vdev_disk.c, since the latter doesn't get built for user space.

Authored by: Prakash Surya <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: Patrick Mooney <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: George Melikov <[email protected]>
Approved by: Dan McDonald <[email protected]>
Ported-by: Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9963
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/f8fdf68125
Closes #8186
* OpenZFS 9993 - zil writes can get delayed in zio pipeline (George Wilson, 2018-12-07, 1 file, -1/+2)

Authored by: George Wilson <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: George Melikov <[email protected]>
Approved by: Dan McDonald <[email protected]>
Ported-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9993
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/2258ad0b
Closes #8185
* Ensure dsl scan prefetch queue is emptied (Tom Caputi, 2018-12-06, 1 file, -0/+20)

This patch simply ensures that scn->scn_prefetch_queue is emptied before the kernel module is unloaded and when scanning completes.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alek Pinchuk <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8178
* Fix dnode_hold() freeing dnode behavior (Brian Behlendorf, 2018-12-05, 1 file, -2/+4)

Commit 4c5b89f59 refactored dnode_hold() and in the process accidentally introduced a slight change in behavior which was not intended. The required behavior is that once the ZPL, or other consumer, declares its intent to free a dnode then dnode_hold() should immediately start failing. The updated code wouldn't return the failure until after the dnode was actually freed. When DNODE_MUST_BE_ALLOCATED is set it must return ENOENT, and when DNODE_MUST_BE_FREE is set it must return EEXIST.

This issue was uncovered by ztest_remap(), which attempted to remap a freeing object that should have been skipped, as described by the comment in dmu_objset_remap_indirects_impl().

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Tom Caputi <[email protected]>
Reviewed-by: Olaf Faaland <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #8172
* Fix 'zpool list -v' alignment (Brian Behlendorf, 2018-12-04, 1 file, -0/+6)

The verbose output of 'zpool list' was not correctly aligned due to differences in the vdev name lengths. Minimally update the code to correct the alignment using the same strategy employed by 'zpool status'. Missing dashes were added for the empty default columns, and the vdev state is now printed for all vdevs.

Reviewed-by: Tom Caputi <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #7308
Closes #8147
* Fix ztest deadlock in spa_vdev_remove() (Tom Caputi, 2018-12-04, 1 file, -12/+19)

This patch corrects an issue where spa_vdev_remove() would call spa_history_log_internal() while holding the spa config lock. This function may decide to block until the next txg if the current one seems too full. However, since the thread is holding the config lock, the txg sync thread cannot progress and the system ends up deadlocked. This patch simply moves all calls to spa_history_log_internal() outside of the config lock.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8162
* Detect IO errors during device removal (Brian Behlendorf, 2018-12-04, 1 file, -11/+89)

While device removal cannot verify the checksums of individual blocks during device removal, it can reasonably detect hard IO errors from the leaf vdevs. Failure to perform this error checking can result in device removal completing successfully, but moving no data, which will permanently corrupt the pool.

Situation 1: faulted/degraded vdevs

In the configuration shown below, the removal of mirror-0 will permanently corrupt the pool. Device removal will preferentially copy data from 'vdev1 -> vdev3' and from 'vdev2 -> vdev4'. In this case nothing will be copied, since one vdev in each of those groups is unavailable. However, device removal will complete successfully since all IO errors are ignored.

    tank                DEGRADED     0     0     0
      mirror-0          DEGRADED     0     0     0
        /var/tmp/vdev1  FAULTED      0     0     0  external fault
        /var/tmp/vdev2  ONLINE       0     0     0
      mirror-1          DEGRADED     0     0     0
        /var/tmp/vdev3  ONLINE       0     0     0
        /var/tmp/vdev4  FAULTED      0     0     0  external fault

This issue is resolved by updating the source child selection logic to exclude unreadable leaf vdevs. Additionally, unwritable destination child vdevs which can never succeed are skipped to prevent generating a large number of write IO errors.

Situation 2: individual hard IO errors

During removal, if an unexpected hard IO error is encountered when either reading or writing the child vdev, the entire removal operation is cancelled. While it may be possible to reconstruct the data after removal, that cannot be guaranteed. The only strictly safe thing to do is to cancel the removal.

As a future improvement we may want to instead suspend the removal process and allow the damaged region to be retried. But that work is left for another time; hard IO errors during the removal process are expected to be exceptionally rare.

Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Tom Caputi <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #6900
Closes #8161
* Fix consistency of ztest_device_removal_active (Tom Caputi, 2018-11-28, 2 files, -9/+2)

ztest currently uses the boolean flag ztest_device_removal_active to protect some tests that may not run successfully if they occur at the same time as ztest_device_removal(). Unfortunately, in the event that ztest is in the middle of a device removal when it decides to issue a SIGKILL, the device removal will be automatically restarted (without setting the flag) when the pool is re-imported on the next run. This patch corrects this by ensuring that any in-progress removals are completed before running further tests after the re-import.

This patch also makes a few small changes to prevent race conditions involving the creation and destruction of spa->spa_vdev_removal, since this field is not protected by any locks. Some checks that may run concurrently with setting / unsetting this field have been updated to check spa->spa_removing_phys.sr_state instead. The most significant change here is that spa_removal_get_stats() no longer accounts for in-flight work done, since that could result in a NULL pointer dereference.

Reviewed by: Matthew Ahrens <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8105
* zfs_dbgmsg() is not safe from every context (LOLi, 2018-11-28, 1 file, -3/+10)

This commit reverts to using printk() instead of zfs_dbgmsg() to log messages in vdev_disk_error(): this is necessary because the latter can be called from interrupt context where we are not allowed to sleep. Unfortunately zfs_dbgmsg() performs its allocations calling kmalloc() with the KM_SLEEP flag which may result in the following oops:

    BUG: scheduling while atomic: swapper/4/0/0x10000100
    Call Trace:
     <IRQ>  [<0>] dump_stack+0x19/0x1b
     ...
     [<0>] spl_kmem_alloc+0xdf/0x140 [spl] <-- kmem_alloc(size, KM_SLEEP)
     [<0>] __dprintf+0x69/0x150 [zfs]
     [<0>] ? kmem_cache_free+0x1e2/0x200
     [<0>] vdev_disk_error.part.15+0x5f/0x70 [zfs]
     [<0>] vdev_disk_io_flush_completion+0x48/0x70 [zfs]
     [<0>] bio_endio+0x67/0xb0
     [<0>] blk_update_request+0x90/0x360
     ...
     [<0>] scsi_finish_command+0xdc/0x140
     [<0>] scsi_softirq_done+0x132/0x160
     [<0>] blk_done_softirq+0x96/0xc0
     [<0>] __do_softirq+0xf5/0x280
     [<0>] call_softirq+0x1c/0x30
     [<0>] do_softirq+0x65/0xa0
     [<0>] irq_exit+0x105/0x110
     [<0>] do_IRQ+0x56/0xf0
     [<0>] common_interrupt+0x162/0x162
     <EOI>  [<0>] ? cpuidle_enter_state+0x54/0xd0
     [<0>] cpuidle_idle_call+0xde/0x230
     [<0>] arch_cpu_idle+0xe/0xb0
     [<0>] cpu_startup_entry+0x14a/0x1e0
     [<0>] start_secondary+0x1f7/0x270
     [<0>] start_cpu+0x5/0x14

Reviewed-by: Olaf Faaland <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #8137
Closes #8150
* Remove races from scrub / resilver tests (Tom Caputi, 2018-11-28, 2 files, -5/+31)

Currently, several tests in the ZFS Test Suite that attempt to test scrub and resilver behavior occasionally fail. A big reason for this is that these tests use a combination of zinject and zfs_scan_vdev_limit to attempt to slow these operations enough to verify their test commands. This method works most of the time, but provides no guarantees and leads to flaky behavior.

This patch adds a new tunable, zfs_scan_suspend_progress, that ensures that scans make no progress, guaranteeing that tests can be run without racing. This patch also changes zfs_remove_max_bytes_pause to match this new tunable. This provides some consistency between these two similar tunables and ensures that the tunable will not misbehave on 32-bit systems.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8111
* Fix coverity defects: CID 184285 (LOLi, 2018-11-11, 1 file, -2/+1)

CID 184285: Read from pointer after free (USE_AFTER_FREE)

This patch fixes a use-after-free in vdev_config_generate_stats() by moving the kmem_free() call to the end of the function.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #8120
* zed: detect and offline physically removed devices (loli10K, 2018-11-09, 2 files, -0/+5)

This commit adds a new test case to the ZFS Test Suite to verify ZED can detect when a device is physically removed from a running system: the device will be offlined if a spare is not available in the pool. We implement this by using the existing libudev functionality and without relying solely on the FM kernel module capabilities, which have been observed to be unreliable with some kernels.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Don Brady <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #1537
Closes #7926
* Add zpool status -s (slow I/Os) and -p (parseable) (Tony Hutter, 2018-11-08, 4 files, -100/+159)

This patch adds a new slow I/Os (-s) column to zpool status to show the number of VDEV slow I/Os. This is the number of I/Os that didn't complete in zio_slow_io_ms milliseconds. It also adds a new parsable (-p) flag to display exact values.

    NAME        STATE     READ WRITE CKSUM  SLOW
    testpool    ONLINE       0     0     0     -
      mirror-0  ONLINE       0     0     0     -
        loop0   ONLINE       0     0     0    20
        loop1   ONLINE       0     0     0     0

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Signed-off-by: Tony Hutter <[email protected]>
Closes #7756
Closes #6885
* Update zfs_admin_snapshot value (disabled) (George Melikov, 2018-11-08, 1 file, -1/+2)

It's disabled by default; update code and tests to reflect the documentation. Minor cleanup in delegate_common.kshlib.

Reviewed-by: Gregor Kopka <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Melikov <[email protected]>
Closes #7835
Closes #8045
* Fix divide by zero during indirect split damage (Tom Caputi, 2018-11-07, 1 file, -1/+8)

This patch simply ensures that vdev_indirect_splits_damage() cannot hit a divide by zero exception if a split has no children with valid data. The normal reconstruction code path in vdev_indirect_reconstruct_io_done() already has this check.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8086
* Fix dirtying vdev config with RO spa (Tom Caputi, 2018-11-07, 1 file, -2/+3)

This patch simply corrects an issue where vdev_dtl_reassess() could attempt to dirty the vdev config even when the spa was not eligible for writing.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8085
* Replay logs before starting ztest workers (Tom Caputi, 2018-11-07, 1 file, -2/+9)

This patch ensures that logs are replayed on all datasets prior to starting ztest workers. This ensures that the call to vdev_offline() on a log device in ztest_fault_inject() will not fail due to the log device being required for replay.

This patch also fixes a small issue found during testing where spa_keystore_load_wkey() does not check that the dataset specified is an encryption root. This check was present in libzfs, however.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8084
* Fix vdev removal finishing race (Tom Caputi, 2018-11-07, 1 file, -9/+6)

This patch fixes a race condition where the end of vdev_remove_replace_with_indirect(), which holds svr_lock, would race against spa_vdev_removal_destroy(), which destroys the same lock and is called asynchronously via dsl_sync_task_nowait().

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Issue #6900
Closes #8083
* Make vdev_set_deferred_resilver() recursive (Tom Caputi, 2018-11-07, 1 file, -1/+8)

vdev_clear() can call vdev_set_deferred_resilver() with a non-leaf vdev to set up a deferred resilver. However, this function is currently written to only handle leaf vdevs. This bug was introduced with deferred resilvers in 80a91e74. This patch makes this function recursive so that it can find appropriate vdevs to resilver and set vdev_resilver_deferred on them.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Issue #7732
Closes #8082
* ztest: reduce gangblock creation (Brian Behlendorf, 2018-11-05, 1 file, -2/+5)

In order to validate the gang block code, ztest is configured to artificially force a fraction of large blocks to be written as gang blocks. The default setting chosen for this was to write 25% of all blocks 32k or larger using gang blocks.

The confluence of an unrealistically large number of gang blocks, the aggressive fault injection done by ztest, and the split segment reconstruction logic introduced by device removal has resulted in the following type of failure:

    zdb -bccsv -G -d ... exit code 3

Specifically, zdb was unable to open the pool because it was unable to reconstruct a damaged block. Manual investigation of multiple failures clearly showed that the block could be reconstructed. However, due to the large number of damaged segments (>35) it could not be done in the allotted time. Furthermore, the large number of gang blocks was determined to be the reason for the unrealistically large number of damaged segments.

In order to make this situation less likely, this change both increases the forced gang block size to 64k and reduces the frequency to 3% of blocks.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Tom Caputi <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #8080
* Add libzutil for libzfs or libzpool consumers (Don Brady, 2018-11-05, 1 file, -0/+67)

Adds a libzutil for utility functions that are common to libzfs and libzpool consumers (most of what was in libzfs_import.c). This removes the need for utilities to link against both libzpool and libzfs.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Don Brady <[email protected]>
Closes #8050
* bpobj_enqueue_subobj() should copy small subobj's (Matthew Ahrens, 2018-10-31, 1 file, -40/+176)

When we delete a snapshot, we consolidate some bpobj's together because we no longer need to keep their entries in separate buckets. This is done in constant time by including the "sub" bpobj by reference in the parent bpobj.

After many snapshots have been deleted, we may have many sub-bpobj's. Usually, most sub-bpobj's don't contain many BP's. Compared to this small payload, the sub-bpobj is relatively heavyweight since it is an object in the MOS. A common scenario on a long-lived pool is for the vast majority of MOS objects to be small sub-bpobj's.

To improve this situation, when consolidating bpobj's together, bpobj_enqueue_subobj() can copy the contents of small bpobj's into the parent, and then delete the enqueued bpobj, rather than including it by reference. Since this copying is limited in size (to one block), the consolidation is still constant time, though with a larger constant due to reading in the one block of the enqueued bpobj.

This idea and mechanism are similar to how we handle "sub-subobj's". When including a sub-bpobj by reference, if the sub-bpobj itself has less than a block of sub-sub-bpobj's, the list of sub-sub-bpobj's is copied to the parent bpobj's list of sub-bpobj's.

Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Paul Zuchowski <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #8053
Issue #7908
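The copy-small-children-instead-of-referencing-them trade-off generalizes to any tree of containers; a tiny, self-contained C sketch of the decision (hypothetical structures and threshold, not the bpobj on-disk format):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define COPY_THRESHOLD 8   /* stand-in for "fits in one block" */

    typedef struct container {
        long   *entries;
        size_t  nentries;
        struct container **subs;   /* children included by reference */
        size_t  nsubs;
    } container_t;

    static void append_entries(container_t *p, const long *e, size_t n)
    {
        p->entries = realloc(p->entries, (p->nentries + n) * sizeof (long));
        memcpy(p->entries + p->nentries, e, n * sizeof (long));
        p->nentries += n;
    }

    /* Enqueue 'child' into 'parent': copy it if small, reference it if large. */
    static void enqueue_sub(container_t *parent, container_t *child)
    {
        if (child->nentries <= COPY_THRESHOLD && child->nsubs == 0) {
            append_entries(parent, child->entries, child->nentries);
            free(child->entries);          /* the small child object goes away */
            free(child);
            return;
        }
        parent->subs = realloc(parent->subs,
            (parent->nsubs + 1) * sizeof (container_t *));
        parent->subs[parent->nsubs++] = child;   /* constant-time reference */
    }

    int main(void)
    {
        container_t *parent = calloc(1, sizeof (*parent));
        container_t *small  = calloc(1, sizeof (*small));
        long bps[] = { 1, 2, 3 };

        append_entries(small, bps, 3);
        enqueue_sub(parent, small);        /* copied, not referenced */
        printf("parent holds %zu entries, %zu sub-containers\n",
            parent->nentries, parent->nsubs);
        free(parent->entries);
        free(parent);
        return 0;
    }

Because the copy is bounded by the threshold, the enqueue stays constant time while the number of heavyweight child objects stops growing with every deleted snapshot.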
* Fix 2 small bugs with cached dsl_scan_phys_t (Tom Caputi, 2018-10-24, 1 file, -1/+4)

This patch corrects 2 small bugs where scn->scn_phys_cached was not properly updated to match the primary copy when it needed to be. The first resulted in the pause state not being properly updated and the second resulted in the cached version being completely zeroed even if the primary was not.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* Fix ENXIO from spa_ld_verify_logs() in ztest (Tom Caputi, 2018-10-24, 1 file, -3/+9)

This patch fixes a small issue where the zil_check_log_chain() code path would hit an EBUSY error. This would occur when 2 threads attempted to call metaslab_activate() at the same time. In this case, the "loser" would receive an error code which should have been ignored, but was instead floated to the caller. This ended up resulting in an ENXIO being returned from spa_ld_verify_logs().

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* Fix ztest deadman panic with indirect vdev damage (Tom Caputi, 2018-10-24, 1 file, -3/+3)

This patch fixes an issue where ztest's deadman thread would trigger a panic because reconstructing artificially damaged blocks would take too long. This patch simply limits how often ztest inflicts split-block damage and how many segments it can damage when it does.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* Fix issue with scanning dedup blocks as scan ends (Tom Caputi, 2018-10-24, 1 file, -0/+16)

This patch fixes an issue discovered by ztest where dsl_scan_ddt_entry() could add I/Os to the dsl scan queues between when the scan had finished all required work and when the scan was marked as complete. This caused the scan to spin indefinitely without ending.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* Fix lock inversion in txg_sync_thread() (Tom Caputi, 2018-10-24, 1 file, -2/+2)

This patch fixes a lock inversion issue in txg_sync_thread() where the code would attempt to hold the spa config lock as a reader while holding tx->tx_sync_lock. This races with spa_vdev_remove(), which attempts to hold the tx->tx_sync_lock to assign a new tx (via spa_history_log_internal()) while holding the spa config lock as a writer.

Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Co-authored-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* Fix dbgmsg printing in ztest and zdb (Tom Caputi, 2018-10-24, 1 file, -12/+34)

This patch resolves a problem where the -G option in both zdb and ztest would cause the code to call __dprintf() to print zfs_dbgmsg output. This function was not properly wired to add messages to the dbgmsg log as it is in userspace, and so the messages were simply dropped. This patch also tries to add some degree of distinction to dprintf() (which now prints directly to stdout) and zfs_dbgmsg() (which adds messages to an internal list that can be dumped with zfs_dbgmsg_print()).

In addition, this patch corrects an issue where ztest used a global variable to decide whether to dump the dbgmsg buffer on a crash. This did not work because ztest spins up more instances of itself using execv(), which did not copy the global variable to the new process. The option has been moved to the ztest_shared_opts_t which already exists for interprocess communication.

This patch also changes zfs_dbgmsg_print() to use write() calls instead of printf() so that it will not fail when used in a signal handler.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
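The write()-instead-of-printf() change matters because printf() is not async-signal-safe while write() is. A minimal, self-contained C example of dumping a message from a signal handler the safe way (generic code, not the zfs_dbgmsg_print() implementation):

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Signal handler: only async-signal-safe calls are allowed here, so
     * the message is emitted with write(2) rather than printf(3), which
     * may take internal locks or allocate and can deadlock or corrupt
     * state if the signal interrupted another printf() call.
     */
    static void crash_handler(int sig)
    {
        const char msg[] = "dumping debug buffer\n";

        (void) sig;
        (void) write(STDERR_FILENO, msg, sizeof (msg) - 1);
        _exit(1);              /* _exit() is also async-signal-safe */
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof (sa));
        sa.sa_handler = crash_handler;
        sigaction(SIGSEGV, &sa, NULL);

        raise(SIGSEGV);        /* demonstrate the handler */
        return 0;
    }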
* Fix ASSERT in zil_create() during ztest (Tom Caputi, 2018-10-24, 1 file, -1/+2)

This patch corrects an ASSERT in zil_create() that will only be true if the call to zio_alloc_zil() does not fail.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* Fix random ztest_deadman_thread failures (Tom Caputi, 2018-10-24, 1 file, -1/+1)

The zloop test has been failing in buildbot for the last few weeks with various failures in ztest_deadman_thread(). This is due to the fact that this thread is not stopped when performing pool import / export tests as it should be. This patch simply corrects this.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #8010
* OpenZFS 9688 - aggsum_fini leaks memory (Paul Dagnelie, 2018-10-19, 2 files, -2/+7)

Porting Notes:
- Most of these fixes were applied in the original 37fb3e43 commit when this change was ported for Linux.

Authored by: Paul Dagnelie <[email protected]>
Reviewed by: Serapheim Dimitropoulos <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Prashanth Sreenivasa <[email protected]>
Reviewed by: Jorgen Lundman <[email protected]>
Reviewed by: Igor Kozhukhov <[email protected]>
Reviewed by: George Melikov <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Ported-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9688
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/29bf2d68be
Closes #8042
* OpenZFS 9682 - page fault in dsl_async_clone_destroy() while opening pool (Serapheim Dimitropoulos, 2018-10-19, 1 file, -3/+9)

Authored by: Serapheim Dimitropoulos <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Sara Hartse <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Ported-by: George Melikov <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9682
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ade2c82828
Closes #8037
* OpenZFS 9690 - metaslab of vdev with no space maps was flushed during removal (Serapheim Dimitropoulos, 2018-10-19, 1 file, -12/+10)

Authored by: Serapheim Dimitropoulos <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Brad Lewis <[email protected]>
Reviewed by: George Melikov <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Ported-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9690
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/4e75ba6826
Closes #8039
* OpenZFS 9681 - ztest failure in spa_history_log_internal due to spa_rename() (Matthew Ahrens, 2018-10-19, 1 file, -54/+0)

Authored by: Matthew Ahrens <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: Serapheim Dimitropoulos <[email protected]>
Reviewed by: George Melikov <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Ported-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/9681
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/6aee0ad7
Closes #8041
* Defer new resilvers until the current one ends (Tom Caputi, 2018-10-18, 5 files, -10/+155)

Currently, if a resilver is triggered for any reason while an existing one is running, zfs will immediately restart the existing resilver from the beginning to include the new drive. This causes problems for system administrators when a drive fails while another is already resilvering. In this case, the optimal thing to do to reduce risk of data loss is to wait for the current resilver to end before immediately replacing the second failed drive, which allows the system to operate with two incomplete drives for the minimum amount of time.

This patch introduces the resilver_defer feature that essentially does this for the admin without forcing them to wait and monitor the resilver manually. The change requires an on-disk feature since we must mark drives that are part of a deferred resilver in the vdev config to ensure that we do not assume they are done resilvering when an existing resilver completes.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: @mmaybee
Signed-off-by: Tom Caputi <[email protected]>
Closes #7732
* Linux does not HAVE_SMB_SHARE (Matthew Ahrens, 2018-10-17, 2 files, -253/+0)

Since Linux does not have an in-kernel SMB server, we don't need the code to manage it.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Richard Elling <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #8032
* Linux does not HAVE_DNLC (Matthew Ahrens, 2018-10-17, 2 files, -65/+0)

Since Linux does not have the Directory Name Lookup Cache, we don't need the code to manage it.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tim Chase <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Richard Elling <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #8031
* Add types to featureflags in zfs (Paul Dagnelie, 2018-10-16, 8 files, -94/+253)

The boolean featureflags in use thus far in ZFS are extremely useful, but because they take advantage of the zap layer, more interesting data than just a true/false value can be stored in a featureflag. In redacted send/receive, this is used to store the list of redaction snapshots for a redacted dataset.

This change adds the ability for ZFS to store types other than a boolean in a featureflag. The only other implemented type is a uint64_t array. It also modifies the interfaces around dataset features to accommodate the new capabilities, and adds a few new functions to increase encapsulation. This functionality will be used by the Redacted Send/Receive feature.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes #7981
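The idea of widening a boolean flag into a typed value can be sketched independently of the ZAP layer; a hypothetical tagged representation in C (invented names, not the actual dataset-feature interfaces) might look like:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef enum { FEATURE_BOOLEAN, FEATURE_UINT64_ARRAY } feature_type_t;

    typedef struct feature_value {
        feature_type_t type;
        union {
            uint64_t boolean;          /* classic on/off featureflag */
            struct {
                uint64_t *vals;
                size_t    count;
            } array;                   /* e.g. redaction snapshot list */
        } u;
    } feature_value_t;

    /* Replace a boolean value with a uint64_t array value. */
    static void feature_set_uint64_array(feature_value_t *f,
        const uint64_t *vals, size_t count)
    {
        f->type = FEATURE_UINT64_ARRAY;
        f->u.array.vals = malloc(count * sizeof (uint64_t));
        memcpy(f->u.array.vals, vals, count * sizeof (uint64_t));
        f->u.array.count = count;
    }

    int main(void)
    {
        uint64_t snaps[] = { 101, 205, 307 };
        feature_value_t f = { .type = FEATURE_BOOLEAN, .u.boolean = 1 };

        feature_set_uint64_array(&f, snaps, 3);
        for (size_t i = 0; i < f.u.array.count; i++)
            printf("snapshot obj %llu\n",
                (unsigned long long)f.u.array.vals[i]);
        free(f.u.array.vals);
        return 0;
    }

The type tag is what lets existing boolean consumers keep working while new consumers, such as redacted send/receive, attach richer per-feature data.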