* Bugfix/fix uio partial copies (Fabio Scaccabarozzi, 2020-04-01, 2 files, -8/+26)

  In zfs_write(), the loop continues to the next iteration without
  accounting for partial copies occurring in uiomove_iov when
  copy_from_user/__copy_from_user_inatomic return a non-zero status.
  This results in "zfs: accessing past end of object..." in the
  kernel log, and the write failing.

  Account for partial copies and update the uio struct before
  returning EFAULT, and leave a comment explaining why this is done.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: ilbsmart <[email protected]>
  Signed-off-by: Fabio Scaccabarozzi <[email protected]>
  Closes #8673
  Closes #10148

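  A minimal sketch of the accounting pattern this fix describes, assuming
  the Linux copy_from_user() convention (it returns the number of bytes
  NOT copied) and the ZFS uio_t fields uio_resid/uio_loffset; this is an
  illustration of the idea, not the actual patch:

  ```
  static int
  uiomove_iov_sketch(void *dst, size_t cnt, uio_t *uio)
  {
      /* copy_from_user() returns the number of bytes NOT copied */
      size_t not_copied = copy_from_user(dst, uio->uio_iov->iov_base, cnt);

      if (not_copied != 0) {
          /*
           * Partial copy: credit the bytes that did transfer so the
           * caller's view of offset and residual stays consistent,
           * then fail. Without this, zfs_write() continues from a
           * stale offset and trips "accessing past end of object".
           */
          size_t done = cnt - not_copied;
          uio->uio_resid -= done;
          uio->uio_loffset += done;
          return (EFAULT);
      }
      uio->uio_resid -= cnt;
      uio->uio_loffset += cnt;
      return (0);
  }
  ```
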
* Improve ZVOL sync write performance by using a taskq (Matthew Ahrens, 2020-03-31, 1 file, -44/+78)

  == Summary ==

  Prior to this change, sync writes to a zvol are processed serially.
  This commit makes zvols process concurrently outstanding sync writes
  in parallel, similar to how reads and async writes are already
  handled. The result is that the throughput of sync writes is tripled.

  == Background ==

  When a write comes in for a zvol (e.g. over iscsi), it is processed
  by calling `zvol_request()` to initiate the operation. ZFS is
  expected to later call `BIO_END_IO()` when the operation completes
  (possibly from a different thread). There are a limited number of
  threads that are available to call `zvol_request()`: one per iscsi
  client (unless using MC/S). Therefore, to ensure good performance,
  the latency of `zvol_request()` is important, so that many i/o
  operations to the zvol can be processed concurrently. In other words,
  if the client has multiple outstanding requests to the zvol, the zvol
  should have multiple outstanding requests to the storage hardware
  (i.e. issue multiple concurrent `zio_t`'s).

  For reads, and async writes (i.e. writes which can be acknowledged
  before the data reaches stable storage), `zvol_request()` achieves
  low latency by dispatching the bulk of the work (including waiting
  for i/o to disk) to a taskq. The taskq callback (`zvol_read()` or
  `zvol_write()`) blocks while waiting for the i/o to disk to complete.
  The `zvol_taskq` has 32 threads (by default), so we can have up to 32
  concurrent i/os to disk in service of requests to zvols.

  However, for sync writes (i.e. writes which must be persisted to
  stable storage before they can be acknowledged, by calling
  `zil_commit()`), `zvol_request()` does not use `zvol_taskq`. Instead
  it blocks while waiting for the ZIL write to disk to complete. This
  has the effect of serializing sync writes to each zvol. In other
  words, each zvol will only process one sync write at a time, waiting
  for it to be written to the ZIL before accepting the next request.
  The same issue applies to FLUSH operations, for which
  `zvol_request()` calls `zil_commit()` directly.

  == Description of change ==

  This commit changes `zvol_request()` to use
  `taskq_dispatch_ent(zvol_taskq)` for sync writes and FLUSH
  operations. Therefore we can have up to 32 threads (the taskq
  threads) simultaneously calling `zil_commit()`, for a theoretical
  performance improvement of up to 32x.

  To avoid the locking issue described in the comment (which this
  commit removes), we acquire the rangelock from the taskq callback
  (e.g. `zvol_write()`) rather than from `zvol_request()`. This applies
  to all writes (sync and async), reads, and discard operations. This
  means that multiple simultaneously-outstanding i/o's which access the
  same block can complete in any order. This was previously thought to
  be incorrect, but a review of the block device interface requirements
  revealed that this is fine: the order is inherently not defined. The
  shorter hold time of the rangelock should also give a slight
  performance improvement.

  For an additional slight performance improvement, we use
  `taskq_dispatch_ent()` instead of `taskq_dispatch()`, which avoids a
  `kmem_alloc()` and eliminates a failure mode. This applies to all
  writes (sync and async), reads, and discard operations.

  == Performance results ==

  We used a zvol as an iscsi target (server) for a Windows initiator
  (client), with a single connection (the default, i.e. not MC/S). We
  used `diskspd` to generate a workload with 4 threads, doing 1MB
  writes to random offsets in the zvol. Without this change we get
  231MB/s, and with the change we get 728MB/s, which is 3.15x the
  original performance. We ran a real-world workload, restoring a MSSQL
  database, and saw throughput 2.5x the original. We saw more modest
  performance wins (typically 1.5x-2x) when using MC/S with 4
  connections, and with a different number of client threads (1, 8, 32).

  Reviewed-by: Tony Nguyen <[email protected]>
  Reviewed-by: Pavel Zakharov <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #10163

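  A minimal sketch of the dispatch pattern described above, assuming the
  illustrative names zv_request_t and zvol_commit_task; taskq_init_ent(),
  taskq_dispatch_ent(), zil_commit(), and the BIO_END_IO() macro are
  existing interfaces, but this is not the patch itself:

  ```
  typedef struct zv_request {
      zvol_state_t  *zv;
      struct bio    *bio;
      taskq_ent_t   ent;   /* embedded entry: no kmem_alloc() needed */
  } zv_request_t;

  static void
  zvol_commit_task(void *arg)
  {
      zv_request_t *zvr = arg;

      /* The blocking ZIL write now happens on a taskq thread ... */
      zil_commit(zvr->zv->zv_zilog, ZVOL_OBJ);
      BIO_END_IO(zvr->bio, 0);   /* ... as does the completion */
  }

  static void
  zvol_request_sketch(zv_request_t *zvr)
  {
      /*
       * Instead of calling zil_commit() here and serializing sync
       * writes, hand the request to one of zvol_taskq's 32 threads.
       */
      taskq_init_ent(&zvr->ent);
      taskq_dispatch_ent(zvol_taskq, zvol_commit_task, zvr, 0, &zvr->ent);
  }
  ```
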
* Reset l2ad_hand and l2ad_first in l2arc_evict (George Amanakis, 2020-03-31, 6 files, -27/+181)

  Increasing l2arc_write_size or l2arc_write_boost can result in
  l2arc_write_buffers() not having enough space to perform its writes
  and panicking in zio_write_phys(). Instead of resetting l2ad_hand to
  l2ad_start at the end of l2arc_write_buffers(), which does not take
  into account a possible user-mediated increase of l2arc_write_max, we
  do this in l2arc_evict(), right after l2arc_write_size() has run. If
  there is not enough space to evict (i.e. we will exceed l2ad_end) we
  evict to the end of the device, reset l2ad_hand to l2ad_start, set
  l2ad_first to 0, and iterate l2arc_evict(). We avoid infinite
  iteration of l2arc_evict() by making sure in l2arc_write_size() that
  l2ad_start + size does not exceed l2ad_end.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Ryan Moeller <[email protected]>
  Signed-off-by: George Amanakis <[email protected]>
  Closes #10154

* ZTS: Skip udev actions in zvol_misc when not Linux (Ryan Moeller, 2020-03-31, 1 file, -3/+3)

  udev is only used on Linux. Skip udev_wait and udev_cleanup when not
  on Linux.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10165

* Let default arc_c_max be platform dependent (Ryan Moeller, 2020-03-27, 4 files, -8/+22)

  Linux changed the default max ARC size to 1/2 of physical memory to
  deal with shortcomings of the Linux SLUB allocator. Other platforms
  do not require the same logic. Implement an arc_default_max()
  function to determine a default max ARC size in platform code.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10155

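  An illustrative sketch of such a platform hook; the exact signature
  and the non-Linux policy shown here are assumptions, not the shipped
  code:

  ```
  uint64_t
  arc_default_max(uint64_t min, uint64_t allmem)
  {
  #ifdef __linux__
      /* Cap at 1/2 of memory to sidestep SLUB fragmentation issues. */
      return (MAX(allmem / 2, min));
  #else
      /* Other platforms can afford a larger default, e.g. all but 1GB. */
      return (MAX(allmem - (1ULL << 30), min));
  #endif
  }
  ```
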
* Compile cityhash code into libzfs (Matthew Ahrens, 2020-03-27, 10 files, -6/+11)

  Make the cityhash code compile into libzfs, in preparation for the
  new "zstream" command.

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #10152

* ZTS: Wait for free space between quota tests (Ryan Moeller, 2020-03-26, 5 files, -7/+15)

  And in removal tests, sync the specific pool we are waiting on.

  Reviewed-by: John Kennedy <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10146

* Remove checks for null out value in encryption paths (Dirkjan Bussink, 2020-03-26, 11 files, -219/+106)

  These paths are never exercised, as the parameters given are always
  different cipher and plaintext `crypto_data_t` pointers.

  Reviewed-by: Richard Laager <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Attila Fueloep <[email protected]>
  Signed-off-by: Dirkjan Bussink <[email protected]>
  Closes #9661
  Closes #10015

* zfs_get: change time format string from %k to %H (alex, 2020-03-26, 1 file, -1/+1)

  This change fixes the bug reported in issue #10090, where snapshots
  created between midnight and 1 AM were missing a padded zero in the
  creation timestamp output. The leading zero was missing because the
  time format string used `%k`, which formats the hour as a decimal
  number from 0 to 23 with single digits preceded by a blank. It is
  fixed by changing the format to `%H`, which formats the hour as
  00-23.

  The difference in output is as below:

  ```
  -Thu Mar 26 0:39 2020
  +Thu Mar 26 00:39 2020
  ```

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Igor Kozhukhov <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Signed-off-by: Alex John <[email protected]>
  Closes #10090
  Closes #10153

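  A standalone demonstration of the difference; note that `%k` is a
  common GNU/BSD extension, while `%H` is standard C:

  ```
  #include <stdio.h>
  #include <time.h>

  int
  main(void)
  {
      /* March 26 2020, hour 0 (midnight) */
      struct tm t = { .tm_year = 120, .tm_mon = 2, .tm_mday = 26 };
      char k[32], h[32];

      strftime(k, sizeof (k), "%k:%M", &t);  /* blank-padded: " 0:00" */
      strftime(h, sizeof (h), "%H:%M", &t);  /* zero-padded:  "00:00" */
      printf("%%k gives '%s', %%H gives '%s'\n", k, h);
      return (0);
  }
  ```
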
* Deprecate deduplicated send streams (Matthew Ahrens, 2020-03-18, 5 files, -0/+46)

  Dedup send can only deduplicate over the set of blocks in the send
  command being invoked, and it does not take advantage of the dedup
  table to do so. This is a very common misconception among not only
  users, but developers, and makes the feature seem more useful than it
  is. As a result, many users are using the feature but not getting any
  benefit from it.

  Dedup send requires a nontrivial expenditure of memory and CPU to
  operate, especially if the dataset(s) being sent is (are) not already
  using a dedup-strength checksum.

  Dedup send adds developer burden. It expands the test matrix when
  developing new features, causing bugs in released code, and delaying
  development efforts by forcing more testing to be done.

  As a result, we are deprecating the use of `zfs send -D` and
  receiving of such streams. This change adds a warning to the man
  page, and also prints the warning whenever dedup send or receive are
  used.

  In a future release, we plan to:
  1. remove the kernel code for generating deduplicated streams
  2. make `zfs send -D` generate regular, non-deduplicated streams
  3. remove the kernel code for receiving deduplicated streams
  4. make `zfs receive` of deduplicated streams process them in
     userland to "re-duplicate" them, so that they can still be
     received.

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #7887
  Closes #10117

* Avoid core dump on invalid redaction bookmark (Ryan Moeller, 2020-03-18, 4 files, -25/+51)

  libzfs aborts and dumps core on EINVAL from the kernel when trying to
  do a redacted send with a bookmark that is not a redaction bookmark.

  Move redacted bookmark validation into libzfs. Check if the bookmark
  given for redactions is actually a redaction bookmark. Print an error
  message and exit gracefully if it is not. Don't abort on EINVAL in
  zfs_send_one.

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10138

* Changed decimals to integers in the arcstat script (Avatat, 2020-03-18, 1 file, -13/+4)

  Changed interval value type from decimal to integer, because of a
  deprecation warning in Python 3.8 and above. Also changed kstat
  values type from decimal to integer, because all the values are
  integers. Fixed behavior of arcstat when run without args.

  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Ryan Moeller <[email protected]>
  Signed-off-by: Bartosz Zieba <[email protected]>
  Closes #10132
  Closes #10142

* Fix zfs_rmnode() unlink / rollback issue (Brian Behlendorf, 2020-03-18, 1 file, -3/+9)

  If a rollback has occurred while a file is open and unlinked, then
  when the file is closed post-rollback it will not exist in the rolled
  back version of the unlinked object. Therefore, the call to
  zap_remove_int() may correctly return ENOENT and should be allowed.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #6812
  Closes #9739

* Fix cstyle warnings (Brian Behlendorf, 2020-03-17, 2 files, -2/+3)

  Fix minor cstyle warnings accidentally introduced by 7145123b.

  Reviewed-by: Paul Dagnelie <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #10143

* Separate warning for incomplete and corrupt streams (Paul Dagnelie, 2020-03-17, 8 files, -11/+24)

  This change adds a separate return code to zfs_ioc_recv that is used
  for incomplete streams, in addition to the existing return code for
  streams that contain corruption.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Matthew Ahrens <[email protected]>
  Signed-off-by: Paul Dagnelie <[email protected]>
  Closes #10122

* ICP: gcm-avx: Support architectures lacking the MOVBE instruction (Attila Fülöp, 2020-03-17, 3 files, -18/+389)

  There are a couple of x86_64 architectures which support all the
  features needed to make the accelerated GCM implementation work
  except the MOVBE instruction: mainly Intel Sandy Bridge and Ivy
  Bridge, and AMD Bulldozer, Piledriver, and Steamroller. By using
  MOVBE only if available and replacing it with a MOV followed by a
  BSWAP if not, those architectures now benefit from the new GCM
  routines, and performance is considerably better compared to the
  original implementation.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Adam D. Moss <[email protected]>
  Signed-off-by: Attila Fülöp <[email protected]>
  Followup #9749
  Closes #10029

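  The substitution, expressed in C for clarity (the actual change is in
  assembly): a MOVBE load is equivalent to a plain load (MOV) followed
  by a byte swap (BSWAP), and `__builtin_bswap64()` compiles to BSWAP
  on x86_64.

  ```
  #include <stdint.h>
  #include <string.h>

  static inline uint64_t
  load_be64(const void *p)
  {
      uint64_t v;

      memcpy(&v, p, sizeof (v));      /* MOV: unaligned 64-bit load */
      return (__builtin_bswap64(v));  /* BSWAP: reverse byte order  */
  }
  ```
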
* Add option for forcibly unmounting dataset while receiving snapshot (Mariusz Zaborski, 2020-03-17, 7 files, -14/+113)

  Currently, when the dataset is in use we can't receive snapshots:

      zfs send test/1@asd | zfs recv -FM test/2
      cannot unmount '/test/2': Device busy

  This commit adds the option '-M', which attempts to forcibly unmount
  the dataset. Thanks to this we can enforce receiving snapshots in a
  single step.

  Note that this functionality is not supported on Linux because the
  VFS will prevent actively mounted filesystems from being unmounted,
  even with the force option. This is the intended VFS behavior.

  Test cases were added to verify the expected behavior based on the
  platform.

  Discussed-with: Pawel Jakub Dawidek <[email protected]>
  Reviewed-by: Ryan Moeller <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Allan Jude <[email protected]>
  External-issue: https://reviews.freebsd.org/D22306
  Closes #9904

* ZTS: Use default_cleanup_noexit where needed (Ryan Moeller, 2020-03-17, 3 files, -3/+7)

  And add log_pass appropriately.

  Reviewed-by: John Kennedy <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10136

* Exit status 256+signum is actually baked in to ksh (Ryan Moeller, 2020-03-17, 1 file, -9/+3)

  While #10121 did fix the signal numbers for FreeBSD/Darwin, it
  incorrectly changed the expected encoding of exit status for commands
  that exited on a signal. The encoding 256+signum is a feature of the
  shell. Only the signal numbers themselves are platform-dependent.

  Always use the encoding 256+signum when checking exit status for
  signal exits.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10137

* libzfs: Fix bounds checks for float parsing (Ryan Moeller, 2020-03-16, 2 files, -2/+12)

  UINT64_MAX is not exactly representable as a double. The closest
  representation is UINT64_MAX + 1, so we can use a >= comparison
  instead of > for the bounds check.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10127

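  A standalone check of the rounding claim: UINT64_MAX (2^64 - 1) has
  no exact double representation, and the cast rounds up to 2^64.

  ```
  #include <stdint.h>
  #include <stdio.h>

  int
  main(void)
  {
      double limit = (double)UINT64_MAX;

      /* Prints 1: the cast yields exactly 2^64, i.e. UINT64_MAX + 1. */
      printf("%d\n", limit == 18446744073709551616.0);
      /*
       * Hence a parsed double d fits in uint64_t only when d < limit,
       * so the bounds check must reject d >= limit, not just d > limit.
       */
      return (0);
  }
  ```
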
* Improve zfs receive performance by batching writes (Matthew Ahrens, 2020-03-16, 2 files, -51/+182)

  For each WRITE record in the stream, `zfs receive` creates a DMU
  transaction (`dmu_tx_create()`) and writes this block's data into the
  object. If per-block overheads (as opposed to per-byte overheads)
  dominate performance (as is often the case with small recordsize),
  the per-dmu-transaction overheads can be significant. For example, in
  some workloads the `receive_writer` thread is 100% on CPU, and more
  than half of its CPU time is in these per-tx routines (e.g.
  dmu_tx_hold_write, dmu_tx_assign, dmu_tx_commit).

  To improve performance of `zfs receive`, this commit batches WRITE
  records which are to nearby offsets of the same object, and uses one
  DMU transaction to write them all. By default the batch size is 1MB,
  which for recordsize=8K reduces the number of DMU transactions by
  128x for full send streams (incrementals will depend on how "clumpy"
  the changed blocks are).

  This commit improves the performance of `dd if=stream | zfs recv`
  from 78,800 blocks/sec to 98,100 blocks/sec (25% improvement).

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #10099

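  A hedged sketch of the batching idea using the real DMU transaction
  API; wr_record_t and the list traversal are illustrative, not the
  patch itself:

  ```
  typedef struct wr_record {
      uint64_t          wr_offset;
      uint64_t          wr_length;
      void              *wr_data;
      struct wr_record  *wr_next;
  } wr_record_t;

  static int
  flush_write_batch_sketch(objset_t *os, uint64_t object, wr_record_t *head,
      uint64_t batch_off, uint64_t batch_len)
  {
      dmu_tx_t *tx = dmu_tx_create(os);
      int err;

      /* One hold and one assign for the whole (<= 1MB) batch ... */
      dmu_tx_hold_write(tx, object, batch_off, batch_len);
      err = dmu_tx_assign(tx, TXG_WAIT);
      if (err != 0) {
          dmu_tx_abort(tx);
          return (err);
      }
      /* ... instead of one transaction per WRITE record. */
      for (wr_record_t *wr = head; wr != NULL; wr = wr->wr_next)
          dmu_write(os, object, wr->wr_offset, wr->wr_length,
              wr->wr_data, tx);
      dmu_tx_commit(tx);
      return (0);
  }
  ```
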
* Remove CI builder customization from TEST (Brian Behlendorf, 2020-03-16, 1 file, -61/+0)

  The default options are reasonable for all of the CI builders.

  * TEST_XFSTESTS_SKIP=yes - This is already the default.
  * TEST_ZTEST_TIMEOUT=3600 - Increased ztest run time only increases
    code coverage by a small degree. Default 900s runs are sufficient.
  * Disabling certain tests on 32-bit builders is no longer needed.

  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Ryan Moeller <[email protected]>
  Reviewed-by: Kjeld Schouten <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #10129

* ZTS: Update flaky tests in zts-report (Ryan Moeller, 2020-03-13, 1 file, -7/+20)

  Some tests which pass on FreeBSD but fail on Linux had been put in
  the "maybe" set. Move these back to "known" under an "if Linux" check
  so the expected outcome is clear.

  Add some tests that have been found to be flaky on FreeBSD stable/12
  to the "maybe" set.

  Reviewed-by: John Kennedy <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10120

* dmu_objset_from_ds must be called with dp_config_rwlock held (Matthew Ahrens, 2020-03-12, 7 files, -115/+101)

  The normal lock order is that the dp_config_rwlock must be held
  before the ds_opening_lock. For example, dmu_objset_hold() does this.
  However, dmu_objset_open_impl() is called with the ds_opening_lock
  held, and if the dp_config_rwlock is not already held, it will
  attempt to acquire it. This may lead to deadlock, since the lock
  order is reversed.

  Looking at all the callers of dmu_objset_open_impl() (which is
  principally the callers of dmu_objset_from_ds()), almost all callers
  already have the dp_config_rwlock. However, there are a few places in
  the send and receive code paths that do not. For example:
  dsl_crypto_populate_key_nvlist, send_cb, dmu_recv_stream,
  receive_write_byref, redact_traverse_thread.

  This commit resolves the problem by requiring all callers of
  dmu_objset_from_ds() to hold the dp_config_rwlock. In most cases, the
  code has been restructured such that we call dmu_objset_from_ds()
  earlier on in the send and receive processes, when we already have
  the dp_config_rwlock, and save the objset_t until we need it in the
  middle of the send or receive (similar to what we already do with the
  dsl_dataset_t). Thus we do not need to acquire the dp_config_rwlock
  in many new places. I also cleaned up code in dmu_redact_snap() and
  send_traverse_thread().

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Paul Zuchowski <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #9662
  Closes #10115

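  A sketch of the calling convention this change enforces, using the
  existing dsl_pool_config_enter()/_exit() wrappers around
  dp_config_rwlock (the surrounding function is illustrative context,
  not the patch):

  ```
  static int
  hold_objset_sketch(dsl_dataset_t *ds, objset_t **osp)
  {
      dsl_pool_t *dp = ds->ds_dir->dd_pool;
      int err;

      dsl_pool_config_enter(dp, FTAG);    /* dp_config_rwlock first   */
      err = dmu_objset_from_ds(ds, osp);  /* may take ds_opening_lock */
      dsl_pool_config_exit(dp, FTAG);
      return (err);
  }
  ```
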
* Fix infinite scan on a pool with only special allocations (Alexander Motin, 2020-03-12, 1 file, -3/+6)

  Attempting to run scrub or resilver on a new pool containing only
  special allocations (special vdev added on creation) caused an
  infinite loop because dsl_scan_should_clear() limits memory usage to
  5% of pool size, which it calculated accounting for only the normal
  allocation class. Adding the special class, and just in case the
  dedup class, fixes the issue.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Alexander Motin <[email protected]>
  Sponsored-By: iXsystems, Inc.
  Closes #10106
  Closes #8694

* ZTS: Use correct signal numbers for status checks (Ryan Moeller, 2020-03-12, 1 file, -6/+29)

  Different operating systems encode exit status in different ways. The
  logapi shell library assumes the Solaris meaning of exit codes, which
  is not correct on other platforms.

  Define the needed constants according to the platform we are running
  on and use those to decode process exit status.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10121

* ZTS: Test boundary conditions in alloc_class_012 (Ryan Moeller, 2020-03-12, 2 files, -39/+57)

  Issue #9142 describes an error in the checks for device removal that
  can prevent removal of special allocation class vdevs in some
  situations.

  Enhance alloc_class/alloc_class_012_pos to check situations where
  this bug occurs. Update zts-report with knowledge of issue #9142.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10116
  Issue #9142

* ZTS: Wait for free space between write_dirs tests (Ryan Moeller, 2020-03-12, 4 files, -1/+8)

  Cleanup for write_dirs involves destroying a dataset filling a pool
  and then recreating the dataset for the next test. Due to the
  asynchronous nature of free space accounting, recreating the dataset
  can fail for lack of space, causing problems for the next test.

  Add wait_freeing $TESTPOOL to wait for the space to be freed and then
  sync_pool $TESTPOOL to update the space accounting before attempting
  to recreate the test filesystem.

  Only use a single disk to create the pool. Make it a small file so it
  does not take too long to fill.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10112

* Prevent race condition in dnode_dest (#10101) (John Poduska, 2020-03-12, 3 files, -6/+15)

  dnode_special_close() waits for the refcount of dn_holds to go to
  zero without holding the dn_mtx. dnode_rele_and_unlock() does the
  final remove to dn_holds with dn_mtx being held:

      refs = zfs_refcount_remove(&dn->dn_holds, tag);
      mutex_exit(&dn->dn_mtx);

  So, there is a race condition after the remove until dn_mtx is
  dropped. During that time, dnode_destroy() can get called, which ends
  up in dnode_dest() calling mutex_destroy() and a panic since the lock
  is still held.

  This change adds a condvar to wait for the final
  dnode_rele_and_unlock() to release the dn_mtx before calling
  dnode_destroy().

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Matthew Ahrens <[email protected]>
  Signed-off-by: John Poduska <[email protected]>
  Closes #7814
  Closes #10101

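  A generic sketch of the condvar handshake; the added condvar is shown
  here as dn_nodnholds, but treat the field name as illustrative:

  ```
  /* Final release, in dnode_rele_and_unlock() (dn_mtx already held): */
  refs = zfs_refcount_remove(&dn->dn_holds, tag);
  if (refs == 0)
      cv_broadcast(&dn->dn_nodnholds);  /* signal while under dn_mtx */
  mutex_exit(&dn->dn_mtx);

  /* Waiter, in dnode_special_close(): */
  mutex_enter(&dn->dn_mtx);
  while (zfs_refcount_count(&dn->dn_holds) > 0)
      cv_wait(&dn->dn_nodnholds, &dn->dn_mtx);
  mutex_exit(&dn->dn_mtx);
  /* Only now is it safe to call dnode_destroy(dn). */
  ```
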
* Prevent deadlock in arc_read in Linux memory reclaim callback (Mark Roper, 2020-03-12, 1 file, -0/+12)

  Using zfs with Lustre, an arc_read can trigger kernel memory
  allocation that in turn leads to a memory reclaim callback and a
  deadlock within a single zfs process. This change uses
  spl_fstrans_mark and spl_fstrans_unmark to prevent the reclaim
  attempt and the deadlock
  (https://zfsonlinux.topicbox.com/groups/zfs-devel/T4db2c705ec1804ba).

  The stack trace observed is:

      __schedule at ffffffff81610f2e
      schedule at ffffffff81611558
      schedule_preempt_disabled at ffffffff8161184a
      __mutex_lock at ffffffff816131e8
      arc_buf_destroy at ffffffffa0bf37d7 [zfs]
      dbuf_destroy at ffffffffa0bfa6fe [zfs]
      dbuf_evict_one at ffffffffa0bfaa96 [zfs]
      dbuf_rele_and_unlock at ffffffffa0bfa561 [zfs]
      dbuf_rele_and_unlock at ffffffffa0bfa32b [zfs]
      osd_object_delete at ffffffffa0b64ecc [osd_zfs]
      lu_object_free at ffffffffa06d6a74 [obdclass]
      lu_site_purge_objects at ffffffffa06d7fc1 [obdclass]
      lu_cache_shrink_scan at ffffffffa06d81b8 [obdclass]
      shrink_slab at ffffffff811ca9d8
      shrink_node at ffffffff811cfd94
      do_try_to_free_pages at ffffffff811cfe63
      try_to_free_pages at ffffffff811d01c4
      __alloc_pages_slowpath at ffffffff811be7f2
      __alloc_pages_nodemask at ffffffff811bf3ed
      new_slab at ffffffff81226304
      ___slab_alloc at ffffffff812272ab
      __slab_alloc at ffffffff8122740c
      kmem_cache_alloc at ffffffff81227578
      spl_kmem_cache_alloc at ffffffffa048a1fd [spl]
      arc_buf_alloc_impl at ffffffffa0befba2 [zfs]
      arc_read at ffffffffa0bf0924 [zfs]
      dbuf_read at ffffffffa0bf9083 [zfs]
      dmu_buf_hold_by_dnode at ffffffffa0c04869 [zfs]

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Mark Roper <[email protected]>
  Closes #9987

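  A minimal usage sketch of the existing SPL guard applied here: while
  marked, kernel allocations use GFP flags that forbid re-entering
  filesystem reclaim, breaking the cycle shown in the stack trace
  above.

  ```
  fstrans_cookie_t cookie = spl_fstrans_mark();
  /* ... the allocating work, e.g. the arc_buf allocation in arc_read() ... */
  spl_fstrans_unmark(cookie);
  ```
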
* zloop.sh should call ZDB with pool name (Olaf Faaland, 2020-03-11, 1 file, -1/+1)

  Commit 54007c79 introduced an error, changing the final argument to
  $ZDB from ztest to $ZTEST. This argument indicates the pool name, not
  the script, and so should not have been changed.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Ryan Moeller <[email protected]>
  Signed-off-by: Olaf Faaland <[email protected]>
  Closes #10118

* ZTS: Add a failsafe callback to run after each test (Ryan Moeller, 2020-03-10, 7 files, -48/+145)

  Tests that get killed do not have an opportunity to clean up. There
  are many bad states this can leave the system in, but of particular
  gravity is when zinject has been used to induce bad behavior for one
  or more of the test disks.

  Create a failsafe mechanism in test-runner.py that runs a callback
  script after every test. The script is common to all tests so all
  tests benefit from the protection.

  Add an obligatory `zinject -c all` to clear all zinject state after
  every test case is run.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10096

* Improve zfs send performance by bypassing the ARC (Matthew Ahrens, 2020-03-10, 4 files, -151/+235)

  When doing a zfs send on a dataset with small recordsize (e.g. 8K),
  performance is dominated by the per-block overheads. This is
  especially true with `zfs send --compressed`, which further reduces
  the amount of data sent, for the same number of blocks. Several
  threads are involved, but the limiting factor is the `send_prefetch`
  thread, which is 100% on CPU.

  The main job of the `send_prefetch` thread is to issue zio's for the
  data that will be needed by the main thread. It does this by calling
  `arc_read(ARC_FLAG_PREFETCH)`. This has an immediate cost of creating
  an arc_hdr, which takes around 14% of one CPU. It also induces later
  costs by other threads:

  * Since the data was only prefetched, dmu_send()->dmu_dump_write()
    will need to call arc_read() again to get the data. This will have
    to look up the arc_hdr in the hash table and copy the data from the
    scatter ABD in the arc_hdr to a linear ABD in arc_buf. This takes
    27% of one CPU.
  * dmu_dump_write() needs to arc_buf_destroy(). This takes 11% of one
    CPU.
  * arc_adjust() will need to evict this arc_hdr, taking about 50% of
    one CPU.

  All of these costs can be avoided by bypassing the ARC if the data is
  not already cached. This commit changes `zfs send` to check for the
  data in the ARC, and if it is not found then we directly call
  `zio_read()`, reading the data into a linear ABD which is used by
  dmu_dump_write() directly.

  The performance improvement is best expressed in terms of how many
  blocks can be processed by `zfs send` in one second. This change
  increases the metric by 50%, from ~100,000 to ~150,000. When the
  amount of data per block is small (e.g. 2KB), there is a
  corresponding reduction in the elapsed time of `zfs send >/dev/null`
  (from 86 minutes to 58 minutes in this test case).

  In addition to improving the performance of `zfs send`, this change
  makes `zfs send` not pollute the ARC cache. In most cases the data
  will not be reused, so this allows us to keep caching useful data in
  the MRU (hit-once) part of the ARC.

  Reviewed-by: Paul Dagnelie <[email protected]>
  Reviewed-by: Serapheim Dimitropoulos <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #10067

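  A heavily hedged sketch of the bypass path: arc_is_cached() stands in
  for whatever cache check the patch actually uses, while
  abd_alloc_linear(), zio_read(), and zio_wait() are existing
  interfaces (argument order per the zio_read() prototype of this era,
  an assumption worth verifying against the source):

  ```
  if (!arc_is_cached(spa, bp)) {    /* illustrative helper */
      /*
       * Read straight into a linear ABD: no arc_hdr is created,
       * hashed, copied out of, or later evicted.
       */
      abd_t *abd = abd_alloc_linear(BP_GET_LSIZE(bp), B_FALSE);

      err = zio_wait(zio_read(NULL, spa, bp, abd, BP_GET_LSIZE(bp),
          NULL, NULL, ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL, zb));
      /* On success the linear abd feeds dmu_dump_write() directly. */
  } else {
      /* Cached data still goes through arc_read() as before. */
  }
  ```
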
* ZTS: Simplify some libtest functions (Ryan Moeller, 2020-03-10, 1 file, -30/+9)

  Don't echo the results of arithmetic expressions; it's not necessary.

  Use the hw.clockrate sysctl to get the CPU freq instead of parsing
  dmesg.boot for a line that might not even be there anymore.

  Reduce bookkeeping in fill_fs, making it easier to follow.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10113

* Fix zfs-functions packaging bug (Richard Laager, 2020-03-10, 12 files, -27/+48)

  This fixes a bug where the generated zfs-functions was being included
  along with the original zfs-functions.in in the make dist tarball.
  This caused an unfortunate series of events during build/packaging
  that resulted in the RPM-installed /etc/zfs/zfs-functions listing the
  paths as:

      ZFS="/usr/local/sbin/zfs"
      ZED="/usr/local/sbin/zed"
      ZPOOL="/usr/local/sbin/zpool"

  When they should have been:

      ZFS="/sbin/zfs"
      ZED="/sbin/zed"
      ZPOOL="/sbin/zpool"

  This affects init.d (non-systemd) distros like CentOS 6.

  /etc/default/zfs and /etc/zfs/zfs-functions are also used by the
  initramfs, so they need to be built even when init.d support is not.
  They have been moved to the (new) etc/default and (existing) etc/zfs
  source directories, respectively.

  Fixes: #9443
  Co-authored-by: Tony Hutter <[email protected]>
  Signed-off-by: Richard Laager <[email protected]>

* initramfs: Eliminate substitutions (Richard Laager, 2020-03-10, 2 files, -26/+2)

  These are now handled in zfs-functions, so this is all duplicative
  and unnecessary.

  Signed-off-by: Richard Laager <[email protected]>

* Delete built init scripts in make clean (Richard Laager, 2020-03-10, 1 file, -3/+1)

  Previously, they were being deleted in make distclean. This brings it
  in line with the example:
  https://www.gnu.org/software/automake/manual/html_node/Scripts.html

  Signed-off-by: Richard Laager <[email protected]>

* Make init scripts depend on Makefile (Richard Laager, 2020-03-10, 1 file, -1/+1)

  This brings it in line with the example:
  https://www.gnu.org/software/automake/manual/html_node/Scripts.html

  This way, if the substitution code is changed, they should update.

  Signed-off-by: Richard Laager <[email protected]>

* Systemd mount generator: don't fail keyload from file if already loaded (InsanePrawn, 2020-03-09, 1 file, -7/+11)

  Previously the generated keyload units for encryption roots with
  keylocation=file://* didn't contain the code to detect if the key was
  already loaded, and would be marked failed in such situations.

  Move the code to check whether the key is already loaded from
  keylocation=prompt handling to the general key loading code.

  Reviewed-by: Richard Laager <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: InsanePrawn <[email protected]>
  Closes #10103

* ZTS: Another round of changes for FreeBSD (Ryan Moeller, 2020-03-06, 11 files, -59/+82)

  Highlights:

  * is_linux -> is_illumos swaps
  * make block_device_wait more clever when paths are given
  * slightly optimize default_cleanup_noexit
  * remove platform differences in user_run
  * temporarily expect non-libfetch behavior for keylocation=/foo/bar
  * fix sharenfs exceptions
  * don't test multihost property
  * fix misc broken platform checks
  * clear zinjected faults in removal_resume_export callback

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10092

* Change default to overlay=on (Ryan Moeller, 2020-03-06, 9 files, -64/+53)

  Filesystems allow overlay mounts by default on FreeBSD and Linux.
  Respect the native convention by switching the default to overlay=on,
  while retaining the option to turn the property off for compatibility
  with other operating systems' conventions.

  Update documentation and tests accordingly.

  Reviewed-by: Richard Laager <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10030

* ZTS: Update zts-report exceptions for FreeBSD (Ryan Moeller, 2020-03-06, 1 file, -2/+3)

  The new zfs_sync_trim_* tests are skipped on FreeBSD. Both of the
  previously failing tests are now passing.

  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10105

* ZTS: Speed up write_dirs cleanup (Brian Behlendorf, 2020-03-04, 2 files, -10/+4)

  The write_dirs tests fill a filesystem with a bunch of files until it
  is full. In cleanup the files are truncated and removed individually.
  These tests already take a while to run. It is quicker and easier to
  destroy the whole dataset and create a new one to replace it in the
  cleanup functions.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10098

* ZTS: Add missing quotes (Brian Behlendorf, 2020-03-04, 2 files, -2/+2)

  `default_setup` takes a disk list as the first argument and has
  optional additional arguments that control secondary functionality. A
  couple of test setups mistakenly call `default_setup $DISKS`. Add
  quotes so the second and subsequent disks are correctly included in
  the pool as vdevs rather than triggering unwanted behavior from
  `default_setup`.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10097

* ZTS: Add zts-report exceptions for FreeBSD (Brian Behlendorf, 2020-03-04, 2 files, -5/+16)

  There are three tests we expect to fail only on FreeBSD.

  * link_count never exits and eventually times out:
    - @amotin tells me this test is probably not applicable to us
    - Skip on FreeBSD
  * userobj feature does not activate immediately after pool upgrade
    - low impact; we are aware of this issue
  * removal does not appear to condense on export
    - low impact; we are aware of this issue

  Additionally removal_with_zdb passes on FreeBSD, so it is moved to
  "maybe".

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10093

* zio: dprintf_bp() if errors > 0 in zfs_blkptr_verify() (Brian Behlendorf, 2020-03-04, 1 file, -0/+3)

  Also call dprintf_bp() in the BLK_VERIFY_HALT case of
  zfs_blkptr_verify_log(), since the dprintf_bp() in
  zfs_blkptr_verify() will never be executed in that case.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Paul Zuchowski <[email protected]>
  Signed-off-by: Justin Keogh <[email protected]>
  Closes #10086

* ZTS: Test the correct filesystem_limits behavior (Brian Behlendorf, 2020-03-04, 5 files, -17/+126)

  See issue #8226: Property filesystem_limit does not work as
  documented. There have been previous attempts to fix the behavior on
  Linux, but so far the issue is still open. See PRs #8228, #8280.

  The existing tests pass for the incorrect behavior. This is a problem
  on FreeBSD; we are failing the tests because we implement the feature
  correctly.

  I have adapted the tests based on the work by @loli10k in #8280 and
  extended the changes to fix the snapshot_limit test as well.

  Linux now fails these tests, so entries linking to the issue have
  been added to the "maybe" group in zts-report.py.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10082

* Add trim support to zpool wait (Brian Behlendorf, 2020-03-04, 17 files, -73/+347)

  Manual trims fall into the category of long-running pool activities
  which people might want to wait synchronously for. This change adds
  support to 'zpool wait' for waiting for manual trim operations to
  complete. It also adds a '-w' flag to 'zpool trim' which can be used
  to turn 'zpool trim' into a synchronous operation.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Serapheim Dimitropoulos <[email protected]>
  Signed-off-by: John Gallagher <[email protected]>
  Closes #10071

* Improve performance of zio_taskq_member (Matthew Ahrens, 2020-03-03, 5 files, -2/+24)

  __zio_execute() calls zio_taskq_member() to determine if we are
  running in a zio interrupt taskq, in which case we may need to switch
  to processing this zio in a zio issue taskq. The call to
  zio_taskq_member() can become a performance bottleneck when we are
  processing a high rate of zio's.

  zio_taskq_member() calls taskq_member() on each of the zio interrupt
  taskqs, of which there are 21. This is slow because each call to
  taskq_member() does tsd_get(taskq_tsd), which on Linux is relatively
  slow.

  This commit improves the performance of zio_taskq_member() by having
  it cache the value of tsd_get(taskq_tsd), reducing the number of
  those calls to 1/21st of the current behavior.

  In a test case running `zfs send -c >/dev/null` of a filesystem with
  small blocks (average 2.5KB/block), zio_taskq_member() was using 6.7%
  of one CPU, and with this change it is reduced to 1.3%. Overall time
  to perform the `zfs send` reduced by 10% (~150,000 blocks/sec to
  ~165,000 blocks/sec).

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Serapheim Dimitropoulos <[email protected]>
  Reviewed-by: Ryan Moeller <[email protected]>
  Reviewed-by: Tony Nguyen <[email protected]>
  Signed-off-by: Matthew Ahrens <[email protected]>
  Closes #10070

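  A sketch of the caching approach: one tsd_get() lookup instead of 21
  calls to taskq_member(). spa_taskqs_t and the stqs_* fields are the
  existing spa structures; the surrounding function is a simplified
  illustration, not the patch itself.

  ```
  static boolean_t
  zio_taskq_member_sketch(spa_t *spa, zio_taskq_type_t q)
  {
      taskq_t *tq = tsd_get(taskq_tsd);   /* the one slow lookup */

      if (tq == NULL)
          return (B_FALSE);

      /* Compare the cached taskq pointer against each candidate. */
      for (zio_type_t t = 0; t < ZIO_TYPES; t++) {
          spa_taskqs_t *tqs = &spa->spa_zio_taskq[t][q];
          for (uint_t i = 0; i < tqs->stqs_count; i++) {
              if (tqs->stqs_taskq[i] == tq)
                  return (B_TRUE);
          }
      }
      return (B_FALSE);
  }
  ```
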
* ZTS: Provide for nested cleanup routines (Ryan Moeller, 2020-03-03, 2 files, -5/+25)

  Shared test library functions lack a simple way to ensure proper
  cleanup in the event of a failure. The `log_onexit` cleanup pattern
  cannot be used in library functions because it uses one global
  variable to store the cleanup command.

  An example of where this is a serious issue is when a tunable that
  artificially stalls kernel progress gets activated and then some
  check fails. Unless the caller knows about the tunable and sets it
  back, the system will be left in a bad state.

  To solve this problem, turn the global cleanup variable into a stack.
  Provide push and pop functions to add additional cleanup steps and
  remove them after it is safe again.

  The first use of this new functionality is in attempt_during_removal,
  which sets REMOVAL_SUSPEND_PROGRESS.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: John Kennedy <[email protected]>
  Signed-off-by: Ryan Moeller <[email protected]>
  Closes #10080