* Add `zstream redup` command to convert deduplicated send streams
Matthew Ahrens, 2020-04-10 (16 files changed, -36/+728)

Deduplicated send and receive is deprecated. To ease migration to the new
dedup-send-less world, this commit adds a `zstream redup` utility to convert
deduplicated send streams to normal streams, so that they can continue to be
received indefinitely.

The new `zstream` command also replaces the functionality of `zstreamdump`,
by way of the `zstream dump` subcommand. The `zstreamdump` command is
replaced by a shell script which invokes `zstream dump`.

The way that `zstream redup` works under the hood is that as we read the
send stream, we build up a hash table which maps from
`<GUID, object, offset> -> <file_offset>`. Whenever we see a WRITE record,
we add a new entry to the hash table, which indicates where in the stream
file to find the WRITE record for this block. (The key is
`drr_toguid, drr_object, drr_offset`.)

For entries other than WRITE_BYREF, we pass them through unchanged (except
for the running checksum, which is recalculated).

For WRITE_BYREF records, we change them to WRITE records. We find the
referenced WRITE record by looking in the hash table (for the record with
key `drr_refguid, drr_refobject, drr_refoffset`), and then reading the
record header and payload from the specified offset in the stream file.
This is why the stream can not be a pipe. The found WRITE record replaces
the WRITE_BYREF record, with its `drr_toguid`, `drr_object`, and
`drr_offset` fields changed to be the same as the WRITE_BYREF's (i.e. we
are writing the same logical block, but with the data supplied by the
previous WRITE record).

This algorithm requires memory proportional to the number of WRITE records
(same as `zfs send -D`), but the size per WRITE record is relatively low
(40 bytes, vs. 72 for `zfs send -D`). A 1TB send stream with 8KB blocks
(`recordsize=8k`) would use around 5GB of RAM to "redup".

Reviewed-by: Jorgen Lundman <[email protected]>
Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #10124
Closes #10156
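The bookkeeping described above is compact enough to sketch. Below is a
minimal userland C model of the hash table -- the names (`redup_entry_t`,
`redup_insert()`, `redup_lookup()`) are hypothetical illustrations of the
technique, not the actual zstream code:

```c
#include <stdint.h>
#include <stdlib.h>

#define	REDUP_BUCKETS	4096

typedef struct redup_entry {
	struct redup_entry *re_next;
	uint64_t re_guid;	/* drr_toguid of the WRITE record */
	uint64_t re_object;	/* drr_object */
	uint64_t re_offset;	/* drr_offset */
	uint64_t re_stream_off;	/* where the record lives in the file */
} redup_entry_t;

static redup_entry_t *table[REDUP_BUCKETS];

static unsigned
bucket(uint64_t guid, uint64_t object, uint64_t offset)
{
	return ((guid ^ object ^ (offset >> 9)) % REDUP_BUCKETS);
}

/* Remember where a WRITE record was seen in the stream file. */
static void
redup_insert(uint64_t guid, uint64_t object, uint64_t offset,
    uint64_t stream_off)
{
	redup_entry_t *re = malloc(sizeof (*re));

	re->re_guid = guid;
	re->re_object = object;
	re->re_offset = offset;
	re->re_stream_off = stream_off;
	re->re_next = table[bucket(guid, object, offset)];
	table[bucket(guid, object, offset)] = re;
}

/*
 * For a WRITE_BYREF record, find the stream offset of the WRITE
 * record it references; the caller then seeks there and re-reads the
 * header and payload (which is why the input cannot be a pipe).
 */
static int
redup_lookup(uint64_t refguid, uint64_t refobject, uint64_t refoffset,
    uint64_t *stream_off)
{
	for (redup_entry_t *re = table[bucket(refguid, refobject,
	    refoffset)]; re != NULL; re = re->re_next) {
		if (re->re_guid == refguid && re->re_object == refobject &&
		    re->re_offset == refoffset) {
			*stream_off = re->re_stream_off;
			return (0);
		}
	}
	return (-1);	/* reference to a WRITE record never seen */
}
```

Note that one entry is five 8-byte fields, which lines up with the
40-bytes-per-WRITE-record estimate in the commit message.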
* Persistent L2ARC
George Amanakis, 2020-04-10 (30 files changed, -88/+3020)

This commit makes the L2ARC persistent across reboots. We implement a
light-weight persistent L2ARC metadata structure that allows L2ARC contents
to be recovered after a reboot. This significantly eases the impact a
reboot has on read performance on systems with large caches.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: George Wilson <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Co-authored-by: Saso Kiselkov <[email protected]>
Co-authored-by: Jorgen Lundman <[email protected]>
Co-authored-by: George Amanakis <[email protected]>
Ported-by: Yuxuan Shui <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #925
Closes #1823
Closes #2672
Closes #3744
Closes #9582
* Don't ignore zfs_arc_max below allmem/32
Ryan Moeller, 2020-04-09 (3 files changed, -15/+31)

Set arc_c_min before arc_c_max so that when zfs_arc_min is set lower than
the default allmem/32, zfs_arc_max can also be set lower. Add warning
messages when tunables are being ignored.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10157
Closes #10158
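A minimal sketch of why the ordering matters, with illustrative defaults
and a made-up helper name -- not the actual OpenZFS tunable-handling code:

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t arc_c_min, arc_c_max;

static void
arc_apply_tunables(uint64_t allmem, uint64_t zfs_arc_min,
    uint64_t zfs_arc_max)
{
	/* Apply the min first... */
	arc_c_min = (zfs_arc_min != 0) ? zfs_arc_min : allmem / 32;

	/* ...so a max below allmem/32 can still pass this check. */
	if (zfs_arc_max != 0 && zfs_arc_max > arc_c_min) {
		arc_c_max = zfs_arc_max;
	} else {
		arc_c_max = allmem / 2;	/* illustrative default */
		if (zfs_arc_max != 0)
			fprintf(stderr,
			    "ignoring zfs_arc_max=%llu: not above arc_c_min\n",
			    (unsigned long long)zfs_arc_max);
	}
}
```

If the max were applied first, it would be validated against the default
allmem/32 floor and silently ignored, which is exactly the bug described.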
* Add separate field for indicating that spa is in middle of split
Matthew Macy, 2020-04-09 (2 files changed, -0/+3)

By default it's not possible to open a device already owned by an active
vdev. It's necessary to make an exception to this for vdev split. The
FreeBSD platform code will make an exception if spa_is_splitting is set to
true.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Matt Macy <[email protected]>
Closes #10178
* Linux 5.7 compat: blk_alloc_queue()
Brian Behlendorf, 2020-04-09 (4 files changed, -44/+87)

Commit https://github.com/torvalds/linux/commit/3d745ea5 simplified the
blk_alloc_queue() interface by updating it to take the request queue as an
argument. Add a wrapper function which accepts the new arguments and
internally uses the available interfaces.

Other minor changes include increasing the Linux-Maximum to 5.6 now that
5.6 has been released. It was not bumped to 5.7 because this release has
not yet been finalized and is still subject to change.

Added local 'struct zvol_state_os *zso' variable to zvol_alloc.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #10181
Closes #10187
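The wrapper pattern the commit describes can be sketched as a compat shim.
This is hedged: `HAVE_BLK_ALLOC_QUEUE_REQUEST_FN` stands in for a
configure-detected symbol and `zvol_alloc_queue()` is an illustrative name,
not necessarily the commit's actual identifiers:

```c
#include <linux/blkdev.h>

/* One call site, two kernel interfaces, selected at build time. */
static inline struct request_queue *
zvol_alloc_queue(make_request_fn *fn, int node_id)
{
#ifdef HAVE_BLK_ALLOC_QUEUE_REQUEST_FN
	/* Linux 5.7+: the make_request function is passed directly. */
	return (blk_alloc_queue(fn, node_id));
#else
	/* Older kernels: allocate first, then attach the function. */
	struct request_queue *q = blk_alloc_queue(GFP_KERNEL);
	if (q != NULL)
		blk_queue_make_request(q, fn);
	return (q);
#endif
}
```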
* Use vn_io_fault_uiomove on FreeBSD to avoid potential deadlock
Matthew Macy, 2020-04-08 (1 file changed, -1/+10)

Added to prevent a possible deadlock; the following comments from FreeBSD
explain the issue.

The comment describing vn_io_fault_uiomove:

/*
 * Helper function to perform the requested uiomove operation using
 * the held pages for io->uio_iov[0].iov_base buffer instead of
 * copyin/copyout.  Access to the pages with uiomove_fromphys()
 * instead of iov_base prevents page faults that could occur due to
 * pmap_collect() invalidating the mapping created by
 * vm_fault_quick_hold_pages(), or pageout daemon, page laundry or
 * object cleanup revoking the write access from page mappings.
 *
 * Filesystems specified MNTK_NO_IOPF shall use vn_io_fault_uiomove()
 * instead of plain uiomove().
 */

This is used for vn_io_fault, which has the following motivation:

/*
 * The vn_io_fault() is a wrapper around vn_read() and vn_write() to
 * prevent the following deadlock:
 *
 * Assume that the thread A reads from the vnode vp1 into userspace
 * buffer buf1 backed by the pages of vnode vp2.  If a page in buf1 is
 * currently not resident, then system ends up with the call chain
 *   vn_read() -> VOP_READ(vp1) -> uiomove() -> [Page Fault] ->
 *     vm_fault(buf1) -> vnode_pager_getpages(vp2) -> VOP_GETPAGES(vp2)
 * which establishes lock order vp1->vn_lock, then vp2->vn_lock.
 * If, at the same time, thread B reads from vnode vp2 into buffer buf2
 * backed by the pages of vnode vp1, and some page in buf2 is not
 * resident, we get a reversed order vp2->vn_lock, then vp1->vn_lock.
 *
 * To prevent the lock order reversal and deadlock, vn_io_fault() does
 * not allow page faults to happen during VOP_READ() or VOP_WRITE().
 * Instead, it first tries to do the whole range i/o with pagefaults
 * disabled. If all pages in the i/o buffer are resident and mapped,
 * VOP will succeed (ignoring the genuine filesystem errors).
 * Otherwise, we get back EFAULT, and vn_io_fault() falls back to do
 * i/o in chunks, with all pages in the chunk prefaulted and held
 * using vm_fault_quick_hold_pages().
 *
 * Filesystems using this deadlock avoidance scheme should use the
 * array of the held pages from uio, saved in the curthread->td_ma,
 * instead of doing uiomove().  A helper function
 * vn_io_fault_uiomove() converts uiomove request into
 * uiomove_fromphys() over td_ma array.
 *
 * Since vnode locks do not cover the whole i/o anymore, rangelocks
 * make the current i/o request atomic with respect to other i/os and
 * truncations.
 */

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matt Macy <[email protected]>
Closes #10177
* Finish refactoring for ZFS_MODULE_PARAM_CALL
Ryan Moeller, 2020-04-07 (5 files changed, -8/+11)

Linux and FreeBSD have different parameters for tunable proc handlers.
This has prevented FreeBSD from implementing the ZFS_MODULE_PARAM_CALL
macro.

To complete the sharing of ZFS_MODULE_PARAM_CALL declarations, create
per-platform definitions of the parameter list, ZFS_MODULE_PARAM_ARGS.

With the declarations wired up we discovered an incorrect scope prefix for
spa_slop_shift, so this is now fixed.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10179
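With a per-platform argument list, a shared handler declaration can expand
to a proc handler on Linux and a sysctl handler on FreeBSD. A hedged sketch
of the idea -- the Linux list follows module_param_call conventions and the
FreeBSD list follows SYSCTL_HANDLER_ARGS, but the exact definitions may
differ from the commit:

```c
/* Per-platform parameter list for shared tunable handlers. */
#if defined(__linux__)
#define	ZFS_MODULE_PARAM_ARGS	const char *buf, zfs_kernel_param_t *kp
#else	/* FreeBSD sysctl handler convention */
#define	ZFS_MODULE_PARAM_ARGS	struct sysctl_oid *oidp, void *arg1, \
				intmax_t arg2, struct sysctl_req *req
#endif

/* A handler can now be declared once and built on both platforms. */
int param_set_slop_shift(ZFS_MODULE_PARAM_ARGS);
```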
* libzfs_pool: Remove unused check for ENOTBLK
alex, 2020-04-07 (1 file changed, -11/+0)

Commit 379ca9c removed the check on aux devices to be block devices, also
changing zfs_ioctl(hdl, ZFS_IOC_VDEV_ADD, ...) and
zfs_ioctl(hdl, ZFS_IOC_POOL_CREATE, ...) to never set ENOTBLK. This change
removes the dangling check for ENOTBLK that will never trigger.

Reviewed-by: Brian Behlendorf <[email protected]>
Reported-by: Richard Elling <[email protected]>
Signed-off-by: Alex John <[email protected]>
Closes #10173
* ZTS: Fix non-portable date format
Ryan Moeller, 2020-04-06 (2 files changed, -18/+17)

The delegate tests use `date(1)` to generate snapshot names, using the
format '%F-%T-%N' to get nanosecond resolution (since multiple snapshots
may be taken in the same second). '%N' is not portable, and causes tests
to fail on FreeBSD.

Since the only purpose these timestamps serve is to create a unique name,
simply use $RANDOM instead.

Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10170
* Add 'zfs wait' command
Paul Dagnelie, 2020-04-01 (25 files changed, -11/+679)

Add a mechanism to wait for the delete queue to drain.

When doing redacted send/recv, many workflows involve deleting files that
contain sensitive data. Because of the way zfs handles file deletions,
snapshots taken quickly after a rm operation can sometimes still contain
the file in question, especially if the file is very large. This can
result in issues for redacted send/recv users who expect the deleted files
to be redacted in the send streams, and not appear in their clones.

This change duplicates much of the zpool wait related logic into a zfs
wait command, which can be used to wait until the internal deleteq has
been drained. Additional wait activities may be added in the future.

Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Gallagher <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes #9707
* Bugfix/fix uio partial copies
Fabio Scaccabarozzi, 2020-04-01 (2 files changed, -8/+26)

In zfs_write(), the loop continues to the next iteration without
accounting for partial copies occurring in uiomove_iov when
copy_from_user/__copy_from_user_inatomic return a non-zero status. This
results in "zfs: accessing past end of object..." in the kernel log, and
the write failing.

Account for partial copies and update the uio struct before returning
EFAULT, and leave a comment explaining why this is done.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: ilbsmart <[email protected]>
Signed-off-by: Fabio Scaccabarozzi <[email protected]>
Closes #8673
Closes #10148
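The fix can be modeled in userland. A sketch with a simplified,
hypothetical `uio_model_t` (the real uio carries an iovec array);
`copy_in()` stands in for `__copy_from_user_inatomic()`, which returns the
number of bytes it could NOT copy:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct uio_model {
	char	*uio_base;	/* simplified: a single buffer */
	size_t	 uio_resid;	/* bytes remaining in the request */
	uint64_t uio_loffset;	/* current file offset */
} uio_model_t;

/* Stand-in for the kernel copy; userland memcpy cannot fault. */
static size_t
copy_in(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	return (0);	/* kernel copies may return a nonzero remainder */
}

static int
uiomove_model(void *dst, size_t len, uio_model_t *uio)
{
	size_t not_copied = copy_in(dst, uio->uio_base, len);
	size_t copied = len - not_copied;

	/*
	 * The fix: account for the partial copy *before* failing, so
	 * the caller sees how far the copy really got instead of
	 * retrying past the end of the object.
	 */
	uio->uio_base += copied;
	uio->uio_resid -= copied;
	uio->uio_loffset += copied;

	return (not_copied != 0 ? EFAULT : 0);
}
```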
* Improve ZVOL sync write performance by using a taskq
Matthew Ahrens, 2020-03-31 (1 file changed, -44/+78)

== Summary ==

Prior to this change, sync writes to a zvol are processed serially. This
commit makes zvols process concurrently outstanding sync writes in
parallel, similar to how reads and async writes are already handled. The
result is that the throughput of sync writes is tripled.

== Background ==

When a write comes in for a zvol (e.g. over iscsi), it is processed by
calling `zvol_request()` to initiate the operation. ZFS is expected to
later call `BIO_END_IO()` when the operation completes (possibly from a
different thread). There are a limited number of threads that are
available to call `zvol_request()` - one per iscsi client (unless using
MC/S). Therefore, to ensure good performance, the latency of
`zvol_request()` is important, so that many i/o operations to the zvol can
be processed concurrently. In other words, if the client has multiple
outstanding requests to the zvol, the zvol should have multiple
outstanding requests to the storage hardware (i.e. issue multiple
concurrent `zio_t`'s).

For reads, and async writes (i.e. writes which can be acknowledged before
the data reaches stable storage), `zvol_request()` achieves low latency by
dispatching the bulk of the work (including waiting for i/o to disk) to a
taskq. The taskq callback (`zvol_read()` or `zvol_write()`) blocks while
waiting for the i/o to disk to complete. The `zvol_taskq` has 32 threads
(by default), so we can have up to 32 concurrent i/os to disk in service
of requests to zvols.

However, for sync writes (i.e. writes which must be persisted to stable
storage before they can be acknowledged, by calling `zil_commit()`),
`zvol_request()` does not use `zvol_taskq`. Instead it blocks while
waiting for the ZIL write to disk to complete. This has the effect of
serializing sync writes to each zvol. In other words, each zvol will only
process one sync write at a time, waiting for it to be written to the ZIL
before accepting the next request.

The same issue applies to FLUSH operations, for which `zvol_request()`
calls `zil_commit()` directly.

== Description of change ==

This commit changes `zvol_request()` to use
`taskq_dispatch_ent(zvol_taskq)` for sync writes and FLUSH operations.
Therefore we can have up to 32 threads (the taskq threads) simultaneously
calling `zil_commit()`, for a theoretical performance improvement of up
to 32x.

To avoid the locking issue described in the comment (which this commit
removes), we acquire the rangelock from the taskq callback (e.g.
`zvol_write()`) rather than from `zvol_request()`. This applies to all
writes (sync and async), reads, and discard operations. This means that
multiple simultaneously-outstanding i/o's which access the same block can
complete in any order. This was previously thought to be incorrect, but a
review of the block device interface requirements revealed that this is
fine - the order is inherently not defined. The shorter hold time of the
rangelock should also have a slight performance improvement.

For an additional slight performance improvement, we use
`taskq_dispatch_ent()` instead of `taskq_dispatch()`, which avoids a
`kmem_alloc()` and eliminates a failure mode. This applies to all writes
(sync and async), reads, and discard operations.

== Performance results ==

We used a zvol as an iscsi target (server) for a Windows initiator
(client), with a single connection (the default - i.e. not MC/S).

We used `diskspd` to generate a workload with 4 threads, doing 1MB writes
to random offsets in the zvol. Without this change we get 231MB/s, and
with the change we get 728MB/s, which is 3.15x the original performance.

We ran a real-world workload, restoring a MSSQL database, and saw
throughput 2.5x the original.

We saw more modest performance wins (typically 1.5x-2x) when using MC/S
with 4 connections, and with different number of client threads (1, 8,
32).

Reviewed-by: Tony Nguyen <[email protected]>
Reviewed-by: Pavel Zakharov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #10163
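A hedged sketch of the dispatch pattern for the sync-write path -- the
`zv_request_t` layout and function bodies are simplified from the
description above, while `taskq_init_ent()`/`taskq_dispatch_ent()` are the
SPL primitives the commit names:

```c
typedef struct zv_request {
	zvol_state_t	*zv;	/* the target zvol */
	struct bio	*bio;	/* the incoming block i/o */
	taskq_ent_t	ent;	/* embedded entry: no allocation to fail */
} zv_request_t;

/* Runs in a zvol_taskq thread; zil_commit() may block safely here. */
static void
zvol_write_task(void *arg)
{
	zv_request_t *zvr = arg;

	/* The rangelock is now taken here, not in zvol_request(). */
	zvol_write(zvr);
}

static blk_qc_t
zvol_request(struct request_queue *q, struct bio *bio)
{
	zv_request_t *zvr = kmem_alloc(sizeof (*zvr), KM_SLEEP);

	zvr->bio = bio;
	taskq_init_ent(&zvr->ent);
	/*
	 * Sync writes and FLUSH now go through the taskq too, so up
	 * to 32 threads (by default) can sit in zil_commit() at once
	 * instead of serializing on the submitting thread.
	 */
	taskq_dispatch_ent(zvol_taskq, zvol_write_task, zvr, 0, &zvr->ent);
	return (BLK_QC_T_NONE);
}
```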
* Reset l2ad_hand and l2ad_first in l2arc_evict
George Amanakis, 2020-03-31 (6 files changed, -27/+181)

Increasing l2arc_write_size or l2arc_write_boost can result in
l2arc_write_buffers() not having enough space to perform its writes and a
panic in zio_write_phys().

Instead of resetting l2ad_hand to l2ad_start at the end of
l2arc_write_buffers() and not taking into account a possible
user-mediated increase of l2arc_write_max, we do this in l2arc_evict(),
right after l2arc_write_size() has run. If there is not enough space to
evict (i.e. we will exceed l2ad_end) we evict to the end of the device,
reset l2ad_hand to l2ad_start, set l2ad_first to 0 and iterate
l2arc_evict(). We avoid infinite iteration of l2arc_evict() by making
sure in l2arc_write_size() that l2ad_start + size does not exceed
l2ad_end.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #10154
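A minimal model of the new wrap-around handling -- the `l2ad_*` fields
mirror the real `l2arc_dev_t`, while `evict_range()` is a hypothetical
stand-in for the header-eviction loop:

```c
#include <stdint.h>

typedef struct l2arc_dev_model {
	uint64_t l2ad_start;	/* first usable offset on the device */
	uint64_t l2ad_end;	/* last usable offset on the device */
	uint64_t l2ad_hand;	/* next write position */
	int	 l2ad_first;	/* still in the first pass over the device */
} l2arc_dev_model_t;

void evict_range(l2arc_dev_model_t *dev, uint64_t from, uint64_t to);

void
l2arc_evict_model(l2arc_dev_model_t *dev, uint64_t size)
{
	if (dev->l2ad_hand + size > dev->l2ad_end) {
		/*
		 * Not enough room before the end of the device:
		 * evict to the end, wrap the hand, and retry.
		 */
		evict_range(dev, dev->l2ad_hand, dev->l2ad_end);
		dev->l2ad_hand = dev->l2ad_start;
		dev->l2ad_first = 0;
		/*
		 * A single retry suffices because l2arc_write_size()
		 * guarantees l2ad_start + size <= l2ad_end.
		 */
		l2arc_evict_model(dev, size);
		return;
	}
	evict_range(dev, dev->l2ad_hand, dev->l2ad_hand + size);
}
```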
* ZTS: Skip udev actions in zvol_misc when not Linux
Ryan Moeller, 2020-03-31 (1 file changed, -3/+3)

udev is only used on Linux. Skip udev_wait and udev_cleanup when not on
Linux.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10165
* Let default arc_c_max be platform dependent
Ryan Moeller, 2020-03-27 (4 files changed, -8/+22)

Linux changed the default max ARC size to 1/2 of physical memory to deal
with shortcomings of the Linux SLUB allocator. Other platforms do not
require the same logic.

Implement an arc_default_max() function to determine a default max ARC
size in platform code.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10155
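A sketch of a platform-dependent default -- the Linux half-of-RAM rule is
stated above, but the non-Linux branch here is a placeholder, not the
exact formula other platforms use:

```c
#include <stdint.h>

#define	MAX(a, b)	((a) > (b) ? (a) : (b))

uint64_t
arc_default_max(uint64_t min, uint64_t allmem)
{
#if defined(__linux__)
	/* Cap at half of physical memory to appease the SLUB allocator. */
	return (MAX(allmem / 2, min));
#else
	/* Other platforms can default larger (illustrative: all but 1GB). */
	return (MAX(allmem - (1ULL << 30), min));
#endif
}
```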
* Compile cityhash code into libzfs
Matthew Ahrens, 2020-03-27 (10 files changed, -6/+11)

Make the cityhash code compile into libzfs, in preparation for the new
"zstream" command.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #10152
* ZTS: Wait for free space between quota tests
Ryan Moeller, 2020-03-26 (5 files changed, -7/+15)

And in removal tests, sync the specific pool we are waiting on.

Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10146
* Remove checks for null out value in encryption paths
Dirkjan Bussink, 2020-03-26 (11 files changed, -219/+106)

These paths are never exercised, as the parameters given are always
different cipher and plaintext `crypto_data_t` pointers.

Reviewed-by: Richard Laager <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Attila Fueloep <[email protected]>
Signed-off-by: Dirkjan Bussink <[email protected]>
Closes #9661
Closes #10015
* zfs_get: change time format string from %k to %H
alex, 2020-03-26 (1 file changed, -1/+1)

This change fixes the bug reported in issue #10090 where snapshots created
between midnight and 1 AM were missing a padded zero in the creation
timestamp output.

The leading zero was missing because the time format string used `%k`,
which formats the hour as a decimal number from 0 to 23 where single
digits are preceded by blanks[0]. This is fixed by changing it to `%H`,
which formats the hour as 00-23.

The difference in output is as below:
```
-Thu Mar 26  0:39 2020
+Thu Mar 26 00:39 2020
```

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Igor Kozhukhov <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Alex John <[email protected]>
Closes #10090
Closes #10153
* Deprecate deduplicated send streams
Matthew Ahrens, 2020-03-18 (5 files changed, -0/+46)

Dedup send can only deduplicate over the set of blocks in the send command
being invoked, and it does not take advantage of the dedup table to do so.
This is a very common misconception among not only users, but developers,
and makes the feature seem more useful than it is. As a result, many users
are using the feature but not getting any benefit from it.

Dedup send requires a nontrivial expenditure of memory and CPU to operate,
especially if the dataset(s) being sent is (are) not already using a
dedup-strength checksum.

Dedup send adds developer burden. It expands the test matrix when
developing new features, causing bugs in released code, and delaying
development efforts by forcing more testing to be done.

As a result, we are deprecating the use of `zfs send -D` and receiving of
such streams. This change adds a warning to the man page, and also prints
the warning whenever dedup send or receive are used.

In a future release, we plan to:
1. remove the kernel code for generating deduplicated streams
2. make `zfs send -D` generate regular, non-deduplicated streams
3. remove the kernel code for receiving deduplicated streams
4. make `zfs receive` of deduplicated streams process them in userland to
   "re-duplicate" them, so that they can still be received.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #7887
Closes #10117
* Avoid core dump on invalid redaction bookmark
Ryan Moeller, 2020-03-18 (4 files changed, -25/+51)

libzfs aborts and dumps core on EINVAL from the kernel when trying to do a
redacted send with a bookmark that is not a redaction bookmark.

Move redacted bookmark validation into libzfs. Check if the bookmark given
for redactions is actually a redaction bookmark. Print an error message
and exit gracefully if it is not. Don't abort on EINVAL in zfs_send_one.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10138
* Changed decimals to integers in the arcstat script
Avatat, 2020-03-18 (1 file changed, -13/+4)

Changed the interval value type from decimal to integer, because of a
deprecation warning in Python 3.8 and above. Also changed kstat value
types from decimal to integer, because all the values are integers.
Fixed behavior of arcstat when run without args.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Bartosz Zieba <[email protected]>
Closes #10132
Closes #10142
* Fix zfs_rmnode() unlink / rollback issue
Brian Behlendorf, 2020-03-18 (1 file changed, -3/+9)

If a rollback has occurred while a file is open and unlinked, then when
the file is closed post rollback it will not exist in the rolled back
version of the unlinked object. Therefore, the call to zap_remove_int()
may correctly return ENOENT and should be allowed.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6812
Closes #9739
* Fix cstyle warnings
Brian Behlendorf, 2020-03-17 (2 files changed, -2/+3)

Fix minor cstyle warnings accidentally introduced by 7145123b.

Reviewed-by: Paul Dagnelie <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #10143
* Separate warning for incomplete and corrupt streams
Paul Dagnelie, 2020-03-17 (8 files changed, -11/+24)

This change adds a separate return code to zfs_ioc_recv that is used for
incomplete streams, in addition to the existing return code for streams
that contain corruption.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes #10122
* ICP: gcm-avx: Support architectures lacking the MOVBE instruction
Attila Fülöp, 2020-03-17 (3 files changed, -18/+389)

There are a couple of x86_64 architectures which support all the features
needed by the accelerated GCM implementation except the MOVBE instruction.
Those are mainly Intel Sandy- and Ivy-Bridge and AMD Bulldozer,
Piledriver, and Steamroller.

By using MOVBE only if available and replacing it with a MOV followed by
a BSWAP if not, those architectures now benefit from the new GCM routines
and performance is considerably better compared to the original
implementation.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Adam D. Moss <[email protected]>
Signed-off-by: Attila Fülöp <[email protected]>
Followup #9749
Closes #10029
* Add option for forcibly unmounting dataset while receiving snapshot
Mariusz Zaborski, 2020-03-17 (7 files changed, -14/+113)

Currently when the dataset is in use we can't receive snapshots:

    zfs send test/1@asd | zfs recv -FM test/2
    cannot unmount '/test/2': Device busy

This commit adds the option '-M', which attempts to forcibly unmount the
dataset. Thanks to this we can enforce receiving snapshots in a single
step.

Note that this functionality is not supported on Linux because the VFS
will prevent active mounted filesystems from being unmounted, even with
the force option. This is the intended VFS behavior.

Test cases were added to verify the expected behavior based on the
platform.

Discussed-with: Pawel Jakub Dawidek <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Allan Jude <[email protected]>
External-issue: https://reviews.freebsd.org/D22306
Closes #9904
* ZTS: Use default_cleanup_noexit where needed
Ryan Moeller, 2020-03-17 (3 files changed, -3/+7)

And add log_pass appropriately.

Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10136
* Exit status 256+signum is actually baked in to ksh
Ryan Moeller, 2020-03-17 (1 file changed, -9/+3)

While #10121 did fix the signal numbers for FreeBSD/Darwin, it incorrectly
changed the expected encoding of exit status for commands that exited on a
signal. The encoding 256+signum is a feature of the shell. Only the signal
numbers themselves are platform-dependent.

Always use the encoding 256+signum when checking exit status for signal
exits.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10137
* libzfs: Fix bounds checks for float parsing
Ryan Moeller, 2020-03-16 (2 files changed, -2/+12)

UINT64_MAX is not exactly representable as a double. The closest
representation is UINT64_MAX + 1, so we can use a >= comparison instead of
> for the bounds check.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10127
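The rounding behavior is easy to demonstrate. A small, runnable C program
(independent of libzfs) showing why the `>=` comparison is needed:

```c
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/*
	 * UINT64_MAX = 2^64 - 1 is not representable as a double;
	 * the cast rounds up to the nearest representable value, 2^64.
	 */
	double d = (double)UINT64_MAX;

	printf("d == 2^64: %d\n", d == 0x1p64);	/* prints 1 */

	/*
	 * So 'value > (double)UINT64_MAX' fails to reject a value of
	 * exactly 2^64, which overflows uint64_t; '>=' catches it.
	 */
	double value = 0x1p64;
	if (value >= (double)UINT64_MAX)
		printf("out of range\n");
	return (0);
}
```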
* Improve zfs receive performance by batching writes
Matthew Ahrens, 2020-03-16 (2 files changed, -51/+182)

For each WRITE record in the stream, `zfs receive` creates a DMU
transaction (`dmu_tx_create()`) and writes this block's data into the
object. If per-block overheads (as opposed to per-byte overheads) dominate
performance (as is often the case with small recordsize), the
per-dmu-transaction overheads can be significant. For example, in some
workloads the `receive_writer` thread is 100% on CPU, and more than half
of its CPU time is in these per-tx routines (e.g. dmu_tx_hold_write,
dmu_tx_assign, dmu_tx_commit).

To improve performance of `zfs receive`, this commit batches WRITE records
which are to nearby offsets of the same object, and uses one DMU
transaction to write them all. By default the batch size is 1MB, which for
recordsize=8K reduces the number of DMU transactions by 128x for full send
streams (incrementals will depend on how "clumpy" the changed blocks are).

This commit improves the performance of `dd if=stream | zfs recv` from
78,800 blocks/sec to 98,100 blocks/sec (25% improvement).

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #10099
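A userland model of the batching idea -- `tx_create()`, `tx_write()`, and
`tx_commit()` are hypothetical stand-ins for the DMU transaction calls,
and the 1MB batch size is the commit's default:

```c
#include <stdint.h>

#define	RECV_BATCH_SIZE	(1024 * 1024)	/* default 1MB batch */

typedef struct write_rec {
	uint64_t object;
	uint64_t offset;
	uint64_t length;
} write_rec_t;

typedef struct tx tx_t;			/* hypothetical transaction */
tx_t *tx_create(void);
void tx_write(tx_t *, const write_rec_t *);
void tx_commit(tx_t *);

/* Write a run of batched records under a single transaction. */
static void
flush_batch(const write_rec_t *recs, int n)
{
	tx_t *tx = tx_create();

	for (int i = 0; i < n; i++)
		tx_write(tx, &recs[i]);
	tx_commit(tx);
}

void
receive_writer(const write_rec_t *recs, int nrecs)
{
	int first = 0;
	uint64_t batched = 0;

	for (int i = 0; i < nrecs; i++) {
		/*
		 * Flush when the record is for a different object or
		 * the batch would grow past RECV_BATCH_SIZE; nearby
		 * writes to the same object share one transaction.
		 */
		if (i > first && (recs[i].object != recs[first].object ||
		    batched + recs[i].length > RECV_BATCH_SIZE)) {
			flush_batch(&recs[first], i - first);
			first = i;
			batched = 0;
		}
		batched += recs[i].length;
	}
	if (nrecs > first)
		flush_batch(&recs[first], nrecs - first);
}
```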
* Remove CI builder customization from TEST
Brian Behlendorf, 2020-03-16 (1 file changed, -61/+0)

The default options are reasonable for all of the CI builders.

* TEST_XFSTESTS_SKIP=yes - This is already the default.
* TEST_ZTEST_TIMEOUT=3600 - Increased ztest run time only increases code
  coverage by a small degree. Default 900s runs are sufficient.
* Disabling certain tests on 32-bit builders is no longer needed.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Kjeld Schouten <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #10129
* ZTS: Update flaky tests in zts-report
Ryan Moeller, 2020-03-13 (1 file changed, -7/+20)

Some tests which pass on FreeBSD but fail on Linux had been put in the
"maybe" set. Move these back to "known" under an "if Linux" check so the
expected outcome is clear.

Add some tests that have been found to be flaky on FreeBSD stable/12 to
the "maybe" set.

Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10120
* dmu_objset_from_ds must be called with dp_config_rwlock held
Matthew Ahrens, 2020-03-12 (7 files changed, -115/+101)

The normal lock order is that the dp_config_rwlock must be held before the
ds_opening_lock. For example, dmu_objset_hold() does this. However,
dmu_objset_open_impl() is called with the ds_opening_lock held, and if the
dp_config_rwlock is not already held, it will attempt to acquire it. This
may lead to deadlock, since the lock order is reversed.

Looking at all the callers of dmu_objset_open_impl() (which is principally
the callers of dmu_objset_from_ds()), almost all callers already have the
dp_config_rwlock. However, there are a few places in the send and receive
code paths that do not. For example: dsl_crypto_populate_key_nvlist,
send_cb, dmu_recv_stream, receive_write_byref, redact_traverse_thread.

This commit resolves the problem by requiring all callers of
dmu_objset_from_ds() to hold the dp_config_rwlock. In most cases, the code
has been restructured such that we call dmu_objset_from_ds() earlier on in
the send and receive processes, when we already have the dp_config_rwlock,
and save the objset_t until we need it in the middle of the send or
receive (similar to what we already do with the dsl_dataset_t). Thus we do
not need to acquire the dp_config_rwlock in many new places.

I also cleaned up code in dmu_redact_snap() and send_traverse_thread().

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Paul Zuchowski <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #9662
Closes #10115
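The required order, sketched as a fragment with the real primitives
(surrounding code elided; `FTAG` is the usual ZFS tag macro):

```c
	/* Correct order: dp_config_rwlock before ds_opening_lock. */
	rrw_enter(&dp->dp_config_rwlock, RW_READER, FTAG);
	error = dmu_objset_from_ds(ds, &os);	/* takes ds_opening_lock */
	rrw_exit(&dp->dp_config_rwlock, FTAG);
```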
* Fix infinite scan on a pool with only special allocations
Alexander Motin, 2020-03-12 (1 file changed, -3/+6)

Attempting to run scrub or resilver on a new pool containing only special
allocations (special vdev added on creation) caused an infinite loop
because of dsl_scan_should_clear() limiting memory usage to 5% of pool
size, which it calculated accounting only the normal allocation class.
Adding the special and, just in case, dedup classes fixes the issue.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored-By: iXsystems, Inc.
Closes #10106
Closes #8694
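A hedged sketch of the adjusted cap -- the 5% figure is from the message
above, but the exact expression in dsl_scan_should_clear() may differ:

```c
	/*
	 * Cap scan memory usage at 5% of pool size, now summing all
	 * allocation classes instead of just the normal class.
	 */
	uint64_t mlim = (metaslab_class_get_space(spa_normal_class(spa)) +
	    metaslab_class_get_space(spa_special_class(spa)) +
	    metaslab_class_get_space(spa_dedup_class(spa))) / 20;
```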
* ZTS: Use correct signal numbers for status checks
Ryan Moeller, 2020-03-12 (1 file changed, -6/+29)

Different operating systems encode exit status in different ways. The
logapi shell library assumes the Solaris meaning of exit codes, which is
not correct on other platforms.

Define the needed constants according to the platform we are running on
and use those to decode process exit status.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10121
* ZTS: Test boundary conditions in alloc_class_012
Ryan Moeller, 2020-03-12 (2 files changed, -39/+57)

Issue #9142 describes an error in the checks for device removal that can
prevent removal of special allocation class vdevs in some situations.

Enhance alloc_class/alloc_class_012_pos to check situations where this bug
occurs. Update zts-report with knowledge of issue #9142.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10116
Issue #9142
* ZTS: Wait for free space between write_dirs tests
Ryan Moeller, 2020-03-12 (4 files changed, -1/+8)

Cleanup for write_dirs involves destroying a dataset filling a pool and
then recreating the dataset for the next test. Due to the asynchronous
nature of free space accounting, recreating the dataset can fail for lack
of space, causing problems for the next test.

Add wait_freeing $TESTPOOL to wait for the space to be freed and then
sync_pool $TESTPOOL to update the space accounting before attempting to
recreate the test filesystem.

Only use a single disk to create the pool. Make it a small file so it does
not take too long to fill.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10112
* Prevent race condition in dnode_dest (#10101)
John Poduska, 2020-03-12 (3 files changed, -6/+15)

dnode_special_close() waits for the refcount of dn_holds to go to zero
without holding the dn_mtx. dnode_rele_and_unlock() does the final remove
to dn_holds with dn_mtx being held:

    refs = zfs_refcount_remove(&dn->dn_holds, tag);
    mutex_exit(&dn->dn_mtx);

So, there is a race condition after the remove until dn_mtx is dropped.
During that time, dnode_destroy() can get called, which ends up in
dnode_dest() calling mutex_destroy() and a panic since the lock is still
held.

This change adds a condvar to wait for the final dnode_rele_and_unlock()
to release the dn_mtx before calling dnode_destroy().

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: John Poduska <[email protected]>
Closes #7814
Closes #10101
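The race and its fix can be modeled in userland with POSIX threads -- a
sketch of the synchronization pattern only, with hypothetical names, not
the dnode code itself:

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct dnode_model {
	pthread_mutex_t	mtx;		/* models dn_mtx */
	pthread_cond_t	released;	/* the condvar the fix adds */
	int		holds;		/* models dn_holds */
	bool		release_done;
} dnode_model_t;

/* Final release: signal *while still holding* the mutex. */
void
dnode_rele_model(dnode_model_t *dn)
{
	pthread_mutex_lock(&dn->mtx);
	if (--dn->holds == 0) {
		dn->release_done = true;
		pthread_cond_broadcast(&dn->released);
	}
	pthread_mutex_unlock(&dn->mtx);
}

/*
 * Destroyer: waiting on the condvar (instead of watching the refcount
 * with no lock held) guarantees the releasing thread has left the
 * critical section before the mutex is torn down.
 */
void
dnode_destroy_model(dnode_model_t *dn)
{
	pthread_mutex_lock(&dn->mtx);
	while (!dn->release_done)
		pthread_cond_wait(&dn->released, &dn->mtx);
	pthread_mutex_unlock(&dn->mtx);

	pthread_mutex_destroy(&dn->mtx);	/* now safe */
	pthread_cond_destroy(&dn->released);
}
```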
* Prevent deadlock in arc_read in Linux memory reclaim callback
Mark Roper, 2020-03-12 (1 file changed, -0/+12)

Using zfs with Lustre, an arc_read can trigger kernel memory allocation
that in turn leads to a memory reclaim callback and a deadlock within a
single zfs process. This change uses spl_fstrans_mark and
spl_fstrans_unmark to prevent the reclaim attempt and the deadlock
(https://zfsonlinux.topicbox.com/groups/zfs-devel/T4db2c705ec1804ba).

The stack trace observed is:

    __schedule at ffffffff81610f2e
    schedule at ffffffff81611558
    schedule_preempt_disabled at ffffffff8161184a
    __mutex_lock at ffffffff816131e8
    arc_buf_destroy at ffffffffa0bf37d7 [zfs]
    dbuf_destroy at ffffffffa0bfa6fe [zfs]
    dbuf_evict_one at ffffffffa0bfaa96 [zfs]
    dbuf_rele_and_unlock at ffffffffa0bfa561 [zfs]
    dbuf_rele_and_unlock at ffffffffa0bfa32b [zfs]
    osd_object_delete at ffffffffa0b64ecc [osd_zfs]
    lu_object_free at ffffffffa06d6a74 [obdclass]
    lu_site_purge_objects at ffffffffa06d7fc1 [obdclass]
    lu_cache_shrink_scan at ffffffffa06d81b8 [obdclass]
    shrink_slab at ffffffff811ca9d8
    shrink_node at ffffffff811cfd94
    do_try_to_free_pages at ffffffff811cfe63
    try_to_free_pages at ffffffff811d01c4
    __alloc_pages_slowpath at ffffffff811be7f2
    __alloc_pages_nodemask at ffffffff811bf3ed
    new_slab at ffffffff81226304
    ___slab_alloc at ffffffff812272ab
    __slab_alloc at ffffffff8122740c
    kmem_cache_alloc at ffffffff81227578
    spl_kmem_cache_alloc at ffffffffa048a1fd [spl]
    arc_buf_alloc_impl at ffffffffa0befba2 [zfs]
    arc_read at ffffffffa0bf0924 [zfs]
    dbuf_read at ffffffffa0bf9083 [zfs]
    dmu_buf_hold_by_dnode at ffffffffa0c04869 [zfs]

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Mark Roper <[email protected]>
Closes #9987
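The shape of the fix, as a hedged fragment -- `spl_fstrans_mark()` and
`spl_fstrans_unmark()` are the SPL interfaces named above, but the exact
placement inside arc_read()'s allocation path is approximate:

```c
	fstrans_cookie_t cookie = spl_fstrans_mark();
	/*
	 * Memory allocated in here can no longer recurse into
	 * filesystem reclaim (and back into the dbuf/ARC locks this
	 * thread already holds).
	 */
	/* ... allocation-prone arc_read() work ... */
	spl_fstrans_unmark(cookie);
```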
* zloop.sh should call ZDB with pool name
Olaf Faaland, 2020-03-11 (1 file changed, -1/+1)

Commit 54007c79 introduced an error, changing the final argument to $ZDB
from ztest to $ZTEST. This argument indicates the pool name, not the
script, and so should not have been changed.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Olaf Faaland <[email protected]>
Closes #10118
* ZTS: Add a failsafe callback to run after each test
Ryan Moeller, 2020-03-10 (7 files changed, -48/+145)

Tests that get killed do not have an opportunity to clean up. There are
many bad states this can leave the system in, but of particular gravity
is when zinject has been used to induce bad behavior for one or more of
the test disks.

Create a failsafe mechanism in test-runner.py that runs a callback script
after every test. The script is common to all tests so all tests benefit
from the protection.

Add an obligatory `zinject -c all` to clear all zinject state after every
test case is run.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10096
* Improve zfs send performance by bypassing the ARC
Matthew Ahrens, 2020-03-10 (4 files changed, -151/+235)

When doing a zfs send on a dataset with small recordsize (e.g. 8K),
performance is dominated by the per-block overheads. This is especially
true with `zfs send --compressed`, which further reduces the amount of
data sent, for the same number of blocks. Several threads are involved,
but the limiting factor is the `send_prefetch` thread, which is 100% on
CPU.

The main job of the `send_prefetch` thread is to issue zio's for the data
that will be needed by the main thread. It does this by calling
`arc_read(ARC_FLAG_PREFETCH)`. This has an immediate cost of creating an
arc_hdr, which takes around 14% of one CPU. It also induces later costs by
other threads:

* Since the data was only prefetched, dmu_send()->dmu_dump_write() will
  need to call arc_read() again to get the data. This will have to look up
  the arc_hdr in the hash table and copy the data from the scatter ABD in
  the arc_hdr to a linear ABD in arc_buf. This takes 27% of one CPU.
* dmu_dump_write() needs to arc_buf_destroy(). This takes 11% of one CPU.
* arc_adjust() will need to evict this arc_hdr, taking about 50% of one
  CPU.

All of these costs can be avoided by bypassing the ARC if the data is not
already cached. This commit changes `zfs send` to check for the data in
the ARC, and if it is not found then we directly call `zio_read()`,
reading the data into a linear ABD which is used by dmu_dump_write()
directly.

The performance improvement is best expressed in terms of how many blocks
can be processed by `zfs send` in one second. This change increases the
metric by 50%, from ~100,000 to ~150,000. When the amount of data per
block is small (e.g. 2KB), there is a corresponding reduction in the
elapsed time of `zfs send >/dev/null` (from 86 minutes to 58 minutes in
this test case).

In addition to improving the performance of `zfs send`, this change makes
`zfs send` not pollute the ARC cache. In most cases the data will not be
reused, so this allows us to keep caching useful data in the MRU
(hit-once) part of the ARC.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #10067
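A hedged sketch of the resulting read path -- `arc_is_cached()` and
`read_via_arc()` are hypothetical stand-ins for the cached-in-ARC probe
and the old path, while `zio_read()`, `abd_alloc_linear()`, and
`zio_wait()` are the real primitives named above:

```c
static int
send_read_block(spa_t *spa, const blkptr_t *bp, uint64_t size,
    const zbookmark_phys_t *zb, abd_t **abdp)
{
	if (arc_is_cached(spa, bp)) {
		/* Already cached: keep reading through the ARC. */
		return (read_via_arc(spa, bp, zb, abdp));
	}

	/*
	 * Cache miss: read straight into a linear ABD that
	 * dmu_dump_write() can consume directly. No arc_hdr is
	 * created, so nothing is hashed, copied, or evicted later.
	 */
	*abdp = abd_alloc_linear(size, B_FALSE);
	return (zio_wait(zio_read(NULL, spa, bp, *abdp, size,
	    NULL, NULL, ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL, zb)));
}
```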
* ZTS: Simplify some libtest functions
Ryan Moeller, 2020-03-10 (1 file changed, -30/+9)

Don't echo the results of arithmetic expressions; it's not necessary.

Use the hw.clockrate sysctl to get the CPU frequency instead of parsing
dmesg.boot for a line that might not even be there anymore.

Reduce bookkeeping in fill_fs, making it easier to follow.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10113
* Fix zfs-functions packaging bug
Richard Laager, 2020-03-10 (12 files changed, -27/+48)

This fixes a bug where the generated zfs-functions was being included
along with the original zfs-functions.in in the make dist tarball. This
caused an unfortunate series of events during build/packaging that
resulted in the RPM-installed /etc/zfs/zfs-functions listing the paths as:

    ZFS="/usr/local/sbin/zfs"
    ZED="/usr/local/sbin/zed"
    ZPOOL="/usr/local/sbin/zpool"

When they should have been:

    ZFS="/sbin/zfs"
    ZED="/sbin/zed"
    ZPOOL="/sbin/zpool"

This affects init.d (non-systemd) distros like CentOS 6.

/etc/default/zfs and /etc/zfs/zfs-functions are also used by the
initramfs, so they need to be built even when init.d support is not. They
have been moved to the (new) etc/default and (existing) etc/zfs source
directories, respectively.

Fixes: #9443
Co-authored-by: Tony Hutter <[email protected]>
Signed-off-by: Richard Laager <[email protected]>
* initramfs: Eliminate substitutions
Richard Laager, 2020-03-10 (2 files changed, -26/+2)

These are now handled in zfs-functions, so this is all duplicative and
unnecessary.

Signed-off-by: Richard Laager <[email protected]>
* Delete built init scripts in make clean
Richard Laager, 2020-03-10 (1 file changed, -3/+1)

Previously, they were being deleted in make distclean. This brings it in
line with the example:
https://www.gnu.org/software/automake/manual/html_node/Scripts.html

Signed-off-by: Richard Laager <[email protected]>
* Make init scripts depend on Makefile
Richard Laager, 2020-03-10 (1 file changed, -1/+1)

This brings it in line with the example:
https://www.gnu.org/software/automake/manual/html_node/Scripts.html

This way, if the substitution code is changed, they should update.

Signed-off-by: Richard Laager <[email protected]>
* Systemd mount generator: don't fail keyload from file if already loaded
InsanePrawn, 2020-03-09 (1 file changed, -7/+11)

Previously the generated keyload units for encryption roots with
keylocation=file://* didn't contain the code to detect if the key was
already loaded, and would be marked failed in such situations.

Move the code to check whether the key is already loaded from
keylocation=prompt handling to the general key loading code.

Reviewed-by: Richard Laager <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: InsanePrawn <[email protected]>
Closes #10103
* ZTS: Another round of changes for FreeBSD
Ryan Moeller, 2020-03-06 (11 files changed, -59/+82)

Highlights:
* is_linux -> is_illumos swaps
* make block_device_wait more clever when paths are given
* slightly optimize default_cleanup_noexit
* remove platform differences in user_run
* temporarily expect non-libfetch behavior for keylocation=/foo/bar
* fix sharenfs exceptions
* don't test multihost property
* fix misc broken platform checks
* clear zinjected faults in removal_resume_export callback

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #10092