path: root/module
* Remove unused check from dmu_tx_count_write() (Brian Behlendorf, 2020-12-21; 1 file, -3/+0)

Individual transactions may not be larger than DMU_MAX_ACCESS. This is enforced by the assertions in dmu_tx_hold_write() and dmu_tx_hold_write_by_dnode(). There is an additional check in dmu_tx_count_write(), but it has no effect and only sets a local err variable. We could enable this check; however, since it's already enforced by ASSERTs elsewhere, I opted to remove it instead.

Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #3731
Closes #11384
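A reduced sketch of the removed pattern; DMU_MAX_ACCESS and SET_ERROR() are real ZFS symbols, but the function body below is hypothetical, not the actual dmu_tx_count_write() code:

```c
/* Hypothetical reduction of the removed check: the error is computed
 * but never returned, so the check could never reject anything. */
static void
count_write_sketch(uint64_t off, uint64_t len)
{
	int err = 0;

	if (off + len > DMU_MAX_ACCESS)
		err = SET_ERROR(EFBIG);	/* dead store */
	(void) err;	/* never propagated; hence the removal */
}
```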
* Dangling reference from dmu_objset_upgrade (Andy Fiddaman, 2020-12-21; 1 file, -1/+8)

After porting the fix for https://github.com/openzfs/zfs/issues/5295 over to illumos, we started hitting an assertion failure when running the test suite:

    assertion failed: rc->rc_count == number, file: .../refcount.c

and the unexpected hold has this stack:

    dsl_dataset_long_hold+0x59
    dmu_objset_upgrade+0x73
    dmu_objset_id_quota_upgrade+0x15
    dmu_objset_own+0x14f

The simplest reproducer for this in illumos is:

    zpool create -f -O version=1 testpool c3t0d0; zpool destroy testpool

which is run as part of the zpool_create_tempname test, but I can't get this to trigger on FreeBSD. This appears to be because the call to txg_wait_synced() in dmu_objset_upgrade_stop() (which was missing in illumos) slows down dmu_objset_disown() enough to avoid the condition.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Andy Fiddaman <[email protected]>
Closes #11368
* Fix maybe uninitialized variable warning (Brian Behlendorf, 2020-12-20; 1 file, -1/+1)

Commit 1c2358c12 restructured this code and introduced a warning about the variable possibly not being initialized. This cannot happen with the updated code, but we should initialize the variable anyway to silence the warning.

    zpl_file.c: In function ‘zpl_iter_write’:
    zpl_file.c:324:9: warning: ‘count’ may be used uninitialized in this function [-Wmaybe-uninitialized]

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11373
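A minimal illustration of this class of warning under assumed control flow (not the actual zpl_iter_write() logic): gcc cannot prove the guarded assignment always precedes the guarded use, so initializing the variable is the cheap fix.

```c
/* gcc may warn that 'count' is used uninitialized in code shaped like
 * this, even though both branches are guarded by the same condition. */
static long
warning_sketch(int ok, long n)
{
	long count = 0;	/* the fix: initialize despite the guard */

	if (ok)
		count = n;
	if (ok)
		return (count);	/* warned about when uninitialized */
	return (0);
}
```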
* Remove iov_iter_advance() from iter_read (Brian Behlendorf, 2020-12-20; 1 file, -3/+0)

There's no need to call iov_iter_advance() in zpl_iter_read(). This was preserved from the previous code, where it wasn't needed but also didn't cause any problems. Now that the iter functions also handle pipes, that's no longer the case. When fully reading a pipe buffer, iov_iter_advance() may result in the pipe buf release function being called; since that function was never registered, this results in a NULL dereference.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11375
Closes #11378
* dsl_pool: extend comment on DSL Pool Configuration Lock (Christian Schwarz, 2020-12-19; 1 file, -2/+29)

Based on a conversation with Matt on the OpenZFS Slack.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Christian Schwarz <[email protected]>
Closes #11370
* Linux 5.10 compat: also zvol_revalidate_disk() (Michael D Labriola, 2020-12-18; 1 file, -2/+3)

Commit 59b68723 added a configure check for 5.10, which removed revalidate_disk(), and conditionally replaced its usage with a call to the new revalidate_disk_size() function. However, the old function also invoked the device's registered callback, in our case zvol_revalidate_disk(). This commit adds a call to zvol_revalidate_disk() in zvol_update_volsize() to make sure the code path stays the same.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Michael D Labriola <[email protected]>
Closes #11358
* Linux 5.10 compat: use iov_iter in uio structure (Brian Behlendorf, 2020-12-18; 9 files, -307/+300)

As of the 5.10 kernel the generic splice compatibility code has been removed. All filesystems are now responsible for registering a ->splice_read and ->splice_write callback to support this operation.

The good news is that the VFS-provided generic_file_splice_read() and iter_file_splice_write() callbacks can be used, provided the ->iter_read and ->iter_write callbacks support pipes. However, this is currently not the case: only iovecs and bvecs (not pipes) are ever attached to the uio structure.

This commit changes that by allowing full iov_iter structures to be attached to uios. Ever since the 4.9 kernel the iov_iter structure has supported iovecs, kvecs, bvecs, and pipes, so it's desirable to pass the entire thing when possible. In conjunction with this, the uio helper functions (i.e. uiomove(), uiocopy(), etc.) have been updated to understand the new UIO_ITER type.

Note that using the kernel-provided iov_iter interfaces allowed the existing Linux-specific uio handling code to be simplified. When there's no longer a need to support kernels older than 4.9, it will be possible to remove the iovec and bvec members from the uio structure and always use an iov_iter. Until then we need to maintain all of the existing types for older kernels.

Some additional refactoring and cleanup was included in this change:

- Added configure checks to detect available iov_iter interfaces. Some are available all the way back to the 3.10 kernel and are used when available. In particular, uio_prefaultpages() now always uses iov_iter_fault_in_readable(), which is available for all supported kernels.

- The unused UIO_USERISPACE type has been removed. It is no longer needed now that the uio_seg enum is platform specific.

- Moved zfs_uio.c from the zcommon.ko module to the Linux-specific platform code for the zfs.ko module. This gets it out of libzfs, where it was never needed, and keeps this Linux-specific code out of the common sources.

- Removed unnecessary O_APPEND handling from zfs_iter_write(); this is redundant, and O_APPEND is already handled in zfs_write().

Reviewed-by: Colin Ian King <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11351
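A sketch of the resulting structure, assuming a Linux kernel context where struct iov_iter exists; the field and enum names follow the commit text, but this is a simplification, not the actual OpenZFS definition:

```c
struct iovec;
struct iov_iter;	/* provided by the Linux kernel (>= 4.9) */

typedef enum zfs_uio_seg_sketch {
	UIO_USERSPACE,
	UIO_SYSSPACE,
	UIO_BVEC,
	UIO_ITER,	/* new: carry a full iov_iter */
} zfs_uio_seg_sketch_t;

typedef struct zfs_uio_sketch {
	union {
		const struct iovec	*uio_iov;	/* older kernels */
		struct iov_iter		*uio_iter;	/* UIO_ITER case */
	};
	int			uio_iovcnt;
	long long		uio_loffset;
	size_t			uio_resid;
	zfs_uio_seg_sketch_t	uio_segflg;
} zfs_uio_sketch_t;

/* uiomove() and friends then dispatch on uio_segflg, using the
 * kernel's copy_to_iter()/copy_from_iter() for the UIO_ITER case,
 * which transparently handles iovecs, kvecs, bvecs, and pipes. */
```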
* special device removal space accounting fixes (Matthew Ahrens, 2020-12-17; 2 files, -31/+40)

The space in special devices is not included in spa_dspace (or dsl_pool_adjustedsize(), or the zfs `available` property). Therefore there is always at least as much free space in the normal class as there is allocated in the special class(es). And therefore, there is always enough free space to remove a special device.

However, the checks for free space when removing special devices did not take this into account. This commit corrects that.

Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Don Brady <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #11329
* Avoid extra work updating ARC kstats and tunables (Ryan Moeller, 2020-12-17; 1 file, -16/+9)

After e357046 it should not be necessary to periodically update ARC kstats and tunables. Tunable updates are applied when modified, and kstats are updated on demand.

Update kstats in `arc_evict_cb_check()` for `ZFS_DEBUG` builds only.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11237
* Only examine best metaslabs on each vdev (Matthew Ahrens, 2020-12-16; 2 files, -55/+60)

On a system with very high fragmentation, we may need to do lots of gang allocations (e.g. most indirect block allocations (~50KB) may need to gang). Before failing a "normal" allocation and resorting to ganging, we try every metaslab. This has the impact of loading every metaslab (not a huge deal since we now typically keep all metaslabs loaded), and also iterating over every metaslab for every failing allocation. If there are many metaslabs (more than the typical ~200, e.g. due to vdev expansion or very large vdevs), the CPU cost of this iteration can be very impactful. This iteration is done with the mg_lock held, creating long hold times and high lock contention for concurrent allocations, ultimately causing long txg sync times and poor application performance.

To address this, this commit changes the behavior of "normal" (not try_hard, not ZIL) allocations. These will now only examine the 100 best metaslabs (as determined by their ms_weight). If none of these have a large enough free segment, then the allocation will fail and we'll fall back on ganging.

To accomplish this, we will now (normally) gang before doing a `try_hard` allocation. Non-try_hard allocations will only examine the 100 best metaslabs of each vdev. In summary, we will first try normal allocation. If that fails then we will do a gang allocation. If that fails then we will do a "try hard" gang allocation. If that fails then we will have a multi-layer gang block.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #11327
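A sketch of the new policy, assuming a weight-ordered AVL tree of metaslabs; avl_first()/AVL_NEXT() are the real ZFS AVL interfaces, while metaslab_fits() and the tunable name here are hypothetical stand-ins:

```c
#define	NORMAL_MAX_SEARCH	100	/* hypothetical tunable */

static metaslab_t *
pick_metaslab_sketch(avl_tree_t *by_weight, uint64_t asize,
    boolean_t (*metaslab_fits)(metaslab_t *, uint64_t))
{
	metaslab_t *ms;
	int tried = 0;

	/* Walk from the highest ms_weight down, but give up early. */
	for (ms = avl_first(by_weight);
	    ms != NULL && tried < NORMAL_MAX_SEARCH;
	    ms = AVL_NEXT(by_weight, ms), tried++) {
		if (metaslab_fits(ms, asize))
			return (ms);
	}
	return (NULL);	/* caller falls back to ganging, then try_hard */
}
```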
* Make metaslab class rotor and aliquot per-allocator (Alexander Motin, 2020-12-15; 4 files, -76/+82)

Metaslab rotor and aliquot are used to distribute workload between vdevs while keeping some locality for logically adjacent blocks. Once multiple allocators were introduced to separate allocation of different objects, it does not make much sense for different allocators to write into different metaslabs of the same metaslab group (vdev) at the same time, competing for its resources. This change makes each allocator choose a metaslab group independently, colliding with others only sporadically.

A test of simultaneous writes into 4 files with a recordsize of 4KB on a striped pool of 30 disks, on a system with 40 logical cores, showed a reduction of vdev queue lock contention from 54% to 27% due to better load distribution. Unfortunately it won't help ZVOLs much yet, since only one dataset/ZVOL is synced at a time, and so for the most part only one allocator is used; but it may improve later.

While there, to reduce the number of pointer dereferences, change per-allocator storage for metaslab classes and groups from several separate malloc()'s to variable-length arrays at the ends of the original class and group structures.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Closes #11288
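A sketch of the layout change described in the last paragraph, using hypothetical, reduced structures: the per-allocator state becomes a C99 flexible array member at the end of the class structure, so it lives in the same allocation.

```c
typedef struct mc_allocator_sketch {
	uint64_t	mca_rotor;	/* per-allocator rotor */
	uint64_t	mca_aliquot;	/* per-allocator aliquot */
} mc_allocator_sketch_t;

typedef struct mc_sketch {
	int			mc_count;	/* number of allocators */
	mc_allocator_sketch_t	mc_allocator[];	/* flexible array member */
} mc_sketch_t;

/* One allocation covers the class and all its allocators:
 *   mc = kmem_zalloc(offsetof(mc_sketch_t, mc_allocator[n]), KM_SLEEP);
 * so reaching mc->mc_allocator[i] costs no extra dereference. */
```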
* lua: avoid gcc -Wreturn-local-addr bug (Ryan Libby, 2020-12-15; 1 file, -4/+6)

Avoid a bug with gcc's -Wreturn-local-addr warning with some obfuscation. In buggy versions of gcc, if a return value is an expression that involves the address of a local variable, then even if that address is legally converted to a non-pointer type, a warning may be emitted and the value of the address may be replaced with zero. However, buggy versions don't emit the warning or replace the value when simply returning a local variable of non-pointer type.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90737

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Libby <[email protected]>
Closes #11337
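An illustration of the bug and the dodge in generic C (not the actual lua change): both functions return the same converted address, but only the first form trips the buggy warning and zeroing.

```c
#include <stdint.h>

/* Buggy gcc may warn here and replace the return value with zero,
 * even though the address is legally converted to a non-pointer. */
static unsigned long
addr_cookie_bad(void)
{
	char local;

	return ((unsigned long)(uintptr_t)&local);
}

/* Returning a non-pointer local holding the same value is accepted. */
static unsigned long
addr_cookie_ok(void)
{
	char local;
	unsigned long cookie = (unsigned long)(uintptr_t)&local;

	return (cookie);
}
```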
* dmu_zfetch: fix memory leak (Matthew Macy, 2020-12-12; 1 file, -4/+3)

The last change caused the read completion callback to not be called if the IO was still in progress. This change restores allocation of the arc buf callback, but in the callback path checks the new acb_nobuf field to know to skip buffer allocation.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matt Macy <[email protected]>
Closes #11324
* Fix reporting of CKSUM errors in indirect vdevs (George Amanakis, 2020-12-11; 1 file, -9/+21)

When removing and subsequently reattaching a vdev, CKSUM errors may occur as vdev_indirect_read_all() reads from all children of a mirror in case of a resilver. Fix this by checking whether a child is missing the data and setting a flag (ic_error), which is then checked in vdev_indirect_repair() and suppresses incrementing the checksum counter.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #11277
* FreeBSD: Implement sysctl for fletcher4 impl (Ryan Moeller, 2020-12-11; 1 file, -8/+59)

There is a tunable to select the fletcher 4 checksum implementation on Linux, but it was not present in FreeBSD. Implement the sysctl handler for FreeBSD and use ZFS_MODULE_PARAM_CALL to provide the tunable on both platforms.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11270
* Improve zfs receive performance with lightweight write (Matthew Ahrens, 2020-12-11; 8 files, -265/+512)

The performance of `zfs receive` can be bottlenecked on the CPU consumed by the `receive_writer` thread, especially when receiving streams with small compressed block sizes. Much of the CPU is spent creating and destroying dbufs and arc bufs, one for each `WRITE` record in the send stream.

This commit introduces the concept of "lightweight writes", which allows `zfs receive` to write to the DMU by providing an ABD, instantiating only a new type of `dbuf_dirty_record_t`. The dbuf and arc buf for this "dirty leaf block" are not instantiated. Because there is no dbuf with the dirty data, this mechanism doesn't support reading from "lightweight-dirty" blocks (they would see the on-disk state rather than the dirty data). Since the dedup-receive code has been removed, `zfs receive` is write-only, so this works fine. Because there are no arc bufs for the received data, the received data is no longer cached in the ARC.

Testing a receive of a stream with an average compressed block size of 4KB, this commit improves performance by 50%, while also reducing CPU usage by 50% of a CPU. On a per-block basis, CPU consumed by receive_writer() and dbuf_evict() is now 1/7th (14%) of what it was.

    Baseline: 450MB/s, CPU in receive_writer() 40% + dbuf_evict() 35%
    New:      670MB/s, CPU in receive_writer() 17% + dbuf_evict() 0%

The code is also restructured in a few ways:

Added a `dr_dnode` field to the dbuf_dirty_record_t. This simplifies some existing code that no longer needs `DB_DNODE_ENTER()` and related routines. The new field is needed by the lightweight-type dirty record.

To ensure that the `dr_dnode` field remains valid until the dirty record is freed, we have to ensure that `dnode_move()` doesn't relocate the dnode_t. To do this we keep a hold on the dnode until its zios have completed. This is already done by the user-accounting code (`userquota_updates_task()`); this commit extends that so that it always keeps the dnode hold until zio completion (see `dnode_rele_task()`).

`dn_dirty_txg` was previously zeroed when the dnode was synced. This was not necessary, since its meaning can be "when was this dnode last dirtied". This change simplifies the new `dnode_rele_task()` code.

Removed some dead code related to `DRR_WRITE_BYREF` (dedup receive).

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: George Wilson <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #11105
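A conceptual sketch of a lightweight dirty record; the field names follow the commit text, but the struct is hypothetical and far smaller than the real dbuf_dirty_record_t:

```c
typedef struct lightweight_dr_sketch {
	dnode_t		*dr_dnode;	/* new back-pointer; the dnode is
					 * held until the zio completes so
					 * dnode_move() cannot relocate it */
	uint64_t	dr_blkid;	/* which leaf block is dirty */
	abd_t		*dr_abd;	/* the received data itself */
	zio_prop_t	dr_props;	/* compression/checksum props */
} lightweight_dr_sketch_t;

/* No dbuf_t and no arc_buf_t are created for the block, which is why
 * such blocks cannot be read back until they have been synced out. */
```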
* Fix kernel panic induced by redacted send (Paul Dagnelie, 2020-12-11; 1 file, -61/+53)

In the redaction list traversal code, there is a bug in the binary search logic when looking for the resume point. The maxbufid can be decremented to -1, causing us to read the last possible block of the object instead of the one we wanted. This can cause incorrect resume behavior, or possibly even a hang in some cases. In addition, when examining non-last blocks, we can treat the block as being the same size as the last block, causing us to miss entries in the redaction list when determining where to resume. Finally, we were ignoring the case where the resume point was found in the buffer being searched, and resuming from minbufid. All these issues have been corrected, and the code has been significantly simplified to make future issues less likely.

Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes #11297
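A sketch of a corrected search; first_entry_of() is a hypothetical callback abstracting "read the first redaction entry of buffer N". The midpoint is biased upward so the low bound only moves forward and the index can never underflow to -1:

```c
static uint64_t
resume_bufid_sketch(uint64_t target, uint64_t nbufs,
    uint64_t (*first_entry_of)(uint64_t bufid))
{
	uint64_t lo = 0;
	uint64_t hi = (nbufs == 0) ? 0 : nbufs - 1;

	while (lo < hi) {
		/* Round up so mid >= lo + 1, keeping hi = mid - 1 safe. */
		uint64_t mid = lo + (hi - lo + 1) / 2;

		if (first_entry_of(mid) <= target)
			lo = mid;	/* resume point at or beyond mid */
		else
			hi = mid - 1;	/* cannot underflow: mid >= 1 */
	}
	return (lo);	/* unsigned throughout; no -1 possible */
}
```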
* FreeBSD: Fix format of vfs.zfs.arc_no_grow_shift (Ryan Moeller, 2020-12-10; 1 file, -6/+5)

vfs.zfs.arc_no_grow_shift has an invalid type (15), and this causes py-sysctl to format it as a bytearray when it should be an integer. "U" is not a valid format; it should be "I", and the type should match the variable type, int. We can return EINVAL if the value is set below zero.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11318
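A sketch of a corrected handler along the lines the commit describes; sysctl_handle_int() and SYSCTL_HANDLER_ARGS are the real FreeBSD interfaces, but this is a simplification, not the exact patch:

```c
extern int arc_no_grow_shift;	/* the tunable is a plain int */

static int
param_set_arc_no_grow_shift_sketch(SYSCTL_HANDLER_ARGS)
{
	int err, val;

	val = arc_no_grow_shift;
	err = sysctl_handle_int(oidp, &val, 0, req);
	if (err != 0 || req->newptr == NULL)
		return (err);	/* read path, or nothing new to set */
	if (val < 0)
		return (EINVAL);	/* reject negative shifts */
	arc_no_grow_shift = val;
	return (0);
}

/* The node is then registered with format "I" (int), not "U". */
```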
* Fix possibly uninitialized 'root_inode' variable warning (Brian Behlendorf, 2020-12-10; 1 file, -1/+1)

Resolve an uninitialized variable warning when compiling:

    In function ‘zfs_domount’:
    warning: ‘root_inode’ may be used uninitialized in this function [-Wmaybe-uninitialized]
      sb->s_root = d_make_root(root_inode);

Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11306
* Implement memory and CPU hotplug (Paul Dagnelie, 2020-12-10; 9 files, -36/+254)

ZFS currently doesn't react to hotplugging CPU or memory into the system in any way. This patch changes that by adding logic to the ARC that allows the system to take advantage of new memory that is added for caching purposes. It also adds logic to the taskq infrastructure to support dynamically expanding the number of threads allocated to a taskq.

Reviewed-by: Brian Behlendorf <[email protected]>
Co-authored-by: Matthew Ahrens <[email protected]>
Co-authored-by: Brian Behlendorf <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes #11212
* FreeBSD: Do zcommon_init sooner to avoid FPU panic (Ryan Moeller, 2020-12-09; 1 file, -1/+1)

There has been a panic affecting some system configurations where the thread FPU context is disturbed during the fletcher 4 benchmarks, leading to a panic at boot.

module_init() registers zcommon_init to run in the last subsystem (SI_SUB_LAST). Running it as soon as interrupts have been configured (SI_SUB_INT_CONFIG_HOOKS) makes sure we have finished the benchmarks before we start doing other things. While it's not clear *how* the FPU context was being disturbed, this does seem to avoid it.

Add a module_init_early() macro to run zcommon_init() at this earlier point on FreeBSD. On Linux this is defined as module_init().

Authored by: Konstantin Belousov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11302
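A sketch of what such a macro amounts to; SYSINIT() and SI_SUB_INT_CONFIG_HOOKS are the real FreeBSD interfaces, but this is not the exact macro the commit adds:

```c
#ifdef __FreeBSD__
#define	module_init_early(fn)					\
	static void						\
	fn##_wrapper(void *dummy __unused)			\
	{							\
		(void) fn();					\
	}							\
	SYSINIT(fn, SI_SUB_INT_CONFIG_HOOKS, SI_ORDER_FIRST,	\
	    fn##_wrapper, NULL)
#else	/* Linux needs no earlier point */
#define	module_init_early(fn)	module_init(fn)
#endif
```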
* Decouple arc_read_done callback from arc buf instantiation (Matthew Macy, 2020-12-09; 2 files, -3/+5)

Add ARC_FLAG_NO_BUF to indicate that a buffer need not be instantiated. This fixes a ~20% performance regression on cached reads due to the zfetch changes.

Reviewed-by: Tony Nguyen <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Signed-off-by: Matt Macy <[email protected]>
Closes #11220
Closes #11232
* Fix optional "force" arg handling in zfs_ioc_pool_sync() (Brian Behlendorf, 2020-12-09; 1 file, -4/+7)

The fnvlist_lookup_boolean_value() function should not be used to check the force argument, since it's optional. It may not be provided, or it may have been created with the wrong flags.

Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11281
Closes #11284
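A sketch of the safer pattern for an optional argument; the helper is hypothetical, but nvlist_lookup_boolean_value() is the real libnvpair interface, whose error return (unlike the fnvlist_* variant, which asserts the pair exists) can simply be tolerated when the pair is absent:

```c
static boolean_t
get_optional_force_sketch(nvlist_t *innvl)
{
	boolean_t force = B_FALSE;

	/* fnvlist_lookup_boolean_value() would VERIFY the pair exists;
	 * for an optional argument, absence just means "use default". */
	if (innvl != NULL)
		(void) nvlist_lookup_boolean_value(innvl, "force", &force);
	return (force);
}
```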
* Reduce fletcher4 and raidz benchmark times (Brian Behlendorf, 2020-12-06; 2 files, -3/+3)

During module load time all of the available fletcher4 and raidz implementations are benchmarked for a fixed amount of time to determine the fastest available. Manual testing has shown that this time can be significantly reduced with a negligible effect on the final results.

This commit changes the benchmark time to 1ms, which can reduce the module load time by over a second on x86_64. On an x86_64 system with sse3, ssse3, and avx2 instructions the benchmark times are:

    Fletcher4: 603ms -> 15ms
    RAIDZ:     1,322ms -> 64ms

Reviewed-by: Matthew Macy <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11282
* Avoid some spa_has_pending_synctask() calls (Alexander Motin, 2020-12-06; 1 file, -4/+2)

Since 8c4fb36a24 (PR #7795), spa_has_pending_synctask() started to take two more locks per write inside txg_all_lists_empty(). I am surprised those pool-wide locks are not contended, but their operations are still visible in CPU profiles under the contended vdev lock.

This commit slightly changes the vdev_queue_max_async_writes() flow to not call the function if we are going to return max_active anyway due to a high amount of dirty data. It saves some CPU time exactly when the pool is busy.

Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-By: Tom Caputi <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Closes #11280
* Bring consistency to ABD chunk count types (Alexander Motin, 2020-12-06; 2 files, -19/+26)

With both abd_size and abd_nents being uint_t, it makes no sense for abd_chunkcnt_for_bytes() to return size_t. The random mix of different types used to count chunks looks bad and makes it more difficult for the compiler to optimize the code. In particular, on FreeBSD this change allows the compiler to completely optimize out abd_verify_scatter() when built without debug, removing a pointless 64-bit division and an even more pointless empty loop.

Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Closes #11279
* Fix raw sends on encrypted datasets when copying back snapshots (George Amanakis, 2020-12-04; 3 files, -10/+31)

When sending raw encrypted datasets, the user space accounting is present when it's not expected to be. This leads to a subsequent mount failure due to a checksum error when verifying the local mac. Fix this by clearing the OBJSET_FLAG_USERACCOUNTING_COMPLETE flag and resetting the local mac. This allows the user accounting to be correctly updated on first mount using the normal upgrade process.

Reviewed-By: Brian Behlendorf <[email protected]>
Reviewed-By: Tom Caputi <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #10523
Closes #11221
* Fix for "Reduce latency effects of non-interactive I/O"Alexander Motin2020-12-031-4/+4
| | | | | | | | | | | | | | | | | It was found that setting min_active tunables for non-interactive I/Os makes them stuck. It is caused by zfs_vdev_nia_delay, that can never be reached if we never issue any I/Os due to min_active set to zero. Fix this by issuing at least one non-interactive I/O at a time when there are no interactive I/Os. When there are interactive I/Os, zero min_active allows to completely block any non-interactive I/O. It may min_active starvation in some scenarios, but who we are to deny foot shooting? Reviewed-by: George Melikov <[email protected]> Reviewed-by: Ryan Moeller <[email protected]> Reviewed-by: Brian Behlendorf <[email protected]> Signed-off-by: Alexander Motin <[email protected]> Closes #11261
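A sketch of the first half of the fix as a hypothetical helper: with no interactive I/Os in flight, a zero min_active for a non-interactive class is treated as one, so progress is always possible and zfs_vdev_nia_delay can eventually be reached.

```c
static uint32_t
nia_min_active_sketch(uint32_t min_active, uint32_t interactive_in_flight)
{
	if (interactive_in_flight == 0 && min_active == 0)
		return (1);	/* always issue at least one */
	return (min_active);	/* zero still blocks the class while
				 * interactive I/Os are active, as intended */
}
```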
* FreeBSD: notify userspace when a vdev is removed (Ryan Moeller, 2020-12-02; 1 file, -0/+3)

This is needed for zfsd to autoreplace vdevs.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11260
* Avoid unnecessary zio allocation and wait (Finix1979, 2020-12-02; 1 file, -11/+14)

In the function dmu_buf_hold_array_by_dnode, the zio is only used for the read operation. Only create the zio and wait for it in the read scenario, as a performance optimization.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Finix Yan <[email protected]>
Closes #11251
Closes #11256
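A hypothetical reduction of the reshaped flow; zio_root() and zio_wait() are the real interfaces, while issue_reads() stands in for the loop that attaches child reads:

```c
static int
hold_array_sketch(spa_t *spa, boolean_t read,
    int (*issue_reads)(zio_t *parent))
{
	int err = 0;

	if (read) {
		zio_t *zio = zio_root(spa, NULL, NULL, ZIO_FLAG_CANFAIL);

		err = issue_reads(zio);
		if (err == 0)
			err = zio_wait(zio);
		else
			(void) zio_wait(zio);	/* still reap children */
	}
	/* Write paths now skip the allocation and the wait entirely. */
	return (err);
}
```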
* Remove incorrect assertion (Brian Behlendorf, 2020-11-24; 1 file, -1/+0)

Commit 85703f6 added a new ASSERT to zfs_write() as part of the cleanup which isn't correct in the case where multiple processes are concurrently extending a file. The `zp->z_size` is updated atomically while holding a range lock on only a portion of the file. Therefore, it's possible for the file size to increase after the same check is performed earlier in the loop, causing this ASSERT to fail. The code itself handles this case correctly, so only the invalid ASSERT needs to be removed.

Reviewed-by: Brian Atkinson <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11235
* Reduce latency effects of non-interactive I/O (Alexander Motin, 2020-11-24; 1 file, -16/+105)

Investigating the influence of scrub (especially sequential) on random read latency, I've noticed that on some HDDs a single 4KB read may take up to 4 seconds! Deeper investigation showed that many HDDs heavily prioritize sequential reads even when those are submitted with a queue depth of 1.

This patch addresses the latency from two sides:
- by using _min_active queue depths for non-interactive requests while the interactive request(s) are active, and for a few requests after;
- by throttling further if no interactive requests have completed while a configured number of non-interactive ones did.

While there, I've also modified vdev_queue_class_to_issue() to give more chances to schedule at least _min_active requests to the lowest priorities. It should reduce starvation if several non-interactive processes are running at the same time as some interactive ones, and I think it should make it possible to set zfs_vdev_max_active as low as 1.

I've benchmarked this change with 4KB random reads from a ZVOL with a 16KB block size on a newly written, non-fragmented pool. On a fragmented pool I also saw improvements, but not so dramatic. Below are log2 histograms of the random read latency in milliseconds for different devices:

    4x 2-disk mirror vdevs of SATA HDD WDC WD20EFRX-68EUZN0
    before: 0, 0, 2, 1, 12, 21, 19, 18, 10, 15, 17, 21
    after:  0, 0, 0, 24, 101, 195, 419, 250, 47, 4, 0, 0
    i.e. maximum latency reduced from 2s to 500ms.

    4x 2-disk mirror vdevs of SATA HDD WDC WD80EFZX-68UW8N0
    before: 0, 0, 2, 31, 38, 28, 18, 12, 17, 20, 24, 10, 3
    after:  0, 0, 55, 247, 455, 470, 412, 181, 36, 0, 0, 0, 0
    i.e. from 4s to 250ms.

    1x SAS HDD SEAGATE ST14000NM0048
    before: 0, 0, 29, 70, 107, 45, 27, 1, 0, 0, 1, 4, 19
    after:  1, 29, 681, 1261, 676, 1633, 67, 1, 0, 0, 0, 0, 0
    i.e. from 4s to 125ms.

    1x SAS SSD SEAGATE XS3840TE70014
    before (microseconds): 0, 0, 0, 0, 0, 0, 0, 0, 70, 18343, 82548, 618
    after:  0, 0, 0, 0, 0, 0, 0, 0, 283, 92351, 34844, 90

I've also measured scrub time during the test and on idle pools. On an idle fragmented pool I measured scrub getting a few percent faster due to use of QD3 instead of QD2 before. On an idle non-fragmented pool I measured no difference. On a busy non-fragmented pool I measured a scrub time increase of about 1.5-1.7x, while the IOPS increase reached 5-9x.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored-By: iXsystems, Inc.
Closes #11166
* FreeBSD: decouple ZFS_DEBUG from kernel debug settings (Matthew Macy, 2020-11-24; 1 file, -1/+7)

Reviewed-by: Martelli Nikola @martellini
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Matt Macy <[email protected]>
Closes #11213
* Update dRAID short feature description (Brian Behlendorf, 2020-11-23; 1 file, -1/+1)

The documentation describes dRAID as a distributed spare, not parity, RAID implementation. Update the short feature description to match the rest of the documentation.

Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11229
* Correct missing zil_claim() DTL updates (Brian Behlendorf, 2020-11-20; 1 file, -14/+18)

Commit a1d477c2 accidentally disabled DTL updates for the zil_claim() case described at the end of vdev_stat_update() by unconditionally disabling all DTL updates when loading. This was done to avoid a deadlock on the vd_dtl_lock when loading the DTLs from disk:

    vdev_dtl_contains <--- Takes vd->vd_dtl_lock
    vdev_mirror_child_missing
    vdev_mirror_io_start
    zio_vdev_io_start
    __zio_execute
    arc_read
    dbuf_issue_final_prefetch
    dbuf_prefetch_impl
    dbuf_prefetch
    dmu_prefetch
    space_map_iterate
    space_map_load_length
    space_map_load
    vdev_dtl_load <--- Takes vd->vd_dtl_lock
    vdev_load
    spa_ld_load_vdev_metadata
    spa_tryimport

The missing DTL updates can be restored by moving the space_map_load() call outside the vd_dtl_lock. A private range tree is populated by reading the space map and then merged into the DTL_MISSING tree under the lock.

Furthermore, the SPA_LOAD_NONE check in vdev_dtl_contains() leads to an additional problem. Any resilvering which occurs before SPA_LOAD_NONE is set will incorrectly determine that there's nothing to repair. This can result in full redundancy not being restored for some blocks.

Reviewed-by: Matt Ahrens <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11218
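A sketch of the restructured load; range_tree_*() and space_map_load() are the real interfaces and the vdev field names are close to the real ones, but the surrounding error handling is simplified:

```c
static int
dtl_load_sketch(vdev_t *vd)
{
	range_tree_t *rt;
	int err;

	/* Read the space map with no DTL lock held, so the prefetch
	 * path cannot re-enter vdev_dtl_contains() and deadlock. */
	rt = range_tree_create(NULL, RANGE_SEG64, NULL, 0, 0);
	err = space_map_load(vd->vdev_dtl_sm, rt, SM_ALLOC);

	if (err == 0) {
		mutex_enter(&vd->vdev_dtl_lock);
		range_tree_walk(rt, range_tree_add,
		    vd->vdev_dtl[DTL_MISSING]);	/* merge under the lock */
		mutex_exit(&vd->vdev_dtl_lock);
	}

	range_tree_vacate(rt, NULL, NULL);
	range_tree_destroy(rt);
	return (err);
}
```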
* Reduce confusion in zfs_write (Ryan Moeller, 2020-11-18; 1 file, -18/+24)

Is this block when abuf != NULL ever reached? Yes, it is.

Add asserts and comments to prove that when we get here, we have a full block write at an aligned offset extending past EOF.

Simplify by removing the check that tx_bytes == max_blksz, since we can assert that it is always true.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11191
* Fix problems in zvol_set_volmode_impl (Matthew Macy, 2020-11-17; 3 files, -31/+81)

- Don't leave fstrans set when passed a snapshot
- Don't remove the minor if volmode already matches the new value
- (FreeBSD) Wait for GEOM ops to complete before trying to remove (at create time GEOM will be "tasting" in parallel)
- (FreeBSD) Don't leak zvol_state_lock on open if zv == NULL
- (FreeBSD) Don't try to unlock zv->zv_state lock if zv == NULL

Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matt Macy <[email protected]>
Closes #11199
* Fix 'zfs userspace' for received datasets in encrypted root (loli10K, 2020-11-16; 2 files, -17/+64)

For encrypted receives, where user accounting is initially disabled on creation, both 'zfs userspace' and 'zfs groupspace' fail with EOPNOTSUPP: this is because dmu_objset_id_quota_upgrade_cb() forgets to set OBJSET_FLAG_USERACCOUNTING_COMPLETE on the objset flags after a successful dmu_objset_space_upgrade().

Reviewed-by: Brian Behlendorf <[email protected]>
Co-authored-by: Brian Behlendorf <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #9501
Closes #9596
* Fix ASSERT logic in l2arc_evict() (George Amanakis, 2020-11-16; 1 file, -3/+10)

In case of cache device removal it is possible that at the end of l2arc_evict() we have l2ad_hand = l2ad_evict. This can lead to the following panic in case of a debug build:

    VERIFY3(dev->l2ad_hand < dev->l2ad_evict) failed (321920512 < 321920512)
    Call Trace:
    dump_stack+0x66/0x90
    spl_panic+0xef/0x117 [spl]
    l2arc_remove_vdev+0x11d/0x290 [zfs]
    spa_load_l2cache+0x275/0x5b0 [zfs]
    spa_vdev_remove+0x4a5/0x6e0 [zfs]
    zfs_ioc_vdev_remove+0x59/0xa0 [zfs]
    zfsdev_ioctl_common+0x5b3/0x630 [zfs]
    zfsdev_ioctl+0x53/0xe0 [zfs]
    do_vfs_ioctl+0x42e/0x6b0
    ksys_ioctl+0x5e/0x90
    do_syscall_64+0x5b/0x1a0
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

In case of cache device removal it is also possible that l2ad_hand + distance > l2ad_end, since we do not iterate l2arc_evict() and l2ad_hand is not reset. This has no functional consequence, however, as the cache device is about to be removed.

Fix this by omitting the ASSERT in case of device removal.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #11205
* Linux: Fix ZFS_ENTER/ZFS_EXIT/ZFS_VERIFY_ZP usage (Brian Behlendorf, 2020-11-14; 3 files, -18/+18)

The ZFS_ENTER/ZFS_EXIT/ZFS_VERIFY_ZP macros should not be used in the Linux zpl_*.c source files. They return a positive error value, which is correct for the common code but not for the Linux-specific kernel code, which expects a negative return value. The ZPL_ENTER/ZPL_EXIT/ZPL_VERIFY_ZP macros should be used instead.

Furthermore, the ZPL_EXIT macro has been updated to not call the zfs_exit_fs() function. This prevents a possible deadlock which can occur when a snapshot is automatically unmounted, because zpl_show_devname() must never wait on in-progress automatic snapshot unmounts.

Reviewed-by: Adam Moss <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11169
Closes #11201
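A rough sketch of what a Linux-side wrapper of this kind looks like (not the actual macro; z_teardown_lock and z_unmounted are real zfsvfs fields): the points of interest are the negated errno and the absence of a zfs_exit_fs() call on exit.

```c
#define	ZPL_ENTER_SKETCH(zfsvfs)				\
do {								\
	rw_enter(&(zfsvfs)->z_teardown_lock, RW_READER);	\
	if ((zfsvfs)->z_unmounted) {				\
		rw_exit(&(zfsvfs)->z_teardown_lock);		\
		return (-EIO);	/* negative for the Linux VFS */\
	}							\
} while (0)

/* The exit path deliberately skips zfs_exit_fs(), so a caller such as
 * zpl_show_devname() can never block on an in-progress automatic
 * snapshot unmount. */
#define	ZPL_EXIT_SKETCH(zfsvfs)					\
	rw_exit(&(zfsvfs)->z_teardown_lock)
```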
* Assertion failure when logging large output of channel program (Matthew Ahrens, 2020-11-14; 1 file, -2/+18)

The output of ZFS channel programs is logged on-disk in the zpool history, and printed by `zpool history -i`. Channel programs can use 10MB of memory by default, and up to 100MB by using the `zfs program -m` flag. Therefore their output can be up to some fraction of 100MB.

In addition to being somewhat wasteful of the limited space reserved for the pool history (which for large pools is 1GB), in extreme cases this can result in a failure of `ASSERT(length <= DMU_MAX_ACCESS);` in `dmu_buf_hold_array_by_dnode()`.

This commit limits the output size that will be logged to 1MB. Larger outputs will not be logged; instead an entry will be logged indicating the size of the omitted output.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #11194
* Return EFAULT at the end of zfs_write() when set (Ryan Moeller, 2020-11-14; 1 file, -5/+7)

FreeBSD's VFS expects EFAULT from zfs_write() if we didn't complete the full write, so it can retry the operation. Add some missing SET_ERRORs in zfs_write().

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11193
* Distributed Spare (dRAID) Feature (Brian Behlendorf, 2020-11-13; 33 files, -1388/+5357)

This patch adds a new top-level vdev type called dRAID, which stands for Distributed parity RAID. This pool configuration allows all dRAID vdevs to participate when rebuilding to a distributed hot spare device. This can substantially reduce the total time required to restore full parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type. Like `raidz`, the desired redundancy is specified after the type: `draid[1,2,3]`. No additional information is required to create the pool and reasonable default values will be chosen based on the number of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>

Unlike raidz, additional optional dRAID configuration values can be provided as part of the draid type as colon separated values. This allows administrators to fully specify a layout for either performance or capacity reasons. The supported options include:

    zpool create <pool> \
        draid[<parity>][:<data>d][:<children>c][:<spares>s] \
        <vdevs...>

    - draid[parity]       - Parity level (default 1)
    - draid[:<data>d]     - Data devices per group (default 8)
    - draid[:<children>c] - Expected number of child vdevs
    - draid[:<spares>s]   - Distributed hot spares (default 0)

Abbreviated example `zpool status` output for a 68 disk dRAID pool with two distributed spares using special allocation classes:

```
  pool: tank
 state: ONLINE
config:

    NAME                  STATE     READ WRITE CKSUM
    slag7                 ONLINE       0     0     0
      draid2:8d:68c:2s-0  ONLINE       0     0     0
        L0                ONLINE       0     0     0
        L1                ONLINE       0     0     0
        ...
        U25               ONLINE       0     0     0
        U26               ONLINE       0     0     0
        spare-53          ONLINE       0     0     0
          U27             ONLINE       0     0     0
          draid2-0-0      ONLINE       0     0     0
        U28               ONLINE       0     0     0
        U29               ONLINE       0     0     0
        ...
        U42               ONLINE       0     0     0
        U43               ONLINE       0     0     0
    special
      mirror-1            ONLINE       0     0     0
        L5                ONLINE       0     0     0
        U5                ONLINE       0     0     0
      mirror-2            ONLINE       0     0     0
        L6                ONLINE       0     0     0
        U6                ONLINE       0     0     0
    spares
      draid2-0-0          INUSE     currently in use
      draid2-0-1          AVAIL
```

When adding test coverage for the new dRAID vdev type the following options were added to the ztest command. These options are leveraged by zloop.sh to test a wide range of dRAID configurations.

    -K draid|raidz|random - kind of RAID to test
    -D <value>            - dRAID data drives per group
    -S <value>            - dRAID distributed hot spares
    -R <value>            - RAID parity (raidz or dRAID)

The zpool_create, zpool_import, redundancy, replacement and fault test groups have all been updated to provide test coverage for the dRAID feature.

Co-authored-by: Isaac Huang <[email protected]>
Co-authored-by: Mark Maybee <[email protected]>
Co-authored-by: Don Brady <[email protected]>
Co-authored-by: Matthew Ahrens <[email protected]>
Co-authored-by: Brian Behlendorf <[email protected]>
Reviewed-by: Mark Maybee <[email protected]>
Reviewed-by: Matt Ahrens <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #10102
* Channel program may spuriously fail with "memory limit exhausted" (Matthew Ahrens, 2020-11-11; 1 file, -6/+1)

ZFS channel programs (invoked by `zfs program`) are executed in a LUA sandbox with a limit on the amount of memory they can consume. The limit is 10MB by default, and can be raised to 100MB with the `-m` flag. If the memory limit is exceeded, the LUA program exits and the command fails with a message like `Channel program execution failed: Memory limit exhausted.`

The LUA sandbox allocates memory with `vmem_alloc(KM_NOSLEEP)`, which will fail if the requested memory is not immediately available. In this case, the program fails with the same message, `Memory limit exhausted`. However, in this case the specified memory limit has not been reached, and the memory may only be temporarily unavailable.

This commit changes the LUA memory allocator `zcp_lua_alloc()` to use `vmem_alloc(KM_SLEEP)`, so that we won't spuriously fail when memory is temporarily low. Instead, we rely on the system to be able to free up memory (e.g. by evicting from the ARC), and we assume that even at the highest memory limit of 100MB, the channel program will not truly exhaust the system's memory.

External-issue: DLPX-71924
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Matthew Ahrens <[email protected]>
Closes #11190
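A reduced sketch of the allocator change; the aa_alloc_remaining bookkeeping follows the zcp allocator described above, but details are omitted. The configured limit still fails deterministically, while transient memory pressure now sleeps instead of failing:

```c
static void *
zcp_lua_alloc_sketch(zcp_alloc_arg_t *allocargs, size_t size)
{
	if ((int64_t)size > allocargs->aa_alloc_remaining)
		return (NULL);	/* genuine "Memory limit exhausted" */
	allocargs->aa_alloc_remaining -= size;

	/* Previously KM_NOSLEEP: could fail on a transient shortage. */
	return (vmem_alloc(size, KM_SLEEP));
}
```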
* Linux: Fix mount/unmount when dataset name has a space (Brian Behlendorf, 2020-11-11; 1 file, -4/+17)

The custom zpl_show_devname() helper should translate spaces into the octal escape sequence \040. The getmntent(3) function is aware of this convention and properly translates the escape sequence back to a space when reading the fsname. Without this change the `zfs mount` and `zfs unmount` commands incorrectly detect when a dataset with a name containing spaces is mounted.

Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #11182
Closes #11187
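A sketch of the escaping with a hypothetical helper (the real zpl_show_devname() emits through the kernel's seq_file interface): each space becomes the literal four characters \040, which getmntent() translates back.

```c
#include <string.h>

static void
escape_devname_sketch(char *out, size_t outlen, const char *fsname)
{
	size_t n = 0;

	/* n + 5 leaves room for a 4-char escape plus the NUL. */
	while (*fsname != '\0' && n + 5 < outlen) {
		if (*fsname == ' ') {
			(void) memcpy(out + n, "\\040", 4);
			n += 4;
		} else {
			out[n++] = *fsname;
		}
		fsname++;
	}
	out[n] = '\0';
}
```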
* G/C data_alloc_arena (Mateusz Guzik, 2020-11-11; 1 file, -3/+1)

It is a leftover from illumos, always set to NULL, and it introduces a spurious difference between zio_buf and zio_data_buf.

Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Mateusz Guzik <[email protected]>
Closes #11188
* Start snapdir_iterate traversals to begin with the value of zero (Tony Perkins, 2020-11-11; 1 file, -1/+2)

The microzap hash can sometimes be zero for single-digit snapnames. The zap cursor can then have a serialized value of two (for . and ..), skip the first entry in the avl tree for the .zfs/snapshot directory listing, and therefore not return all snapshots.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Cedric Berger <[email protected]>
Signed-off-by: Tony Perkins <[email protected]>
Closes #11039
* G/C struct znode -> z_moved (Mateusz Guzik, 2020-11-10; 3 files, -8/+0)

The field is yet another leftover from unsupported zfs_znode_move.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Mateusz Guzik <[email protected]>
Closes #11186
* FreeBSD: Simplify zvol_geom_open and zvol_cdev_open (Ryan Moeller, 2020-11-10; 1 file, -36/+16)

We can consolidate the unlocking procedure into one place by starting with drop_suspend set to B_FALSE and moving the open count check up.

While here, a little code cleanup. Match the out labels between zvol_geom_open and zvol_cdev_open, and add a missing period in some comments.

Reviewed-by: Matt Macy <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11175
* FreeBSD: Avoid spurious EINTR in zvol_cdev_open (Ryan Moeller, 2020-11-10; 1 file, -1/+20)

zvol_first_open can fail with EINTR if spa_namespace_lock is not held and cannot be taken without waiting.

Apply the same logic that was done for zvol_geom_open to take spa_namespace_lock if not already held on first open in zvol_cdev_open.

Reviewed-by: Matt Macy <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #11175