Commit log for path: root/module

* Add missing dsl pool configuration lock (Tim Chase, 2013-10-22; 1 file, -1/+3)

  The semantics introduced by the restructured sync task of illumos 3464
  require this lock when calling dmu_snapshot_list_next(). The pool is
  locked/unlocked for each iteration to reduce the chance of long-running
  locks. This was accidentally missed when doing the original port because
  ZoL's control directory code is Linux-specific and is in a different
  file than in illumos.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1785

* Illumos #3552 (George Wilson, 2013-10-18; 1 file, -10/+17)

  3552 condensing one space map burns 3 seconds of CPU in spa_sync()
  thread (fix race condition)

  References:
    https://www.illumos.org/issues/3552
    illumos/illumos-gate@03f8c366886542ed249a15d755ae78ea4e775d9d

  Ported-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>

  Porting notes: This fixes an upstream regression that was introduced
  in commit zfsonlinux/zfs@e51be06697762215dc3b679f8668987034a5a048,
  which ported the Illumos 3552 changes. This fix was added upstream
  rather quickly, but at the time of the port no one spotted it, and the
  race was rare enough that it passed our regression tests. I discovered
  this when comparing our metaslab.c to the illumos metaslab.c.

  Without this change it is possible for metaslab_group_alloc() to
  consume a large amount of CPU time. Since this occurs under a mutex in
  an RCU critical section, the kernel will log it to the console as a
  self-detected CPU stall as follows:

      INFO: rcu_sched self-detected stall on CPU { 0}
      (t=60000 jiffies g=11431890 c=11431889 q=18271)

  Closes #1687
  Closes #1720
  Closes #1731
  Closes #1747

* Export symbols dsl_pool_config_{enter,exit} (Ned Bass, 2013-10-10; 1 file, -0/+3)

  These are needed by consumers (i.e. Lustre) who wish to use the
  dsl_prop_register() interface to register callbacks when pool
  properties of interest change. This interface requires that the DSL
  pool configuration lock is held when called.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1762

* Fix memory leak false positive in log_internal() (Brian Behlendorf, 2013-10-09; 1 file, -2/+4)

  When building the spl with --enable-debug-kmem-tracking a memory leak
  is detected in log_internal(). This happens to be a false positive
  because the memory was freed using strfree() instead of kmem_free().
  All kmem_alloc()'s must be released with kmem_free() to ensure correct
  accounting.

      SPL: kmem leaked 135/5641311 bytes
      address           size  data              func:line
      ffff8800cba7cd80  135   ZZZZZZZZZZZZZZZZ  log_internal:456

  Signed-off-by: Brian Behlendorf <[email protected]>

* Export additional dsl_prop_* symbols (Brian Behlendorf, 2013-09-25; 1 file, -0/+6)

  The recent sync task restructuring in 13fe019 introduced several new
  symbols which should be exported for use by consumers such as Lustre.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Allocate the ioctl "output" nvlist with KM_PUSHPAGE (Tim Chase, 2013-09-25; 1 file, -1/+1)

  Some ZFS errors such as certain snapshot failures can occur in the
  sync task context. Because they may require additional memory
  allocations, the initial nvlist must be allocated with KM_PUSHPAGE.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #1746
  Issue #1737

* Fix several new KM_SLEEP warnings (Tim Chase, 2013-09-25; 3 files, -4/+4)

  A handful of allocations now occur in the sync path and need to use
  KM_PUSHPAGE. These were introduced by commit 13fe019.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #1746
  Issue #1737

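  The distinction matters because a KM_SLEEP allocation may enter direct
  reclaim, which can block on the very txg the sync thread is trying to
  complete. A minimal sketch of the rule, using the SPL interfaces (the
  function below is hypothetical, not part of this commit):

      /* Called from the txg sync path: must not trigger reclaim. */
      static nvlist_t *
      example_sync_nvlist(void)
      {
              nvlist_t *nvl;

              /* KM_SLEEP here could deadlock; KM_PUSHPAGE cannot. */
              VERIFY(nvlist_alloc(&nvl, NV_UNIQUE_NAME, KM_PUSHPAGE) == 0);
              return (nvl);
      }
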
* Fix spa_deadman() TQ_SLEEP warning (Brian Behlendorf, 2013-09-25; 2 files, -2/+2)

  The spa_deadman() and spa_sync() functions can both be run in the
  spa_sync context and therefore should use TQ_PUSHPAGE instead of
  TQ_SLEEP.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1734
  Closes #1749

* Removing unneeded mutex for reading vq_pending_tree size (GregorKopka, 2013-09-25; 1 file, -8/+1)

  Locking mutex &vq->vq_lock in vdev_mirror_pending() is unneeded:

    * no data is modified
    * only vq_pending_tree is read
    * in case garbage is returned (e.g. vq_pending_tree being updated
      while the read is made) the worst case would be that a single read
      could be queued on a mirror side which is more busy than thought

  The benefit of this change is streamlining of the code path, since it
  is taken for *every* mirror member on *every* read.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1739

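  The change reduces the hot path to a single lockless read; roughly (a
  sketch of the idea rather than the exact diff):

      /* Before: serialized on vq_lock for every mirror child. */
      mutex_enter(&vq->vq_lock);
      pending = avl_numnodes(&vq->vq_pending_tree);
      mutex_exit(&vq->vq_lock);

      /* After: a racy but harmless snapshot of the tree size. */
      pending = avl_numnodes(&vq->vq_pending_tree);

  avl_numnodes() is a simple counter read, so a stale value only skews
  the load estimate for one request.
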
* Reduce the stack usage of dsl_dataset_remove_clones_key (Kohsuke Kawaguchi, 2013-09-25; 1 file, -7/+13)

  dsl_dataset_remove_clones_key() recurses, so if the recursion goes
  deep it can overrun the Linux kernel stack size of 8KB. I have seen
  this happen in an actual deployment, and subsequently confirmed it by
  running a test workload on a custom-built kernel that uses a 32KB
  stack. See the following stack trace as an example of a case that
  would have overrun an 8KB kernel stack:

      Depth    Size   Location    (42 entries)
      -----    ----   --------
        0)    11192      72   __kmalloc+0x2e/0x240
        1)    11120     144   kmem_alloc_debug+0x20e/0x500
        2)    10976      72   dbuf_hold_impl+0x4a/0xa0
        3)    10904     120   dbuf_prefetch+0xd3/0x280
        4)    10784      80   dmu_zfetch_dofetch.isra.5+0x10f/0x180
        5)    10704     240   dmu_zfetch+0x5f7/0x10e0
        6)    10464     168   dbuf_read+0x71e/0x8f0
        7)    10296     104   dnode_hold_impl+0x1ee/0x620
        8)    10192      16   dnode_hold+0x19/0x20
        9)    10176      88   dmu_buf_hold+0x42/0x1b0
       10)    10088     144   zap_lockdir+0x48/0x730
       11)     9944     128   zap_cursor_retrieve+0x1c4/0x2f0
       12)     9816     392   dsl_dataset_remove_clones_key.isra.14+0xab/0x190
       13)     9424     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       14)     9032     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       15)     8640     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       16)     8248     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       17)     7856     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       18)     7464     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       19)     7072     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       20)     6680     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       21)     6288     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       22)     5896     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       23)     5504     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       24)     5112     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       25)     4720     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       26)     4328     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       27)     3936     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       28)     3544     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       29)     3152     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       30)     2760     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       31)     2368     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       32)     1976     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       33)     1584     392   dsl_dataset_remove_clones_key.isra.14+0x10c/0x190
       34)     1192     232   dsl_dataset_destroy_sync+0x311/0xf60
       35)      960      72   dsl_sync_task_group_sync+0x12f/0x230
       36)      888     168   dsl_pool_sync+0x48b/0x5c0
       37)      720     184   spa_sync+0x417/0xb00
       38)      536     184   txg_sync_thread+0x325/0x5b0
       39)      352      48   thread_generic_wrapper+0x7a/0x90
       40)      304     128   kthread+0xc0/0xd0
       41)      176     176   ret_from_fork+0x7c/0xb0

  This change reduces the stack usage in dsl_dataset_remove_clones_key
  by allocating structures on the heap, not on the stack, as sketched
  below. This is not a fundamental fix, as one can create an arbitrarily
  large data set that runs over any fixed size stack, but it makes the
  problem far less likely.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Kohsuke Kawaguchi <[email protected]>
  Closes #1726

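  The pattern is to replace the large on-stack locals of the recursive
  function with heap allocations (a sketch, not the exact diff):

      /* Before: several hundred bytes of ZAP cursor/attribute state on
       * the stack per level of recursion. */
      zap_cursor_t *zc;
      zap_attribute_t *za;

      /* After: move them to the heap; KM_PUSHPAGE because this runs in
       * the sync context. */
      zc = kmem_alloc(sizeof (zap_cursor_t), KM_PUSHPAGE);
      za = kmem_alloc(sizeof (zap_attribute_t), KM_PUSHPAGE);

      /* ... walk the ZAP and recurse as before, then: */

      kmem_free(za, sizeof (zap_attribute_t));
      kmem_free(zc, sizeof (zap_cursor_t));
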
* Fix zpl_mknod() return values (Brian Behlendorf, 2013-09-13; 1 file, -1/+1)

  The zpl_mknod() function was incorrectly negating its return value.
  This doesn't cause any problems in the success case, but it does
  prevent us from returning the correct error code for a failure. The
  implementation of this function is now consistent with all the other
  zpl_* functions.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1717

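  The sign convention at play, sketched with a hypothetical helper
  (zfs_do_mknod() is illustrative; like other zfs_*() internals it
  returns 0 or a positive errno, while the Linux VFS expects 0 or a
  negative errno):

      static int
      zpl_mknod_example(struct inode *dir, struct dentry *dentry,
          umode_t mode, dev_t rdev)
      {
              int error;

              error = -zfs_do_mknod(dir, dentry, mode, rdev);

              /* error is now 0 or -errno; negating it again on return
               * would hand the VFS a positive value on failure. */
              return (error);
      }
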
* Fix uninitialized variables (Brian Behlendorf, 2013-09-13; 1 file, -2/+2)

  When compiling on an ARM device using gcc 4.7.3 several variables in
  the zfs_obj_to_path_impl() function were flagged as uninitialized. To
  resolve the warnings explicitly initialize them to zero.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1716

* Fix dmu_objset_find_dp() KM_SLEEP warning (Tim Chase, 2013-09-11; 1 file, -1/+1)

  After the restructuring in 13fe019, the 'zfs rename' command results
  in a KM_SLEEP allocation in the sync context. This may deadlock due to
  reclaim, so it was changed to KM_PUSHPAGE.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1711

* Illumos #3464 (Matthew Ahrens, 2013-09-04; 39 files, -5328/+5335)

  3464 zfs synctask code needs restructuring

  Reviewed by: Dan Kimmel <[email protected]>
  Reviewed by: Adam Leventhal <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Christopher Siden <[email protected]>
  Approved by: Garrett D'Amore <[email protected]>

  References:
    https://www.illumos.org/issues/3464
    illumos/illumos-gate@3b2aab18808792cbd248a12f1edf139b89833c13

  Ported-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1495

* Illumos #2882, #2883, #2900 (Matthew Ahrens, 2013-09-04; 18 files, -977/+1679)

  2882 implement libzfs_core
  2883 changing "canmount" property to "on" should not always remount dataset
  2900 "zfs snapshot" should be able to create multiple, arbitrary
       snapshots at once

  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Chris Siden <[email protected]>
  Reviewed by: Garrett D'Amore <[email protected]>
  Reviewed by: Bill Pijewski <[email protected]>
  Reviewed by: Dan Kruchinin <[email protected]>
  Approved by: Eric Schrock <[email protected]>

  References:
    https://www.illumos.org/issues/2882
    https://www.illumos.org/issues/2883
    https://www.illumos.org/issues/2900
    illumos/illumos-gate@4445fffbbb1ea25fd0e9ea68b9380dd7a6709025

  Ported-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1293

  Porting notes:

  WARNING: This patch changes the user/kernel ABI. That means that the
  zfs/zpool utilities built from master are NOT compatible with the
  0.6.2 kernel modules. Ensure you load the matching kernel modules from
  master after updating the utilities. Otherwise the zfs/zpool commands
  will be unable to interact with your pool and you will see errors
  similar to the following:

      $ zpool list
      failed to read pool configuration: bad address
      no pools available
      $ zfs list
      no datasets available

  Add zvol minor device creation to the new zfs_snapshot_nvl function.

  Remove the logging of the "release" operation in
  dsl_dataset_user_release_sync(). The logging caused a null dereference
  because ds->ds_dir is zeroed in dsl_dataset_destroy_sync() and the
  logging functions try to get the ds name via the dsl_dataset_name()
  function. I've got no idea why this particular code would have worked
  in Illumos. This code has subsequently been completely reworked in
  Illumos commit 3b2aab1 (3464 zfs synctask code needs restructuring).

  Squash some "may be used uninitialized" warnings/errors.

  Fix some printf format warnings for %lld and %llu.

  Apply a few spa_writeable() changes that were made to Illumos in
  illumos/illumos-gate.git@cd1c8b8 as part of the 3112, 3113, 3114 and
  3115 fixes.

  Add a missing call to fnvlist_free(nvl) in log_internal() that was
  added in Illumos to fix issue 3085 but couldn't be ported to ZoL at
  the time (zfsonlinux/zfs@9e11c73) because it depended on future work.

* Use directory xattrs for symlinks (Brian Behlendorf, 2013-08-22; 1 file, -0/+4)

  There is currently a subtle bug in the SA implementation which can
  crop up and prevents us from safely using multiple variable length
  SAs in one object. Fortunately, the only existing use case for this is
  symlinks with SA based xattrs. Therefore, until the root cause in the
  SA code can be identified and fixed, we prevent adding SA xattrs to
  symlinks.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #1468

* Revert "Evict meta data from ghost lists + l2arc headers" (Brian Behlendorf, 2013-08-22; 1 file, -17/+1)

  This reverts commit fadd0c4da1e2ccd6014800d8b1a0fd117dd323e8, which
  introduced a regression in honoring the meta limit.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1660

* Linux 3.11 compat: fops->iterate() (Richard Yao, 2013-08-15; 4 files, -122/+155)

  Commit torvalds/linux@2233f31aade393641f0eaed43a71110e629bb900
  replaced ->readdir() with ->iterate() in struct file_operations. All
  filesystems must now use the new ->iterate method.

  To handle this the code was reworked to use the new ->iterate
  interface. Care was taken to keep the majority of changes confined to
  the ZPL layer which is already Linux specific. However, minor changes
  were required to the common zfs_readdir() function.

  Compatibility with older kernels was accomplished by adding versions
  of the trivial dir_emit* helper functions. Also the various
  *_readdir() functions were reworked into wrappers which create a
  dir_context structure to pass to the new *_iterate() functions.

  Unfortunately, the new dir_emit* functions prevent us from passing a
  private pointer to the filldir function. The xattr directory code
  leveraged this ability through zfs_readdir() to generate the list of
  xattr names. Since we can no longer use zfs_readdir() a simplified
  zpl_xattr_readdir() function was added to perform the same task.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1653
  Issue #1591

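  A sketch of what such a compat shim looks like, assuming the pre-3.11
  filldir_t signature (the exact ZoL helpers may differ in detail):

      typedef struct dir_context {
              void *dirent;           /* opaque cookie for old filldir */
              const filldir_t actor;
              loff_t pos;
      } dir_context_t;

      static inline bool
      dir_emit(struct dir_context *ctx, const char *name, int namelen,
          uint64_t ino, unsigned type)
      {
              /* Old filldir returns 0 to continue; dir_emit returns
               * true to continue. */
              return (ctx->actor(ctx->dirent, name, namelen, ctx->pos,
                  ino, type) == 0);
      }

  New code is then written against dir_emit()/dir_context, and on old
  kernels a thin *_readdir() wrapper builds a dir_context from the
  legacy filldir callback before calling the shared *_iterate() code.
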
* Fix z_wr_iss_h zio_execute() import hang (Brian Behlendorf, 2013-08-15; 1 file, -3/+5)

  Because we need to be more frugal about our stack usage under Linux,
  the __zio_execute() function was modified to re-dispatch zios to a
  ZIO_TASKQ_ISSUE thread when we're in a context which is known to be
  stack heavy. Those two contexts are the sync thread and whatever
  thread is performing spa initialization.

  Unfortunately, this change introduced an unlikely bug which can result
  in a zio being re-dispatched indefinitely and never being executed. If
  during spa initialization we handle a zio with ZIO_PRIORITY_NOW it
  will be moved to the high priority queue. When __zio_execute() is
  called again for the zio it will misinterpret the context and
  re-dispatch it again. The system will get stuck spinning,
  re-dispatching the zio and making no forward progress.

  To fix this rare issue __zio_execute() has been updated not to
  re-dispatch zios on either the ZIO_TASKQ_ISSUE or
  ZIO_TASKQ_ISSUE_HIGH task queues.

  In practice this issue was rarely reported and can usually be fixed by
  rebooting the system and importing the pool again.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1455

* Illumos #3618 ::zio dcmd does not show timestamp data (Matthew Ahrens, 2013-08-12; 2 files, -9/+12)

  3618 ::zio dcmd does not show timestamp data

  Reviewed by: Adam Leventhal <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Christopher Siden <[email protected]>
  Reviewed by: Garrett D'Amore <[email protected]>
  Approved by: Dan McDonald <[email protected]>

  References:
    http://www.illumos.org/issues/3618
    illumos/illumos-gate@c55e05cb35da47582b7afd38734d2f0d9c6deb40

  Notes on porting to ZFS on Linux:

  The original changeset mostly deals with the mdb ::zio dcmd. However,
  in order to provide the requested functionality it modifies the vdev
  and zio structures to keep the timing data in nanoseconds instead of
  ticks. It is these changes that are ported over in the commit in hand.

  One visible change of this commit is that the default value of the
  'zfs_vdev_time_shift' tunable changes from 6 to 29. The original value
  of 6 was inherited from OpenSolaris and was suboptimal: since it
  shifted the raw tick value, it didn't compensate for the different
  tick frequencies on Linux and OpenSolaris. The former has HZ=1000,
  while the latter has HZ=100.

  (That difference itself led to other interesting performance anomalies
  under non-trivial load. The deadline scheduler delays IO according to
  its priority: the lower the priority, the further out the deadline is
  set. The delay is measured in units of "shifted ticks". Since the HZ
  value was 10 times higher, the delay units were 10 times shorter. Thus
  really low priority IO like resilver (delay of 10 units) and scrub
  (delay of 20 units) was scheduled much sooner than intended. The
  overall effect is that resilver and scrub IO consumed more bandwidth
  at the expense of the other IO.)

  Now that the bookkeeping is done in nanoseconds the shift behaves
  correctly for any tick frequency (HZ).

  Ported-by: Cyril Plisko <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1643

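  Roughly, the deadline computation this affects looks like the
  following (a simplified sketch of the vdev queue logic of that era):

      /* io_timestamp is now gethrtime(), i.e. nanoseconds, so the
       * bucket width below is HZ-independent: 1 << 29 ns is ~0.54 s. */
      zio->io_timestamp = gethrtime();
      zio->io_deadline = (zio->io_timestamp >> zfs_vdev_time_shift) +
          zio->io_priority;
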
* Linux 3.8 compat: Support CONFIG_UIDGID_STRICT_TYPE_CHECKS (Richard Yao, 2013-08-09; 3 files, -7/+7)

  When CONFIG_UIDGID_STRICT_TYPE_CHECKS is enabled uid_t/gid_t are
  replaced by kuid_t/kgid_t, which are structures instead of integral
  types. This causes any code that uses an integral type to fail to
  build. The User Namespace functionality introduced in Linux 3.8
  requires CONFIG_UIDGID_STRICT_TYPE_CHECKS, so we could not build
  against any kernel that supported it. We resolve this by converting
  between the new kuid_t/kgid_t structures and the original uid_t/gid_t
  types.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1589

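  The conversion reduces to a pair of compat macros along these lines (a
  sketch; the macro names are illustrative, while the kernel helpers
  __kuid_val() and KUIDT_INIT() are real):

      #if defined(CONFIG_UIDGID_STRICT_TYPE_CHECKS)
      #define KUID_TO_SUID(x)  (__kuid_val(x))   /* kuid_t -> uid_t  */
      #define SUID_TO_KUID(x)  (KUIDT_INIT(x))   /* uid_t  -> kuid_t */
      #else
      #define KUID_TO_SUID(x)  (x)
      #define SUID_TO_KUID(x)  (x)
      #endif
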
* Evict meta data from ghost lists + l2arc headers (Brian Behlendorf, 2013-08-09; 1 file, -1/+17)

  When the meta limit is exceeded the ARC evicts some meta data buffers
  from the mfu+mru lists. Unfortunately, for meta data heavy workloads
  it's possible for these buffers to accumulate on the ghost lists if
  arc_c doesn't exceed arc_size. To handle this case arc_adjust_meta()
  has been extended to explicitly evict meta data buffers from the ghost
  lists in proportion to what was evicted from the mfu+mru lists.

  If this is insufficient we request that the VFS release some inodes
  and dentries. This will result in the release of some dnodes which are
  counted as 'other' metadata.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Allow arc_evict_ghost() to only evict meta data (Brian Behlendorf, 2013-08-09; 1 file, -9/+13)

  The default behavior of arc_evict_ghost() is to start by evicting data
  buffers. Then, only if the requested number of bytes to evict cannot
  be satisfied by data buffers, move on to meta data buffers. This is
  ideal for honoring arc_c since it's preferable to keep the meta data
  cached. However, if we're trying to free memory from the arc to honor
  the meta limit it's a problem because we will need to discard all the
  data to get to the meta data.

  To avoid this issue arc_evict_ghost() is now passed a fourth argument
  describing which buffer type to start with. The arc_evict() function
  already behaves exactly like this for the same reason, so this is
  consistent with the existing code. All existing callers have been
  updated to pass ARC_BUFC_DATA so this patch introduces no functional
  change. New callers may pass ARC_BUFC_METADATA to skip immediately to
  evicting meta data, leaving the normal data untouched.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Illumos #3137 L2ARC compression (Saso Kiselkov, 2013-08-08; 4 files, -76/+411)

  3137 L2ARC compression

  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Approved by: Dan McDonald <[email protected]>

  References:
    illumos/illumos-gate@aad02571bc59671aa3103bb070ae365f531b0b62
    https://www.illumos.org/issues/3137
    http://wiki.illumos.org/display/illumos/L2ARC+Compression

  Notes for Linux port:

  A l2arc_nocompress module option was added to prevent the compression
  of l2arc buffers regardless of how a dataset's compression property is
  set. This allows the legacy behavior to be preserved.

  Ported by: James H <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1379

* Return -1 from arc_shrinker_func() (Richard Yao, 2013-08-08; 1 file, -3/+1)

  This is analogous to SPL commit zfsonlinux/spl@b9b3715. While we don't
  have clear evidence of systems getting caught here indefinitely like
  in the SPL, this ensures that it will never happen.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1579

* Return correct type and offset from zfs_readdir (Richard Yao, 2013-08-07; 1 file, -1/+1)

  zfs_readdir() is used by getdents(), which provides a list of all
  files in a directory, their types, and an offset that can be used by
  llseek() to seek to the next directory entry. On Solaris, the first
  two directory entries "." and ".." respectively have offsets 1 and 2
  on ZFS, while the other files have rather large numbers. Currently,
  ZFSOnLinux is giving "." offset 0 and all other entries large numbers.
  The first entry's next-entry offset points to itself, which causes
  software that uses llseek() in conjunction with getdents() for
  filesystem navigation to enter an infinite loop. The offsets used for
  each directory entry are filesystem specific on all platforms, so we
  can fix this by adopting the Solaris behavior.

  Also, we currently report each directory entry as having type 0 (???).
  This is not wrong, but we can do better. getdents() on Solaris does
  not appear to provide this information, but on Linux and Mac OS X it
  does. ZFS provides easy access to type information in zfs_readdir(),
  so this patch provides this as well.

  Reported-by: Andrey <[email protected]>
  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1624

* Illumos #3639 zpool.cache should skip over readonly pools (George Wilson, 2013-08-07; 1 file, -1/+9)

  3639 zpool.cache should skip over readonly pools

  Reviewed by: Eric Schrock <[email protected]>
  Reviewed by: Adam Leventhal <[email protected]>
  Reviewed by: Basil Crow <[email protected]>
  Approved by: Gordon Ross <[email protected]>

  References:
    illumos/illumos-gate@fb02ae025247e3b662600e5a9c1b4c33ecab7d72
    https://www.illumos.org/issues/3639

  Normally we don't list pools that are imported read-only in the cache
  file. However, you can accidentally get one into the cache file by
  importing and exporting a read-write pool while a read-only pool is
  imported:

      $ zpool import -o readonly test1
      $ zpool import test2
      $ zpool export test2
      $ zdb -C

  This is a problem because if the machine reboots we import all pools
  in the cache file as read-write.

  Ported-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>

* Write dirty inodes on close (Brian Behlendorf, 2013-08-07; 1 file, -0/+3)

  When the property atime=on is set, operations which only access an
  inode do cause an atime update. However, it turns out that dirty
  inodes with updated atimes are only written to disk when the inodes
  get evicted from the cache. Somewhat surprisingly, the source suggests
  that this isn't a ZoL specific issue.

  This behavior may in part explain why zfs's reclaim logic has been
  observed to be slow. When reclaiming inodes it's likely that they have
  a dirty atime which will force a write to disk.

  Obviously we don't want to force a write to disk for every atime
  update; these need to be batched. The right way to do this is to fully
  implement the .dirty_inode and .write_inode callbacks. However, to do
  that right requires proper unification of some fields in the
  znode/inode. Then we could just mark the inode dirty and leave it to
  the VFS to call .write_inode periodically.

  Until that work gets done we have to settle for some middle ground.
  The simplest and safest thing we can do for now is to write the dirty
  inode on last close. This should prevent the majority of inodes in the
  cache from having dirty atimes and not drastically increase the number
  of writes.

  Some rudimentary testing to show how long it takes to drop 500,000
  inodes from the cache shows promising results. This is as expected
  because we're no longer doing lots of IO as part of the eviction; it
  was done earlier during the close.

      without patch: ~30s to drop 500,000 inodes with drop_caches.
      with patch:     ~3s to drop 500,000 inodes with drop_caches.

  Signed-off-by: Brian Behlendorf <[email protected]>

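  The middle-ground fix amounts to a few lines in the file close path,
  along these lines (a sketch, not the exact diff; write_inode_now() is
  the standard kernel helper):

      /* On the last close, flush a dirty inode (e.g. a pending atime
       * update) rather than deferring the write until eviction. */
      if (ip->i_state & I_DIRTY)
              write_inode_now(ip, 0);
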
* Export additional dmu symbols (Brian Behlendorf, 2013-08-01; 1 file, -0/+6)

  The dmu_prefetch, dmu_free_long_range, dmu_free_object, dmu_prealloc,
  dmu_write_policy, and dmu_sync symbols have been exported so they may
  be used by other modules.

  Signed-off-by: Brian Behlendorf <[email protected]>

* dmu_tx: Fix possible NULL pointer dereference (Nathaniel Clark, 2013-08-01; 1 file, -2/+5)

  dmu_tx_hold_object_impl() can return NULL on error. Check for this
  condition prior to dereferencing the pointer. This can only occur if
  the passed object was invalid or unallocated.

  Signed-off-by: Nathaniel Clark <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1610

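  The shape of the fix is the usual NULL guard (a sketch; the argument
  values shown are illustrative):

      txh = dmu_tx_hold_object_impl(tx, tx->tx_objset, object,
          THT_WRITE, off, len);
      if (txh == NULL)
              return;                 /* invalid or unallocated object */
      dn = txh->txh_dnode;            /* now safe to dereference */
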
* Remove b_thawed from arc_buf_hdr_t (Richard Yao, 2013-08-01; 1 file, -11/+0)

  The code involving b_thawed appears to be dead, so let's discard it.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #1614

* Remove arc_data_buf_alloc()/arc_data_buf_free() (Richard Yao, 2013-08-01; 1 file, -17/+0)

  These functions are used in neither Illumos nor ZFSOnLinux. They
  appear to have been replaced by arc_buf_alloc()/arc_buf_free(), so
  let's remove them.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #1614

* Remove zio_alloc_arena (Richard Yao, 2013-08-01; 1 file, -6/+0)

  We declare zio_alloc_arena using extern, but it does not appear to
  exist anywhere in the code. This permits undefined behavior, so let's
  remove it.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #1614

* Make arc+l2arc module options writable (Brian Behlendorf, 2013-07-30; 1 file, -49/+60)

  The l2arc module options can be made safely writable. This allows the
  options to be changed without unloading/loading the modules.

  Signed-off-by: Brian Behlendorf <[email protected]>

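  Making a module option writable is a one-character permissions change
  in its module_param() declaration; for example (l2arc_write_max is a
  real tunable, the description string here is illustrative):

      module_param(l2arc_write_max, ulong, 0644);     /* was 0444 */
      MODULE_PARM_DESC(l2arc_write_max,
          "Max number of bytes written to L2ARC per feed interval");

  After this, the value can be changed at runtime:

      # echo 16777216 > /sys/module/zfs/parameters/l2arc_write_max
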
* Change l2arc_norw default to zero (Brian Behlendorf, 2013-07-29; 1 file, -1/+1)

  These days modern SSDs can efficiently service concurrent reads and
  writes. When this flag was added that wasn't really the case for a
  variety of SSD controllers. But now we can set the default value to
  take advantage of this parallelism and only disable this as needed for
  specific troublesome hardware.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Fix inaccurate arcstat_l2_hdr_size calculations (Ying Zhu, 2013-07-29; 1 file, -2/+7)

  Based on the comments in arc.c we know that buffers can exist both in
  arc and l2arc; under this circumstance both arc_buf_hdr_t and
  l2arc_buf_hdr_t will be allocated. However the current logic only
  accounts for the memory that l2arc_buf_hdr takes up when the buffer's
  state transfers from or to arc_l2c_only. This will cause obvious
  deviations for illumos's zfs version since the sizeof(l2arc_buf_hdr)
  is larger than ZOL's.

  We can implement the calculation in the following simple way:

    1. When we allocate a l2arc_buf_hdr_t we add its memory consumption
       instantly and subtract it when we free or evict the l2arc buf.
    2. According to l2arc_hdr_stat_add and l2arc_hdr_stat_remove, if the
       buffer only stays in l2arc we should also add the memory its
       arc_buf_hdr_t consumes, so we only need to add HDR_SIZE to
       arcstat_l2_hdr_size since we already accounted for L2HDR_SIZE in
       step 1; the same applies for transferring arc bufs from the
       l2arc-only state.

  The testbox has 2 4-core Intel Xeon CPUs (2.13GHz), with 16GB memory,
  and tests were set up in the following way:

    1. Fdisked a SATA disk into two partitions, one partition for zpool
       storage and the other one used as the cache device.
    2. Generated some files occupying 14GB altogether in the zpool
       prepared in step 1 using iozone.
    3. Read them all using md5sum and watched the l2arc related
       statistics in /proc/spl/kstat/zfs/arcstats. After the reading
       ended the l2_hdr_size and l2_size were shown like this:

           l2_size        4    4403780608
           l2_hdr_size    4    0

       which was weird.
    4. After applying this patch and rerunning steps 1-3, the results
       were as follows:

           l2_size        4    4306443264
           l2_hdr_size    4    535600

       These numbers made sense; on 64-bit systems the
       sizeof(l2arc_buf_hdr_t) is 16 bytes. Assume all blocks cached by
       l2arc are 128KB, so 535600/16*128*1024=4387635200. Since not all
       blocks are equal-sized, the theoretical result will be a little
       bigger, as we can see.

  Since I'm familiar with the systemtap instrumentation tool I used it
  to examine what had happened. The script looked like this:

      probe module("zfs").function("arc_change_state")
      {
          if ($new_state == $arc_l2_only)
              printf("change arc buf to arc_l2_only\n")
      }

  It will print out some information each time we call the function
  arc_change_state if the argument new_state is arc_l2_only. I gathered
  the trace logs and found that none of the arc bufs ran into the arc
  state arc_l2_only while the tests were running; this was the reason
  why l2_hdr_size in step 3 was 0. The arc bufs fell into arc_l2_only
  when the pool or the filesystem was offlined.

  Signed-off-by: Ying Zhu <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>

* Fix arc_adapt() spinning in iterate_supers_type() (Brian Behlendorf, 2013-07-17; 1 file, -0/+4)

  The iterate_supers_type() function which was introduced in the 3.0
  kernel was supposed to provide a safe way to call an arbitrary
  function on all super blocks of a specific type. Unfortunately,
  because a list_head was used a bug was introduced which made it
  possible for iterate_supers_type() to get stuck spinning on a super
  block which was just deactivated.

  This can occur because when the list head is removed from the
  fs_supers list it is reinitialized to point to itself. If the
  iterate_supers_type() function happened to be processing the removed
  list_head it will get stuck spinning on that list_head. The bug was
  fixed in the 3.3 kernel by converting the list_head to an hlist_node.

  However, to resolve the issue for existing 3.0 - 3.2 kernels we detect
  when a list_head is used. Then to prevent the spinning from occurring
  the .next pointer is set to the fs_supers list_head which ensures the
  iterate_supers_type() function will always terminate.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1045
  Closes #861
  Closes #790

* Fix read-only pool hang on unmount (Brian Behlendorf, 2013-07-17; 1 file, -1/+5)

  During mount a filesystem dataset would have the MS_RDONLY bit
  incorrectly cleared even if the entire pool was read-only. There is
  existing code to handle this case but it was being run before the
  property callbacks were registered. To resolve the issue we move this
  read-only code after the callback registration.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1338

* Fix zfsctl_expire_snapshot() deadlock (Brian Behlendorf, 2013-07-12; 1 file, -0/+8)

  It is possible for an automounted snapshot which is expiring to
  deadlock with a manual unmount of the snapshot. This can occur because
  taskq_cancel_id() will block if the task is currently executing until
  it completes. But it will never complete because
  zfsctl_unmount_snapshot() is holding the zsb->z_ctldir_lock which
  zfsctl_expire_snapshot() must acquire.

      ---------------------- z_unmount/0:2153 ---------------------
      mutex_lock                 <blocking on zsb->z_ctldir_lock>
      zfsctl_unmount_snapshot
      zfsctl_expire_snapshot
      taskq_thread

      ------------------------- zfs:10690 -------------------------
      taskq_wait_id              <waiting for z_unmount to exit>
      taskq_cancel_id
      __zfsctl_unmount_snapshot
      zfsctl_unmount_snapshot    <takes zsb->z_ctldir_lock>
      zfs_unmount_snap
      zfs_ioc_destroy_snaps_nvl
      zfsdev_ioctl
      do_vfs_ioctl

  We resolve the deadlock by dropping the zsb->z_ctldir_lock before
  calling __zfsctl_unmount_snapshot(). The lock is only there to prevent
  concurrent modification to the zsb->z_ctldir_snaps AVL tree. Moreover,
  we're careful to remove the zfs_snapentry_t from the AVL tree before
  dropping the lock, which ensures no other tasks can find it. On
  failure it's added back to the tree.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Chris Dunlap <[email protected]>
  Closes #1527

* Improve N-way mirror performance (Brian Behlendorf, 2013-07-11; 1 file, -3/+73)

  The read bandwidth of an N-way mirror can be increased by 50%, and the
  IOPs by 10%, by more carefully selecting the preferred leaf vdev.

  The existing algorithm selects a preferred leaf vdev based on the
  offset of the zio request modulo the number of members in the mirror.
  It assumes the drives are of equal performance and that spreading the
  requests randomly over both drives will be sufficient to saturate
  them. In practice this results in the leaf vdevs being under utilized.

  Utilization can be improved by preferentially selecting the leaf vdev
  with the least pending IO (see the sketch after this entry). This
  prevents leaf vdevs from being starved and compensates for performance
  differences between disks in the mirror. Faster vdevs will be sent
  more work and the mirror performance will not be limited by the
  slowest drive.

  In the common case where all the pending queues are full and there is
  no single least busy leaf vdev, a batching strategy is employed. Of
  the N least busy vdevs one is selected with equal probability to be
  the preferred vdev for T microseconds. Compared to randomly selecting
  a vdev to break the tie, batching the requests greatly improves the
  odds of merging the requests in the Linux elevator.

  The testing results show a significant performance improvement for all
  four workloads tested. The workloads were generated using the fio
  benchmark and are as follows.

    1) 1MB sequential reads from 16 threads to 16 files (MB/s).
    2) 4KB sequential reads from 16 threads to 16 files (MB/s).
    3) 1MB random reads from 16 threads to 16 files (IOP/s).
    4) 4KB random reads from 16 threads to 16 files (IOP/s).

                     |       Pristine        |       With 1461        |
                     | Sequential   Random   | Sequential   Random    |
                     | 1MB    4KB  1MB   4KB | 1MB    4KB  1MB   4KB  |
                     | MB/s   MB/s IO/s  IO/s| MB/s   MB/s IO/s  IO/s |
      ---------------+-----------------------+------------------------+
      2 Striped      |  226   243    11  304 |  222   255    11   299 |
      2 2-Way Mirror |  302   324    16  534 |  433   448    23   571 |
      2 3-Way Mirror |  429   458    24  714 |  648   648    41   808 |
      2 4-Way Mirror |  562   601    36  849 |  816   828    82   926 |

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1461

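  A simplified sketch of the selection policy (vdev_mirror_pending() is
  the helper referenced in the "Removing unneeded mutex" entry above;
  the loop structure here is illustrative and omits the batching
  window):

      static int
      vdev_mirror_select(mirror_map_t *mm)
      {
              int c, preferred = 0;
              uint64_t lowest = UINT64_MAX;

              /* Prefer the child with the fewest pending IOs. */
              for (c = 0; c < mm->mm_children; c++) {
                      uint64_t pending =
                          vdev_mirror_pending(mm->mm_child[c].mc_vd);
                      if (pending < lowest) {
                              lowest = pending;
                              preferred = c;
                      }
              }
              return (preferred);
      }
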
* Add new kstat for monitoring time in dmu_tx_assign (Prakash Surya, 2013-07-11; 2 files, -0/+67)

  This change adds a new kstat to gain some visibility into the amount
  of time spent in each call to dmu_tx_assign. A histogram is exported
  via a new dmu_tx_assign_histogram-$POOLNAME file. The information
  contained in this histogram is the frequency with which dmu_tx_assign
  completed within each interval range. For example, given the histogram
  file below:

      $ cat /proc/spl/kstat/zfs/dmu_tx_assign_histogram-tank
      12 1 0x01 32 1536 19792068076691 20516481514522
      name             type  data
      1 us             4     859
      2 us             4     252
      4 us             4     171
      8 us             4     2
      16 us            4     0
      32 us            4     2
      64 us            4     0
      128 us           4     0
      256 us           4     0
      512 us           4     0
      1024 us          4     0
      2048 us          4     0
      4096 us          4     0
      8192 us          4     0
      16384 us         4     0
      32768 us         4     1
      65536 us         4     1
      131072 us        4     1
      262144 us        4     4
      524288 us        4     0
      1048576 us       4     0
      2097152 us       4     0
      4194304 us       4     0
      8388608 us       4     0
      16777216 us      4     0
      33554432 us      4     0
      67108864 us      4     0
      134217728 us     4     0
      268435456 us     4     0
      536870912 us     4     0
      1073741824 us    4     0
      2147483648 us    4     0

  one can see most calls to dmu_tx_assign completed in 32us or less, but
  a few outliers did not. Specifically, 4 of the calls took between
  131072us and 262144us. This information is difficult, if not
  impossible, to gather without this change.

  Signed-off-by: Prakash Surya <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1584

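  The bucketing is power-of-two in microseconds, so recording a sample
  reduces to finding the highest set bit of the elapsed time. A sketch
  (the histogram array name and bucket-count constant are illustrative):

      hrtime_t start = gethrtime();
      /* ... body of dmu_tx_assign() ... */
      uint64_t us = (gethrtime() - start) / 1000;
      int bucket = (us == 0) ? 0 :
          MIN(highbit(us), TX_ASSIGN_BUCKETS - 1);
      atomic_inc_64(&tx_assign_histogram[bucket]);
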
* Log pool suspension warnings to the console (Brian Behlendorf, 2013-07-10; 1 file, -0/+3)

  In the event that a pool gets suspended, log this information to the
  console. This is critical information and we want to make sure it gets
  logged.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1555

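  The message itself is a one-liner along these lines (a sketch; the
  wording of the actual message may differ):

      cmn_err(CE_WARN, "Pool '%s' has encountered an uncorrectable "
          "I/O failure and has been suspended.\n", spa_name(spa));
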
* Use GFP_NOIO in vdev_disk_io_flush() (Brian Behlendorf, 2013-07-10; 1 file, -1/+1)

  To avoid a potential deadlock when using a zvol as a swap device,
  prevent vdev_disk_io_flush() from performing IO during the
  bio_alloc().

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1508

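  Concretely, the allocation flag is the whole fix (a sketch; the second
  argument to bio_alloc() is the iovec count):

      /* GFP_KERNEL could recurse into the block layer to reclaim
       * memory, deadlocking if that memory is swap backed by this very
       * zvol; GFP_NOIO forbids IO during the allocation. */
      bio = bio_alloc(GFP_NOIO, 0);
      if (bio == NULL)
              return (ENOMEM);
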
* Improve code in arc_buf_remove_ref (Ying Zhu, 2013-07-09; 1 file, -1/+2)

  When we remove references to arc bufs in the arc_anon state we needn't
  take the header's hash_lock, so postpone taking it to where we really
  need it, to avoid unnecessary invocations of the function buf_hash.

  Signed-off-by: Ying Zhu <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1557

* Update zio.c (Shen Yan, 2013-07-09; 1 file, -1/+1)

  Use cv_wait_io() instead of cv_wait() so that the wait is accounted as
  io time.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1566

* Add zfs_autoimport_disable tunable (Brian Behlendorf, 2013-07-09; 1 file, -0/+8)

  There are times when it is desirable for zfs to not automatically
  populate the spa namespace at module load time using the pools in the
  /etc/zfs/zpool.cache file. The zfs_autoimport_disable module option
  has been added to control this behavior.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #330

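  The tunable follows the usual module-option pattern (a sketch; the
  description string is illustrative):

      int zfs_autoimport_disable = 0;
      module_param(zfs_autoimport_disable, int, 0644);
      MODULE_PARM_DESC(zfs_autoimport_disable,
          "Disable pool import at module load");

  and can be set at load time with, e.g.,
  "modprobe zfs zfs_autoimport_disable=1".
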
* 3.10 API change: block_device_operations->release() returns void (Chris Dunlop, 2013-07-08; 1 file, -0/+6)

  Linux kernel commit torvalds/linux@db2a144 changed the return type of
  block_device_operations->release() to void. Detect the expected
  prototype and define our callout accordingly.

  Signed-off-by: Chris Dunlop <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1494

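  The usual shape of such a compat shim (a sketch, assuming an autoconf
  test defines HAVE_BLOCK_DEVICE_OPERATIONS_RELEASE_VOID on 3.10+
  kernels; zvol_release_impl() is a hypothetical common body):

      #ifdef HAVE_BLOCK_DEVICE_OPERATIONS_RELEASE_VOID
      static void
      zvol_release(struct gendisk *disk, fmode_t mode)
      {
              (void) zvol_release_impl(disk, mode);
      }
      #else
      static int
      zvol_release(struct gendisk *disk, fmode_t mode)
      {
              return (zvol_release_impl(disk, mode));
      }
      #endif
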
* Open pools asynchronously after module load (Brian Behlendorf, 2013-07-03; 1 file, -2/+0)

  One of the side effects of calling zvol_create_minors() in zvol_init()
  is that all pools listed in the cache file will be opened. Depending
  on the state and contents of your pool this operation can take a
  considerable length of time. Doing this at load time is undesirable
  because the kernel is holding a global module lock. This prevents
  other modules from loading and can serialize an otherwise parallel
  boot process. Doing this after module initialization also reduces the
  chances of accidentally introducing a race during module init.

  To ensure that /dev/zvol/<pool>/<dataset> devices are still
  automatically created after the module load completes, a udev rule has
  been added. When udev notices that the /dev/zfs device has been
  created, the 'zpool list' command will be run. This in turn causes all
  the pools listed in the zpool.cache file to be opened. Because this
  process is now driven asynchronously by udev there is the risk of
  problems in downstream distributions.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #756
  Issue #1020
  Issue #1234

* Cleanup zvol initialization code (Richard Yao, 2013-07-03; 1 file, -10/+18)

  The following error will occur on some (possibly all) kernels because
  blk_init_queue() will try to take the spinlock before we initialize
  it.

      BUG: spinlock bad magic on CPU#0, zpool/4054
      lock: 0xffff88021a73de60, .magic: 00000000,
      .owner: <none>/-1, .owner_cpu: 0
      Pid: 4054, comm: zpool Not tainted 3.9.3 #11
      Call Trace:
      [<ffffffff81478ef8>] spin_dump+0x8c/0x91
      [<ffffffff81478f1e>] spin_bug+0x21/0x26
      [<ffffffff812da097>] do_raw_spin_lock+0x127/0x130
      [<ffffffff8147d851>] _raw_spin_lock_irq+0x21/0x30
      [<ffffffff812c2c1e>] cfq_init_queue+0x1fe/0x350
      [<ffffffff812aacb8>] elevator_init+0x78/0x140
      [<ffffffff812b2677>] blk_init_allocated_queue+0x87/0xb0
      [<ffffffff812b26d5>] blk_init_queue_node+0x35/0x70
      [<ffffffff812b271e>] blk_init_queue+0xe/0x10
      [<ffffffff8125211b>] __zvol_create_minor+0x24b/0x620
      [<ffffffff81253264>] zvol_create_minors_cb+0x24/0x30
      [<ffffffff811bd9ca>] dmu_objset_find_spa+0xea/0x510
      [<ffffffff811bda71>] dmu_objset_find_spa+0x191/0x510
      [<ffffffff81253ea2>] zvol_create_minors+0x92/0x180
      [<ffffffff811f8d80>] spa_open_common+0x250/0x380
      [<ffffffff811f8ece>] spa_open+0xe/0x10
      [<ffffffff8122817e>] pool_status_check.part.22+0x1e/0x80
      [<ffffffff81228a55>] zfsdev_ioctl+0x155/0x190
      [<ffffffff8116a695>] do_vfs_ioctl+0x325/0x5a0
      [<ffffffff8116a950>] sys_ioctl+0x40/0x80
      [<ffffffff814812c9>] ? do_page_fault+0x9/0x10
      [<ffffffff81483929>] system_call_fastpath+0x16/0x1b
      zd0: unknown partition table

  We fix this by calling spin_lock_init() before blk_init_queue().

  The manner in which zvol_init() initializes structures is susceptible
  to a race between initialization and a probe on a zvol. We reorganize
  zvol_init() to prevent that.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>

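  The ordering fix itself is small; roughly (field names as in the zvol
  state structure of that era):

      /* The queue's spinlock must be initialized before
       * blk_init_queue() hands it to the elevator. */
      spin_lock_init(&zv->zv_lock);
      zv->zv_queue = blk_init_queue(zvol_request, &zv->zv_lock);
      if (zv->zv_queue == NULL)
              goto out_kmem;
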
* Call zvol_create_minors() in spa_open_common() when initializing pool (Pawel Jakub Dawidek, 2013-07-03; 2 files, -3/+13)

  There is an extremely odd bug that causes zvols to fail to appear on
  some systems, but not others. Recently, I was able to consistently
  reproduce this issue over a period of 1 month. The issue disappeared
  after I applied this change from FreeBSD.

  This is from FreeBSD's pool version 28 import, which occurred in
  revision 219089.

  Ported-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #441
  Issue #599