* Revert "Evict meta data from ghost lists + l2arc headers" (Brian Behlendorf, 2013-08-22, 1 file, -17/+1)

This reverts commit fadd0c4da1e2ccd6014800d8b1a0fd117dd323e8, which introduced a regression in honoring the meta limit.

Signed-off-by: Brian Behlendorf <[email protected]>
Close #1660

* Implement database to work around misreported physical sector sizes (Richard Yao, 2013-08-22, 1 file, -4/+129)

This implements vdev_bdev_database_check(). It alters the detected sector size of any device listed in a database of drives known to lie about their physical sector sizes.

This is based on "6931570 Add flash devices' VID/PID to disk table to advertising 4K physical sector size" from Open Solaris and on sg_simple4.c from sg3_utils. About two dozen lines are taken from sg_simple4.c, which is GPLv2 licensed. However, sg_simple4.c is analogous to a Hello World program and is safe for us to use. We requested that Douglas Gilbert, the author of sg_simple4.c, confirm that this is the case. A cutdown version of his response is as follows:

```
I would consider a SCSI INQUIRY example using the Linux sg driver
interface (also written by me) as the equivalent of an "hello world"
program in C.
```

The database was created with the help of the freenode and ZFSOnLinux communities. Some notes:

1. The following drives were both confirmed to lie via reports in IRC and contain capacity information in their identifiers:

       INTEL SSDSA2M080
       INTEL SSDSA2M160
       M4-CT256M4SSD2
       WDC WD15EARS-00S
       WDC WD15EARS-00Z
       WDC WD20EARS-00M

   The identifiers for different capacity models were extrapolated and added under the assumption that those models also lie. Google was used to verify that the extrapolated drive identifiers existed prior to their inclusion.

2. The OCZ-VERTEX2 3.5 identifier applies to two drives that differ solely in page size (and slightly in capacity). One uses 4096-byte pages and the other uses 8192-byte pages. Both are set to use 8192-byte pages. We could detect the page size by checking the capacity, but that would unnecessarily complicate the code.

3. It is possible for updated drive firmware to correctly report the sector size. There were reports of a few advanced format drives doing that. One report stated that the vendor changed the identification string while another was unclear on this. Both reports involved WDC models.

4. Google was used to determine the size of pages in the listed flash devices. Reports of 8192-byte pages took precedence over reports of 4096-byte pages.

5. Devices behind USB adapters can have their identification strings altered. Identification strings obtained across USB adapters are omitted and no attempt is made to correct for alterations made by USB adapters when doing comparisons against the database. Two entries in the Open Solaris database that appear to have been altered by a USB adapter were omitted.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1652

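The idea can be illustrated with a small model-string table consulted after the device reports its sector size. The table rows are taken from the notes above, but the helper name vdev_sector_size_override() and its shape are hypothetical, not the actual vdev_bdev_database_check() implementation.

```c
#include <string.h>

struct sector_size_entry {
	const char   *model;        /* identification string prefix */
	unsigned int  sector_size;  /* real physical sector size in bytes */
};

/* Two rows standing in for the full drive database. */
static const struct sector_size_entry sector_size_table[] = {
	{ "WDC WD15EARS-00S", 4096 },   /* advanced format disk reporting 512 */
	{ "OCZ-VERTEX2 3.5",  8192 },   /* flash device with 8192-byte pages */
};

static unsigned int
vdev_sector_size_override(const char *id, unsigned int detected)
{
	size_t i, n = sizeof (sector_size_table) / sizeof (sector_size_table[0]);

	for (i = 0; i < n; i++) {
		if (strncmp(id, sector_size_table[i].model,
		    strlen(sector_size_table[i].model)) == 0)
			return (sector_size_table[i].sector_size);
	}

	return (detected);  /* not listed, trust what the device reported */
}
```
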
* Linux 3.11 compat: fops->iterate() (Richard Yao, 2013-08-15, 8 files, -124/+260)

Commit torvalds/linux@2233f31aade393641f0eaed43a71110e629bb900 replaced ->readdir() with ->iterate() in struct file_operations. All filesystems must now use the new ->iterate method.

To handle this the code was reworked to use the new ->iterate interface. Care was taken to keep the majority of changes confined to the ZPL layer which is already Linux specific. However, minor changes were required to the common zfs_readdir() function.

Compatibility with older kernels was accomplished by adding versions of the trivial dir_emit* helper functions. Also the various *_readdir() functions were reworked into wrappers which create a dir_context structure to pass to the new *_iterate() functions.

Unfortunately, the new dir_emit* functions prevent us from passing a private pointer to the filldir function. The xattr directory code leveraged this ability through zfs_readdir() to generate the list of xattr names. Since we can no longer use zfs_readdir() a simplified zpl_xattr_readdir() function was added to perform the same task.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1653
Issue #1591

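For kernels that predate ->iterate(), a compat shim of roughly the following shape works: a minimal dir_context plus a dir_emit() that forwards to the old filldir callback. This is a hedged sketch, not the exact helpers the commit added; the field order and the HAVE_VFS_ITERATE guard name are assumptions.

```c
#include <linux/fs.h>

#if !defined(HAVE_VFS_ITERATE)
struct dir_context {
	filldir_t	actor;   /* old style filldir callback */
	loff_t		pos;     /* current directory offset */
	void		*dirent; /* opaque cookie handed to the callback */
};

static inline bool
dir_emit(struct dir_context *ctx, const char *name, int namelen,
    u64 ino, unsigned type)
{
	/* filldir returns 0 to continue, so invert for dir_emit semantics. */
	return (ctx->actor(ctx->dirent, name, namelen, ctx->pos,
	    ino, type) == 0);
}
#endif /* HAVE_VFS_ITERATE */
```
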
* Fix z_wr_iss_h zio_execute() import hang (Brian Behlendorf, 2013-08-15, 1 file, -3/+5)

Because we need to be more frugal about our stack usage under Linux, the __zio_execute() function was modified to re-dispatch zios to a ZIO_TASKQ_ISSUE thread when we're in a context which is known to be stack heavy. Those two contexts are the sync thread and whatever thread is performing spa initialization.

Unfortunately, this change introduced an unlikely bug which can result in a zio being re-dispatched indefinitely and never being executed. If during spa initialization we handle a zio with ZIO_PRIORITY_NOW it will be moved to the high priority queue. When __zio_execute() is called again for the zio it will misinterpret the context and re-dispatch it again. The system will get stuck spinning re-dispatching the zio and making no forward progress.

To fix this rare issue __zio_execute() has been updated not to re-dispatch zios on either the ZIO_TASKQ_ISSUE or ZIO_TASKQ_ISSUE_HIGH task queues.

In practice this issue was rarely reported and can usually be fixed by rebooting the system and importing the pool again.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1455

* Don't specifically open /etc/mtab - it is done in libzfs_init() (Turbo Fredriksson, 2013-08-15, 1 file, -8/+3)

libzfs_init() opens /etc/mtab a few lines further down, so we can share the open file handle.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1498

* No point in rewind() on mtab in zfs_unshare_proto() (Turbo Fredriksson, 2013-08-15, 1 file, -1/+0)

We're not really reading the file, but instead use libzfs_mnttab_find(), which does the necessary freopen() for us.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1498

* Use setmntent() OR fopen() (Turbo Fredriksson, 2013-08-15, 1 file, -0/+4)

Use it for the same reasons it's used in libzfs_init(); this was just overlooked because zinject gets minimal use.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1498

* Fix for re-reading /etc/mtab in zfs_is_mounted() (John Layman, 2013-08-14, 1 file, -4/+15)

When /etc/mtab is updated on Linux it's done atomically with rename(2). A new mtab is written, the existing mtab is unlinked, and the new mtab is renamed to /etc/mtab. This means that we must close the old file and open the new file to get the updated contents. Using rewind(3) will just move the file pointer back to the start of the file; freopen(3) will close and open the file.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1611

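A minimal userspace sketch of the freopen(3) approach, assuming the caller keeps the /etc/mtab stream open between checks; the helper name is made up for illustration.

```c
#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Returns non-zero if 'dir' is currently listed as a mount point. */
static int
is_mounted(FILE **mnttab, const char *dir)
{
	struct mntent *mnt;

	/*
	 * freopen() closes the stale stream and opens /etc/mtab again, so
	 * a replacement file renamed over the old one is actually seen.
	 * rewind() would keep reading the old, unlinked inode.
	 */
	if ((*mnttab = freopen("/etc/mtab", "r", *mnttab)) == NULL)
		return (0);

	while ((mnt = getmntent(*mnttab)) != NULL) {
		if (strcmp(mnt->mnt_dir, dir) == 0)
			return (1);
	}

	return (0);
}
```
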
* Illumos #3098 zfs userspace/groupspace fail (Yuri Pankov, 2013-08-14, 2 files, -22/+26)

3098 zfs userspace/groupspace fail without saying why when run as non-root
Reviewed by: Eric Schrock <[email protected]>
Approved by: Richard Lowe <[email protected]>

References:
    https://www.illumos.org/issues/3098
    illumos/illumos-gate@70f56fa69343b013f47e010537cff8ef3a7a40a5

Ported-by: Tim Chase <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1596

* Illumos #3618 ::zio dcmd does not show timestamp data (Matthew Ahrens, 2013-08-12, 4 files, -13/+16)

3618 ::zio dcmd does not show timestamp data
Reviewed by: Adam Leventhal <[email protected]>
Reviewed by: George Wilson <[email protected]>
Reviewed by: Christopher Siden <[email protected]>
Reviewed by: Garrett D'Amore <[email protected]>
Approved by: Dan McDonald <[email protected]>

References:
    http://www.illumos.org/issues/3618
    illumos/illumos-gate@c55e05cb35da47582b7afd38734d2f0d9c6deb40

Notes on porting to ZFS on Linux:

The original changeset mostly deals with the mdb ::zio dcmd. However, in order to provide the requested functionality it modifies the vdev and zio structures to keep the timing data in nanoseconds instead of ticks. It is these changes that are ported over in the commit in hand.

One visible change of this commit is that the default value of the 'zfs_vdev_time_shift' tunable is changed from

    zfs_vdev_time_shift = 6

to

    zfs_vdev_time_shift = 29

The original value of 6 was inherited from OpenSolaris and was suboptimal - since it shifted the raw tick value, it didn't compensate for different tick frequencies on Linux and OpenSolaris. The former has HZ=1000, while the latter has HZ=100. (Which itself led to other interesting performance anomalies under non-trivial load. The deadline scheduler delays the IO according to its priority - the lower the priority, the further the deadline is set. The delay is measured in units of "shifted ticks". Since the HZ value was 10 times higher, the delay units were 10 times shorter. Thus really low priority IO like resilver (delay is 10 units) and scrub (delay is 20 units) were scheduled much sooner than intended. The overall effect is that resilver and scrub IO consumed more bandwidth at the expense of the other IO.)

Now that the bookkeeping is done in nanoseconds the shift behaves correctly for any tick frequency (HZ).

Ported-by: Cyril Plisko <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1643

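A quick back-of-the-envelope check of the unit change described above (illustrative arithmetic only, not code from the commit):

```c
#include <stdio.h>

int
main(void)
{
	long old_unit_ticks = 1L << 6;	/* old delay unit: 64 ticks */
	long new_unit_ns = 1L << 29;	/* new delay unit: 2^29 ns, ~0.54 s */

	/* 640 ms on OpenSolaris (HZ=100) but only 64 ms on Linux (HZ=1000). */
	printf("old unit: %ld ms at HZ=100, %ld ms at HZ=1000\n",
	    old_unit_ticks * 1000 / 100, old_unit_ticks * 1000 / 1000);

	/* Roughly half a second regardless of the kernel's tick frequency. */
	printf("new unit: %ld ms\n", new_unit_ns / 1000000);

	return (0);
}
```
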
* Linux 3.8 compat: Support CONFIG_UIDGID_STRICT_TYPE_CHECKS (Richard Yao, 2013-08-09, 3 files, -7/+7)

When CONFIG_UIDGID_STRICT_TYPE_CHECKS is enabled uid_t/gid_t are replaced by kuid_t/kgid_t, which are structures instead of integral types. This causes any code that uses an integral type to fail to build. The User Namespace functionality introduced in Linux 3.8 requires CONFIG_UIDGID_STRICT_TYPE_CHECKS, so we could not build against any kernel that supported it.

We resolve this by converting between the new kuid_t/kgid_t structures and the original uid_t/gid_t types.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1589

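The conversion can be expressed with the stock helpers from <linux/uidgid.h> (available on kernels that define kuid_t/kgid_t); the wrapper names below are hypothetical, chosen only to show the direction of each conversion, and are not the macros the commit added.

```c
#include <linux/uidgid.h>

/* kuid_t/kgid_t -> plain integral ids, e.g. for storing in a znode. */
static inline uid_t zfs_uid_from_kuid(kuid_t kuid) { return (__kuid_val(kuid)); }
static inline gid_t zfs_gid_from_kgid(kgid_t kgid) { return (__kgid_val(kgid)); }

/* Plain integral ids -> kuid_t/kgid_t, e.g. for filling in struct inode. */
static inline kuid_t zfs_kuid_from_uid(uid_t uid) { return (KUIDT_INIT(uid)); }
static inline kgid_t zfs_kgid_from_gid(gid_t gid) { return (KGIDT_INIT(gid)); }
```
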
* Evict meta data from ghost lists + l2arc headers (Brian Behlendorf, 2013-08-09, 1 file, -1/+17)

When the meta limit is exceeded the ARC evicts some meta data buffers from the mfu+mru lists. Unfortunately, for meta data heavy workloads it's possible for these buffers to accumulate on the ghost lists if arc_c doesn't exceed arc_size. To handle this case arc_adjust_meta() has been extended to explicitly evict meta data buffers from the ghost lists in proportion to what was evicted from the mfu+mru lists.

If this is insufficient we request that the VFS release some inodes and dentries. This will result in the release of some dnodes which are counted as 'other' metadata.

Signed-off-by: Brian Behlendorf <[email protected]>

* Allow arc_evict_ghost() to only evict meta data (Brian Behlendorf, 2013-08-09, 1 file, -9/+13)

The default behavior of arc_evict_ghost() is to start by evicting data buffers. Then, only if the requested number of bytes to evict cannot be satisfied by data buffers, move on to meta data buffers. This is ideal for honoring arc_c since it's preferable to keep the meta data cached. However, if we're trying to free memory from the arc to honor the meta limit it's a problem because we will need to discard all the data to get to the meta data.

To avoid this issue arc_evict_ghost() is now passed a fourth argument describing which buffer type to start with. The arc_evict() function already behaves exactly like this for the same reason, so this is consistent with the existing code. All existing callers have been updated to pass ARC_BUFC_DATA so this patch introduces no functional change. New callers may pass ARC_BUFC_METADATA to skip immediately to evicting meta data, leaving the normal data untouched.

Signed-off-by: Brian Behlendorf <[email protected]>

* Illumos #3964 L2ARC should always compress metadata buffers (Saso Kiselkov, 2013-08-08, 3 files, -2/+5)

3964 L2ARC should always compress metadata buffers
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>

References:
    https://www.illumos.org/issues/3964

Ported-by: Brian Behlendorf <[email protected]>
Closes #1379

* Illumos #3137 L2ARC compression (Saso Kiselkov, 2013-08-08, 7 files, -79/+423)

3137 L2ARC compression
Reviewed by: George Wilson <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Approved by: Dan McDonald <[email protected]>

References:
    illumos/illumos-gate@aad02571bc59671aa3103bb070ae365f531b0b62
    https://www.illumos.org/issues/3137
    http://wiki.illumos.org/display/illumos/L2ARC+Compression

Notes for Linux port:

A l2arc_nocompress module option was added to prevent the compression of l2arc buffers regardless of how a dataset's compression property is set. This allows the legacy behavior to be preserved.

Ported by: James H <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1379

* Return -1 from arc_shrinker_func() (Richard Yao, 2013-08-08, 1 file, -3/+1)

This is analogous to SPL commit zfsonlinux/spl@b9b3715. While we don't have clear evidence of systems getting caught here indefinitely like in the SPL, this ensures that it will never happen.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1579

* Return correct type and offset from zfs_readdir (Richard Yao, 2013-08-07, 1 file, -1/+1)

zfs_readdir() is used by getdents(), which provides a list of all files in a directory, their types and an offset that can be used by llseek() to seek to the next directory entry. On Solaris, the first two directory entries "." and ".." respectively have offsets 1 and 2 on ZFS, while the other files have rather large numbers. Currently, ZFSOnLinux is giving "." offset 0 and all other entries large numbers. The first entry's next entry offset points to itself, which causes software that uses llseek() in conjunction with getdents() for filesystem navigation to enter an infinite loop. The offsets used for each directory entry are filesystem specific on all platforms, so we can fix this by adopting the Solaris behavior.

Also, we currently report each directory entry as having type 0 (???). This is not wrong, but we can do better. getdents() on Solaris does not appear to provide this information, but it does on Linux and Mac OS X. ZFS provides easy access to type information in zfs_readdir(), so this patch provides this as well.

Reported-by: Andrey <[email protected]>
Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1624

* Illumos #3639 zpool.cache should skip over readonly pools (George Wilson, 2013-08-07, 1 file, -1/+9)

3639 zpool.cache should skip over readonly pools
Reviewed by: Eric Schrock <[email protected]>
Reviewed by: Adam Leventhal <[email protected]>
Reviewed by: Basil Crow <[email protected]>
Approved by: Gordon Ross <[email protected]>

References:
    illumos/illumos-gate@fb02ae025247e3b662600e5a9c1b4c33ecab7d72
    https://www.illumos.org/issues/3639

Normally we don't list pools that are imported read-only in the cache file, however you can accidentally get one into the cache file by importing and exporting a read-write pool while a read-only pool is imported:

    $ zpool import -o readonly test1
    $ zpool import test2
    $ zpool export test2
    $ zdb -C

This is a problem because if the machine reboots we import all pools in the cache file as read-write.

Ported-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>

* Write dirty inodes on close (Brian Behlendorf, 2013-08-07, 1 file, -0/+3)

When the property atime=on is set, operations which only access an inode do cause an atime update. However, it turns out that dirty inodes with updated atimes are only written to disk when the inodes get evicted from the cache. Somewhat surprisingly the source suggests that this isn't a ZoL specific issue.

This behavior may in part explain why zfs's reclaim logic has been observed to be slow. When reclaiming inodes it's likely that they have a dirty atime which will force a write to disk.

Obviously we don't want to force a write to disk for every atime update; these need to be batched. The right way to do this is to fully implement the .dirty_inode and .write_inode callbacks. However, to do that right requires proper unification of some fields in the znode/inode. Then we could just mark the inode dirty and leave it to the VFS to call .write_inode periodically.

Until that work gets done we have to settle for some middle ground. The simplest and safest thing we can do for now is to write the dirty inode on last close. This should prevent the majority of inodes in the cache from having dirty atimes and not drastically increase the number of writes.

Some rudimentary testing to show how long it takes to drop 500,000 inodes from the cache shows promising results. This is as expected because we're no longer doing lots of IO as part of the eviction; it was done earlier during the close.

    w/out patch: ~30s to drop 500,000 inodes with drop_caches.
    with patch:   ~3s to drop 500,000 inodes with drop_caches.

Signed-off-by: Brian Behlendorf <[email protected]>

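A hedged sketch of the "write on last close" idea: the hook is shown as a zpl_* file-release callback, following the ZPL naming seen elsewhere in this log, but the dirty check and the use of write_inode_now() here are illustrative assumptions rather than the committed code.

```c
#include <linux/fs.h>
#include <linux/writeback.h>

static int
zpl_release(struct inode *ip, struct file *filp)
{
	/*
	 * Simplified, unlocked check: if this inode picked up a dirty
	 * atime while it was open, push it out now instead of waiting
	 * for cache eviction to do it.
	 */
	if (ip->i_state & I_DIRTY)
		write_inode_now(ip, 0);

	return (0);
}
```
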
* Fix man page for the sync property (Steven Burgess, 2013-08-07, 1 file, -2/+2)

The help output for zfs set/get says that sync can be one of

    standard | always | disabled

but the man page claims it can be

    sync=default | always | disabled

The accepted value is standard, so this changes the man page to give the correct values.

Signed-off-by: Steven Burgess <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1634

* Fix the default checksum algorithm in the manpage (Massimo Maggi, 2013-08-07, 1 file, -1/+1)

The manpage reports fletcher2, but in zio.h ZIO_CHECKSUM_ON_VALUE is defined as ZIO_CHECKSUM_FLETCHER_4.

Signed-off-by: Massimo Maggi <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1628

* Add kmod repo integration (Brian Behlendorf, 2013-08-01, 3 files, -170/+81)

When the kmod packaging infrastructure was originally added the dependency on the rpmfusion yum repositories was disabled. This was done at the time in favour of getting local builds working. Now the time has come to conditionally re-enable that functionality so we can properly provide binary kmod packages.

    ./configure --with-config=srpm
    make SRPM_DEFINE_KMOD='--define="repo rpmfusion"' srpm-kmod
    mock rebuild zfs-kmod-x.y.z-r.el6.src.rpm

One nice benefit of finishing this work is that the generic and fedora spl-kmod spec files can be merged again.

Signed-off-by: Brian Behlendorf <[email protected]>

* Export additional dmu symbols (Brian Behlendorf, 2013-08-01, 1 file, -0/+6)

The dmu_prefetch, dmu_free_long_range, dmu_free_object, dmu_prealloc, dmu_write_policy, and dmu_sync symbols have been exported so they may be used by other modules.

Signed-off-by: Brian Behlendorf <[email protected]>

* dmu_tx: Fix possible NULL pointer dereference (Nathaniel Clark, 2013-08-01, 1 file, -2/+5)

dmu_tx_hold_object_impl can return NULL on error. Check for this condition prior to dereferencing the pointer. This can only occur if the passed object was invalid or unallocated.

Signed-off-by: Nathaniel Clark <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1610

* Remove b_thawed from arc_buf_hdr_t (Richard Yao, 2013-08-01, 1 file, -11/+0)

The code involving b_thawed appears to be dead, so let's discard it.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1614

* Remove arc_data_buf_alloc()/arc_data_buf_free() (Richard Yao, 2013-08-01, 2 files, -19/+0)

These functions are used in neither Illumos nor ZFSOnLinux. They appear to have been replaced by arc_buf_alloc()/arc_buf_free(), so let's remove them.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1614

* Remove zio_alloc_arena (Richard Yao, 2013-08-01, 1 file, -6/+0)

We declare zio_alloc_arena using extern, but it does not appear to exist anywhere in the code. This permits undefined behavior, so let's remove it.

Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1614

* Make arc+l2arc module options writable (Brian Behlendorf, 2013-07-30, 1 file, -49/+60)

The l2arc module options can be made safely writable. This allows the options to be changed without unloading/loading the modules.

Signed-off-by: Brian Behlendorf <[email protected]>

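Making a module option writable amounts to relaxing its sysfs permissions; the sketch below shows the pattern with l2arc_write_max as a representative parameter. The default value and description string are assumptions, not copied from the commit.

```c
#include <linux/module.h>

unsigned long l2arc_write_max = 8 * 1024 * 1024;	/* assumed default */

/*
 * 0644 instead of 0444: root can now tune the value at runtime through
 * /sys/module/zfs/parameters/l2arc_write_max without reloading zfs.ko.
 */
module_param(l2arc_write_max, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_max, "Max write bytes per interval");
```
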
* Change l2arc_norw default to zero (Brian Behlendorf, 2013-07-29, 1 file, -1/+1)

These days modern SSDs can efficiently service concurrent reads and writes. When this flag was added that wasn't really the case for a variety of SSD controllers. But now we can set the default value to take advantage of this parallelism and only disable this as needed for specific troublesome hardware.

Signed-off-by: Brian Behlendorf <[email protected]>

* Fix inaccurate arcstat_l2_hdr_size calculations (Ying Zhu, 2013-07-29, 1 file, -2/+7)

Based on the comments in arc.c we know that buffers can exist both in arc and l2arc; under this circumstance both arc_buf_hdr_t and l2arc_buf_hdr_t will be allocated. However the current logic only accounts for the memory that l2arc_buf_hdr takes up when the buffer's state transfers from or to arc_l2c_only. This will cause obvious deviations for illumos's zfs version since the sizeof(l2arc_buf_hdr) is larger than ZOL's.

We can implement the calculation in the following simple way:

1. When we allocate a l2arc_buf_hdr_t we add its memory consumption instantly and subtract it when we free or evict the l2arc buf.

2. According to l2arc_hdr_stat_add and l2arc_hdr_stat_remove, if the buffer only stays in l2arc we should also add the memory its arc_buf_hdr_t consumes, so we only need to add HDR_SIZE to arcstat_l2_hdr_size since we already accounted for L2HDR_SIZE in step 1, and the same applies for transferring arc bufs from the l2arc-only state.

The testbox has 2 4-core Intel Xeon CPUs (2.13GHz) with 16GB memory, and the tests were set up in the following way:

1. Fdisked a SATA disk into two partitions, one partition for zpool storage and the other one used as the cache device.

2. Generated some files occupying 14GB altogether in the zpool prepared in step 1 using iozone.

3. Read them all using md5sum and watched the l2arc related statistics in /proc/spl/kstat/zfs/arcstats. After the reading ended the l2_hdr_size and l2_size were shown like this:

       l2_size         4    4403780608
       l2_hdr_size     4    0

   which was weird.

4. After applying this patch and rerunning steps 1-3, the results were as follows:

       l2_size         4    4306443264
       l2_hdr_size     4    535600

   These numbers made sense; on 64-bit systems the sizeof(l2arc_buf_hdr_t) is 16 bytes. Assume all blocks cached by l2arc are 128KB, so 535600/16*128*1024=4387635200. Since not all blocks are equal-sized, the theoretical result will be a little bigger, as we can see.

Since I'm familiar with the systemtap instrumentation tool I used it to examine what had happened. The script looked like this:

    probe module("zfs").function("arc_change_state")
    {
        if ($new_state == $arc_l2_only)
            printf("change arc buf to arc_l2_only\n")
    }

It will print out some information each time we call the function arc_change_state if the argument new_state is arc_l2_only. I gathered the trace logs and found that none of the arc bufs ran into the arc state arc_l2_only while the tests were running; this was the reason why l2_hdr_size in step 3 was 0. The arc bufs fell into arc_l2_only when the pool or the filesystem was offlined.

Signed-off-by: Ying Zhu <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>

* Fix 'zpool list -H' error code (Brian Behlendorf, 2013-07-23, 1 file, -1/+1)

Due to an uninitialized variable it was possible for the command 'zpool list -H' to return a non-zero error when there are no pools.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1605

* Add missing -v to usage help for zpool list. (Christer Ekholm, 2013-07-22, 1 file, -1/+1)

Signed-off-by: Brian Behlendorf <[email protected]>

* Add documentation for -T and interval to "zpool list" (Christer Ekholm, 2013-07-22, 1 file, -3/+15)

zpool list has the same options for repeating as zpool iostat, but that is not documented. This patch adds the documentation.

Signed-off-by: Brian Behlendorf <[email protected]>

* Fix arc_adapt() spinning in iterate_supers_type() (Brian Behlendorf, 2013-07-17, 3 files, -0/+44)

The iterate_supers_type() function which was introduced in the 3.0 kernel was supposed to provide a safe way to call an arbitrary function on all super blocks of a specific type. Unfortunately, because a list_head was used, a bug was introduced which made it possible for iterate_supers_type() to get stuck spinning on a super block which was just deactivated.

This can occur because when the list head is removed from the fs_supers list it is reinitialized to point to itself. If the iterate_supers_type() function happens to be processing the removed list_head it will get stuck spinning on that list_head. The bug was fixed in the 3.3 kernel by converting the list_head to an hlist_node.

However, to resolve the issue for existing 3.0 - 3.2 kernels we detect when a list_head is used. Then, to prevent the spinning from occurring, the .next pointer is set to the fs_supers list_head which ensures the iterate_supers_type() function will always terminate.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1045
Closes #861
Closes #790

* Fix read-only pool hang on unmount (Brian Behlendorf, 2013-07-17, 1 file, -1/+5)

During mount a filesystem dataset would have the MS_RDONLY bit incorrectly cleared even if the entire pool was read-only. There is existing code to handle this case but it was being run before the property callbacks were registered. To resolve the issue we move this read-only code after the callback registration.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1338

* Fix zfsctl_expire_snapshot() deadlock (Brian Behlendorf, 2013-07-12, 1 file, -0/+8)

It is possible for an automounted snapshot which is expiring to deadlock with a manual unmount of the snapshot. This can occur because taskq_cancel_id() will block if the task is currently executing until it completes. But it will never complete because zfsctl_unmount_snapshot() is holding the zsb->z_ctldir_lock which zfsctl_expire_snapshot() must acquire.

    ---------------------- z_unmount/0:2153 ---------------------
    mutex_lock                 <blocking on zsb->z_ctldir_lock>
    zfsctl_unmount_snapshot
    zfsctl_expire_snapshot
    taskq_thread

    ------------------------- zfs:10690 -------------------------
    taskq_wait_id              <waiting for z_unmount to exit>
    taskq_cancel_id
    __zfsctl_unmount_snapshot
    zfsctl_unmount_snapshot    <takes zsb->z_ctldir_lock>
    zfs_unmount_snap
    zfs_ioc_destroy_snaps_nvl
    zfsdev_ioctl
    do_vfs_ioctl

We resolve the deadlock by dropping the zsb->z_ctldir_lock before calling __zfsctl_unmount_snapshot(). The lock is only there to prevent concurrent modification to the zsb->z_ctldir_snaps AVL tree. Moreover, we're careful to remove the zfs_snapentry_t from the AVL tree before dropping the lock which ensures no other tasks can find it. On failure it's added back to the tree.

Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Chris Dunlap <[email protected]>
Closes #1527

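A hedged sketch of the lock-dropping pattern described above. The field and type names follow the commit message, but the wrapper itself and the signature of __zfsctl_unmount_snapshot() are illustrative assumptions, not the committed code.

```c
static void
zfsctl_expire_snapshot_sketch(zfs_sb_t *zsb, zfs_snapentry_t *sep)
{
	int error;

	mutex_enter(&zsb->z_ctldir_lock);
	/* Detach the entry first so no other task can find it... */
	avl_remove(&zsb->z_ctldir_snaps, sep);
	mutex_exit(&zsb->z_ctldir_lock);

	/* ...then perform the blocking unmount without holding the lock. */
	error = __zfsctl_unmount_snapshot(zsb, sep, 0);	/* signature assumed */
	if (error != 0) {
		/* On failure put the entry back into the AVL tree. */
		mutex_enter(&zsb->z_ctldir_lock);
		avl_add(&zsb->z_ctldir_snaps, sep);
		mutex_exit(&zsb->z_ctldir_lock);
	}
}
```
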
* Add dkms_version conditional (Brian Behlendorf, 2013-07-11, 1 file, -0/+4)

By adding a dkms_version conditional it's now possible to specify an exact version of dkms. This is used by the Fedora and EPEL yum repositories to ensure the patched version of dkms provided by the repository is installed. The patched version of dkms ensures that the spl modules are built before the zfs modules.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1466

* Improve N-way mirror performance (Brian Behlendorf, 2013-07-11, 1 file, -3/+73)

The read bandwidth of an N-way mirror can be increased by 50%, and the IOPs by 10%, by more carefully selecting the preferred leaf vdev.

The existing algorithm selects a preferred leaf vdev based on the offset of the zio request modulo the number of members in the mirror. It assumes the drives are of equal performance and that spreading the requests randomly over both drives will be sufficient to saturate them. In practice this results in the leaf vdevs being under utilized.

Utilization can be improved by preferentially selecting the leaf vdev with the least pending IO. This prevents leaf vdevs from being starved and compensates for performance differences between disks in the mirror. Faster vdevs will be sent more work and the mirror performance will not be limited by the slowest drive.

In the common case where all the pending queues are full and there is no single least busy leaf vdev, a batching strategy is employed. Of the N least busy vdevs one is selected with equal probability to be the preferred vdev for T microseconds. Compared to randomly selecting a vdev to break the tie, batching the requests greatly improves the odds of merging the requests in the Linux elevator.

The testing results show a significant performance improvement for all four workloads tested. The workloads were generated using the fio benchmark and are as follows.

1) 1MB sequential reads from 16 threads to 16 files (MB/s).
2) 4KB sequential reads from 16 threads to 16 files (MB/s).
3) 1MB random reads from 16 threads to 16 files (IOP/s).
4) 4KB random reads from 16 threads to 16 files (IOP/s).

                       |       Pristine        |       With 1461        |
                       |  Sequential   Random  |  Sequential   Random   |
                       |  1MB    4KB  1MB  4KB |  1MB    4KB  1MB   4KB |
                       |  MB/s  MB/s  IO/s IO/s|  MB/s  MB/s  IO/s IO/s |
        ---------------+-----------------------+------------------------+
        2 Striped      |  226    243   11  304 |  222    255   11   299 |
        2 2-Way Mirror |  302    324   16  534 |  433    448   23   571 |
        2 3-Way Mirror |  429    458   24  714 |  648    648   41   808 |
        2 4-Way Mirror |  562    601   36  849 |  816    828   82   926 |

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1461

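An illustrative sketch of the "prefer the least busy leaf" selection described above; mirror_child_pending_ios() is a hypothetical helper standing in for however pending IO is counted per child vdev, and the batching/tie-breaking logic is omitted.

```c
static int
vdev_mirror_child_select_sketch(mirror_map_t *mm)
{
	uint64_t best_pending = UINT64_MAX;
	int c, best = -1;

	for (c = 0; c < mm->mm_children; c++) {
		uint64_t pending;

		if (!vdev_readable(mm->mm_child[c].mc_vd))
			continue;	/* skip unreadable leaves */

		pending = mirror_child_pending_ios(mm->mm_child[c].mc_vd);
		if (pending < best_pending) {
			best_pending = pending;
			best = c;	/* least loaded leaf so far */
		}
	}

	return (best);
}
```
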
* Add new kstat for monitoring time in dmu_tx_assign (Prakash Surya, 2013-07-11, 3 files, -0/+72)

This change adds a new kstat to gain some visibility into the amount of time spent in each call to dmu_tx_assign. A histogram is exported via a new dmu_tx_assign_histogram-$POOLNAME file. The histogram records how often dmu_tx_assign took a given range of time to complete. For example, given the below histogram file:

    $ cat /proc/spl/kstat/zfs/dmu_tx_assign_histogram-tank
    12 1 0x01 32 1536 19792068076691 20516481514522
    name                            type data
    1 us                            4    859
    2 us                            4    252
    4 us                            4    171
    8 us                            4    2
    16 us                           4    0
    32 us                           4    2
    64 us                           4    0
    128 us                          4    0
    256 us                          4    0
    512 us                          4    0
    1024 us                         4    0
    2048 us                         4    0
    4096 us                         4    0
    8192 us                         4    0
    16384 us                        4    0
    32768 us                        4    1
    65536 us                        4    1
    131072 us                       4    1
    262144 us                       4    4
    524288 us                       4    0
    1048576 us                      4    0
    2097152 us                      4    0
    4194304 us                      4    0
    8388608 us                      4    0
    16777216 us                     4    0
    33554432 us                     4    0
    67108864 us                     4    0
    134217728 us                    4    0
    268435456 us                    4    0
    536870912 us                    4    0
    1073741824 us                   4    0
    2147483648 us                   4    0

one can see that most calls to dmu_tx_assign completed in 32us or less, but a few outliers did not. Specifically, 4 of the calls took between 131072us and 262144us. This information is difficult, if not impossible, to gather without this change.

Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1584

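A minimal sketch of the power-of-two bucketing implied by the histogram above; the array and helper names are hypothetical and the kstat plumbing is omitted.

```c
#include <stdint.h>

#define	DMU_TX_ASSIGN_BUCKETS	32

static uint64_t dmu_tx_assign_histogram[DMU_TX_ASSIGN_BUCKETS];

/* Bucket N counts calls that completed in no more than 2^N microseconds. */
static void
dmu_tx_assign_record_usecs(uint64_t usecs)
{
	int idx = 0;

	while (((1ULL << idx) < usecs) && (idx < DMU_TX_ASSIGN_BUCKETS - 1))
		idx++;

	dmu_tx_assign_histogram[idx]++;
}
```
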
* Log pool suspension warnings to the console (Brian Behlendorf, 2013-07-10, 1 file, -0/+3)

In the event that a pool gets suspended log this information to the console. This is critical information and we want to make sure it gets logged.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1555

* Use GFP_NOIO in vdev_disk_io_flush() (Brian Behlendorf, 2013-07-10, 1 file, -1/+1)

To avoid a potential deadlock when using a zvol as a swap device, prevent vdev_disk_io_flush() from performing IO during the bio_alloc().

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1508

* Fix zpool_read_label() (Brian Behlendorf, 2013-07-09, 1 file, -1/+1)

The zpool_read_label() function was subtly broken due to a difference of behavior in fstat64(2) on Solaris vs Linux. Under Solaris when a block device is stat'ed the st_size field will contain the size of the device in bytes. Under Linux this is only true for regular files and symlinks. A compatibility function called fstat64_blk(2) was added which can be used when the Solaris behavior is required.

This flaw was never noticed because the only time we would need to use the device size is when the first two labels are damaged. I noticed this issue while adding the zpool_clear_label() function, which is similar in design and does require us to write all the labels.

Signed-off-by: Brian Behlendorf <[email protected]>

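A userspace sketch of the fstat64_blk() idea: fall back to the BLKGETSIZE64 ioctl to fill in st_size for block devices, since Linux reports 0 there. The helper name and the minimal error handling are illustrative, not copied from libspl.

```c
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>		/* BLKGETSIZE64 */

static int
fstat_blk(int fd, struct stat *st)
{
	if (fstat(fd, st) == -1)
		return (-1);

	/* For block devices ask the kernel for the size in bytes. */
	if (S_ISBLK(st->st_mode) &&
	    ioctl(fd, BLKGETSIZE64, &st->st_size) != 0)
		return (-1);

	return (0);
}
```
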
* Add FreeBSD 'zpool labelclear' command (Dmitry Khasanov, 2013-07-09, 5 files, -2/+187)

The FreeBSD implementation of zfs adds the 'zpool labelclear' command. Since this functionality is helpful and straight forward to add it is being included in ZoL.

References:
    freebsd/freebsd@119a041dc9230275239a8de68c534c0754181e7e

Ported-by: Dmitry Khasanov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1126

* Readd zpool_clear_label() from OpenSolaris (Dmitry Khasanov, 2013-07-09, 1 file, -0/+30)

This patch restores the zpool_clear_label() function from OpenSolaris. This was removed by commit d603ed6 because it wasn't clear we had a use for it in ZoL. However, this functionality is a prerequisite for adding the 'zpool labelclear' command from FreeBSD. As part of bringing this change in, the zpool_clear_label() function was changed to use fstat64_blk(2) for compatibility with Linux.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1126

* zdb: enhancement - Display SA xattrs. (Tim Chase, 2013-07-09, 1 file, -0/+56)

If the znode has SA xattrs, display them following the other standard attributes. The format used is similar to that used when listing the contents of a ZAP. It is as follows:

    $ zdb -vvv <pool>/<dataset> <object>
    ...
        SA xattrs: <size> bytes, <number> entries
            <name1> = <value1>
            <name2> = <value2>
            ...

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1581

* Avoid abort() in vn_rdwr(): libzpool/kernel.c (Mike Leddy, 2013-07-09, 1 file, -1/+1)

Make sure that the buffer is aligned to 512 bytes on Linux so that a pread call combined with O_DIRECT does not return EINVAL.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1570

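A small userspace sketch of the alignment requirement mentioned above, assuming a 512-byte logical sector size; the helper name is made up, and on success the caller is responsible for freeing the buffer returned through bufp.

```c
#include <stdlib.h>
#include <unistd.h>

static ssize_t
aligned_pread(int fd, size_t size, off_t offset, void **bufp)
{
	void *buf;

	/* O_DIRECT reads fail with EINVAL unless the buffer is aligned. */
	if (posix_memalign(&buf, 512, size) != 0)
		return (-1);

	*bufp = buf;
	return (pread(fd, buf, size, offset));
}
```
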
* Improve code in arc_buf_remove_ref (Ying Zhu, 2013-07-09, 1 file, -1/+2)

When we remove references of arc bufs in the arc_anon state we needn't take its header's hash_lock, so postpone it to where we really need it to avoid unnecessary invocations of buf_hash().

Signed-off-by: Ying Zhu <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1557

* Update zio.c (Shen Yan, 2013-07-09, 1 file, -1/+1)

cv_wait_io() is used instead of cv_wait() to account for IO time.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #1566

* Fix the comment in zfs.h (Shen Yan, 2013-07-09, 1 file, -2/+2)

The path to the code has also changed in zfsonlinux.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue #1566

* Add zfs_autoimport_disable tunable (Brian Behlendorf, 2013-07-09, 1 file, -0/+8)

There are times when it is desirable for zfs to not automatically populate the spa namespace at module load time using the pools in the /etc/zfs/zpool.cache file. The zfs_autoimport_disable module option has been added to control this behavior.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue #330

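A sketch of how such a tunable is typically wired up: a module parameter checked early in the cache-file load path. The parameter name follows the commit; the default value, permissions, description string, and the spa_config_load() placement shown here are assumptions.

```c
#include <linux/module.h>

int zfs_autoimport_disable = 0;		/* default assumed */

void
spa_config_load(void)
{
	/* Skip reading /etc/zfs/zpool.cache when autoimport is disabled. */
	if (zfs_autoimport_disable)
		return;

	/* ...otherwise read the cache file and populate the spa namespace. */
}

module_param(zfs_autoimport_disable, int, 0644);
MODULE_PARM_DESC(zfs_autoimport_disable,
	"Disable pool import at module load via zpool.cache");
```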