path: root/module/zfs/zio.c
Commit message  (Author, Date, Files, Lines)
* Pre-allocate vdev I/O buffers  (Brian Behlendorf, 2012-08-27, 1 file, -0/+22)

  The vdev queue layer may require a small number of buffers when
  attempting to create aggregate I/O requests. Rather than attempting to
  allocate them from the global zio buffers, which is slow under memory
  pressure, it makes sense to pre-allocate them because...

  1) These buffers are short lived. They are only required for the life
     of a single I/O, at which point they can be used by the next I/O.

  2) The maximum number of concurrent buffers needed by a vdev is small.
     It's roughly limited by the zfs_vdev_max_pending tunable, which
     defaults to 10.

  By keeping a small list of these buffers per-vdev we can ensure one is
  always available when we need it. This significantly reduces contention
  on the vq->vq_lock, because we no longer need to perform a slow
  allocation under this lock. This is particularly important when memory
  is already low on the system.

  It would probably be wise to extend the use of these buffers beyond
  aggregate I/O and into the raidz implementation. The inability to
  quickly allocate buffers for the parity stripes could result in similar
  problems.

  Signed-off-by: Brian Behlendorf <[email protected]>
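  Conceptually the change amounts to a tiny per-vdev free list consulted
  before the global allocator. A minimal sketch of the idea, using
  illustrative names (vq_free_bufs, vq_nfree, VQ_NBUFS) rather than the
  actual vdev_queue fields:

      /* Sketch only: the cap of 10 mirrors the zfs_vdev_max_pending default. */
      #define VQ_NBUFS 10

      static void *
      vdev_queue_buf_get(vdev_queue_t *vq)
      {
              void *buf = NULL;

              mutex_enter(&vq->vq_lock);
              if (vq->vq_nfree > 0)
                      buf = vq->vq_free_bufs[--vq->vq_nfree];
              mutex_exit(&vq->vq_lock);

              /* Slow path, taken only when the pre-allocated list is empty. */
              if (buf == NULL)
                      buf = zio_buf_alloc(SPA_MAXBLOCKSIZE);
              return (buf);
      }

      static void
      vdev_queue_buf_put(vdev_queue_t *vq, void *buf)
      {
              mutex_enter(&vq->vq_lock);
              if (vq->vq_nfree < VQ_NBUFS) {
                      vq->vq_free_bufs[vq->vq_nfree++] = buf;
                      buf = NULL;
              }
              mutex_exit(&vq->vq_lock);

              if (buf != NULL)
                      zio_buf_free(buf, SPA_MAXBLOCKSIZE);
      }

  The only work ever done under vq->vq_lock is a pointer swap; the
  expensive allocation, when it happens at all, happens outside the lock.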
* Illumos #1909: disk sync write perf regression when slog is used post oi_148  (Brian Behlendorf, 2012-04-19, 1 file, -5/+14)

  Reviewed by: Matt Ahrens <[email protected]>
  Reviewed by: Eric Schrock <[email protected]>
  Reviewed by: Robert Mustacchi <[email protected]>
  Reviewed by: Bill Pijewski <[email protected]>
  Reviewed by: Richard Elling <[email protected]>
  Reviewed by: Steve Gonczi <[email protected]>
  Reviewed by: Garrett D'Amore <[email protected]>
  Reviewed by: Dan McDonald <[email protected]>
  Reviewed by: Albert Lee <[email protected]>
  Approved by: Eric Schrock <[email protected]>

  References to Illumos issue: https://www.illumos.org/issues/1909

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #680
* Add zio constructor/destructor  (Brian Behlendorf, 2012-03-21, 1 file, -15/+63)

  Add a standard zio constructor and destructor. Normally, this is done
  to reduce the cost of allocating a new structure by moving expensive
  operations such as memory allocations out of the allocation path.
  However, in this case none of the operations moved out of zio_create()
  were really very expensive.

  This change was principally made as a debug patch (and workaround) for
  a zio_destroy() race. There is good evidence that zio_create() is
  reinitializing a mutex which is really still in use by another thread.
  This would completely explain the observed symptoms in the issue
  report.

  This patch doesn't fix the root cause of the race, but it should make
  it less likely by only initializing the mutex once in the constructor.
  Also, this particular flaw might have gone unnoticed in other zfs
  implementations due to the specific implementation details of Linux
  ticket spinlocks.

  Once the real root cause is determined and resolved this change can be
  safely reverted. Until then this should help work around the issue.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #496
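  The pattern looks like the sketch below. The constructor/destructor
  names and the exact fields initialized (io_lock, io_cv) are assumed
  here for illustration:

      static int
      zio_cons(void *arg, void *unused, int kmflag)
      {
              zio_t *zio = arg;

              bzero(zio, sizeof (zio_t));
              mutex_init(&zio->io_lock, NULL, MUTEX_DEFAULT, NULL);
              cv_init(&zio->io_cv, NULL, CV_DEFAULT, NULL);

              return (0);
      }

      static void
      zio_dest(void *arg, void *unused)
      {
              zio_t *zio = arg;

              cv_destroy(&zio->io_cv);
              mutex_destroy(&zio->io_lock);
      }

      /* Registered once at module init instead of passing NULLs: */
      zio_cache = kmem_cache_create("zio_cache", sizeof (zio_t),
          0, zio_cons, zio_dest, NULL, NULL, NULL, 0);

  With this in place the mutex is initialized once per cache object, when
  the slab is grown, rather than on every zio_create().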
* Revert "Add zio constructor/destructor"Brian Behlendorf2012-03-211-62/+15
| | | | | | | | | | | | | | | | | | | This patch was slightly flawed and allowed for zio->io_logical to potentially not be reinitialized for a new zio. This could lead to assertion failures in specific cases when debugging is enabled (--enable-debug) and I/O errors are encountered. It may also have caused problems when issues logical I/Os. Since we want to make sure this workaround can be easily removed in the future (when we have the real fix). I'm reverting this change and applying a new version of the patch which includes the zio->io_logical fix. This reverts commit 2c6d0b1e07b0265f0661ed7851d3aa8d3e75e7a9. Signed-off-by: Brian Behlendorf <[email protected]> Issue #602 Issue #604
* Add zio constructor/destructor  (Brian Behlendorf, 2012-03-07, 1 file, -15/+62)

  Add a standard zio constructor and destructor. Normally, this is done
  to reduce the cost of allocating a new structure by moving expensive
  operations such as memory allocations out of the allocation path.
  However, in this case none of the operations moved out of zio_create()
  were really very expensive.

  This change was principally made as a debug patch (and workaround) for
  a zio_destroy() race. There is good evidence that zio_create() is
  reinitializing a mutex which is really still in use by another thread.
  This would completely explain the observed symptoms in the issue
  report.

  This patch doesn't fix the root cause of the race, but it should make
  it less likely by only initializing the mutex once in the constructor.
  Also, this particular flaw might have gone unnoticed in other zfs
  implementations due to the specific implementation details of Linux
  ticket spinlocks.

  Once the real root cause is determined and resolved this change can be
  safely reverted. Until then this should help work around the issue.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #496
* Illumos #734: Use taskq_dispatch_ent() interface  (Garrett D'Amore, 2011-12-14, 1 file, -5/+16)

  It has been observed that some of the hottest locks are those of the
  zio taskqs. Contention on these locks can limit the rate at which zios
  are dispatched, which limits performance.

  This upstream change from Illumos uses a new interface to the taskqs
  which allows them to utilize a prealloc'ed taskq_ent_t. This removes
  the need to perform an allocation at dispatch time while holding the
  contended lock. This has the effect of improving system performance.

  Reviewed by: Albert Lee <[email protected]>
  Reviewed by: Richard Lowe <[email protected]>
  Reviewed by: Alexey Zaytsev <[email protected]>
  Reviewed by: Jason Brian King <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Adam Leventhal <[email protected]>
  Approved by: Gordon Ross <[email protected]>

  References to Illumos issue: https://www.illumos.org/issues/734

  Ported-by: Prakash Surya <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #482
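  The key detail is that the taskq entry is embedded in the zio itself
  (the io_tqent field), so dispatch needs no allocation while the taskq
  lock is held. A sketch, with zio_taskq_dispatch_sketch() as an
  illustrative stand-in for the real dispatch path:

      static void
      zio_taskq_dispatch_sketch(taskq_t *tq, zio_t *zio)
      {
              /*
               * io_tqent was prepared once, via taskq_init_ent(),
               * when the zio was created.  Dispatch just links it
               * onto the taskq; nothing is allocated here.
               */
              taskq_dispatch_ent(tq, (task_func_t *)zio_execute, zio,
                  TQ_SLEEP, &zio->io_tqent);
      }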
* Improve meta data performance  (Brian Behlendorf, 2011-11-03, 1 file, -8/+18)

  Profiling the system during meta data intensive workloads, such as
  creating/removing millions of files, revealed that the system was cpu
  bound. A large fraction of that cpu time was being spent waiting on the
  virtual address space spin lock.

  It turns out this was caused by certain heavily used kmem_caches being
  backed by virtual memory. By default a kmem_cache will dynamically
  determine the type of memory used based on the object size. For large
  objects virtual memory is usually preferable and for small objects
  physical memory is a better choice. See the spl_slab_alloc() function
  for a longer discussion on this.

  However, there is a certain amount of gray area when defining a 'large'
  object. For the following caches it turns out they were just over the
  line:

  * dnode_cache
  * zio_cache
  * zio_link_cache
  * zio_buf_512_cache
  * zfs_data_buf_512_cache

  Now, because we know there will be a lot of churn in these caches, and
  because we know the slabs will still be reasonably sized, we can safely
  request with the KMC_KMEM flag that the caches be backed with physical
  memory addresses. This entirely avoids the need to serialize on the
  virtual address space lock.

  As a bonus this also reduces our vmalloc usage, which will be good for
  32-bit kernels which have a very small virtual address space. It will
  also probably be good for interactive performance, since unrelated
  processes could also block on this same global lock. Finally, we may
  see less cpu time being burned in the arc_reclaim and txg_sync threads.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #258
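  A sketch of the one-line form of the change: the final cflags argument
  to kmem_cache_create() forces slab (physical) backing. The NULL
  arguments are placeholders; the real calls in zio.c pass the module's
  existing constructor and alignment arguments.

      zio_cache = kmem_cache_create("zio_cache", sizeof (zio_t),
          0, NULL, NULL, NULL, NULL, NULL, KMC_KMEM);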
* Illumos #1051: zfs should handle imbalanced luns  (George Wilson, 2011-08-01, 1 file, -1/+21)

  Today zfs tries to allocate blocks evenly across all devices. This
  means when devices are imbalanced zfs will use lots of CPU searching
  for space on devices which tend to be pretty full. It should instead
  fail quickly on the full LUNs and move on to devices which have more
  availability.

  Reviewed by: Eric Schrock <[email protected]>
  Reviewed by: Matt Ahrens <[email protected]>
  Reviewed by: Adam Leventhal <[email protected]>
  Reviewed by: Albert Lee <[email protected]>
  Reviewed by: Gordon Ross <[email protected]>
  Approved by: Garrett D'Amore <[email protected]>

  References to Illumos issue and patch:
  - https://www.illumos.org/issues/510
  - https://github.com/illumos/illumos-gate/commit/5ead3ed965

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #340
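  Illustratively, the effect in the allocator's rotor loop looks like the
  sketch below. This is not the actual metaslab.c diff: mg_allocatable
  stands for the per-group "worth searching" state the change introduces,
  and the metaslab_group_alloc() argument list is an assumption.

      mg = mc->mc_rotor;
      do {
              /* Fail fast on groups known to be nearly full. */
              if (mg->mg_allocatable) {
                      offset = metaslab_group_alloc(mg, psize, txg,
                          distance, dva, d);
                      if (offset != -1ULL)
                              return (offset);
              }
              mg = mg->mg_next;
      } while (mg != mc->mc_rotor);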
* Make txg_sync_thread zio's async  (Brian Behlendorf, 2011-05-31, 1 file, -4/+13)

  The majority of the recursive operations performed by the dsl are done
  either in the context of the txg_sync_thread or during pool import. It
  is these recursive operations which contribute greatly to the stack
  depth. When this recursion is coupled with a synchronous I/O in the
  same context, overflow becomes possible.

  Previously, to handle this case I have focused on keeping the
  individual stack frames as light as possible. This is a good idea as
  long as it can be done in a way which doesn't overly complicate the
  code. However, there is a better solution.

  If we treat all zio's issued by the txg_sync_thread as async, then we
  can use the txg_sync_thread stack for the recursive parts, and the
  zio_* threads for the I/O parts. This effectively doubles our available
  stack space, with the only drawback being a small delay to schedule the
  I/O. However, in practice the scheduling time is so much smaller than
  the actual I/O time this isn't an issue.

  Another benefit of making the zio async is that the zio pipeline is now
  parallel. That should mean for CPU intensive pipelines such as
  compression or dedup performance may be improved.

  With this change in place the worst case stack usage observed so far is
  6902 bytes. This is still higher than I'd like but significantly
  improved. Additional changes to specific functions should improve this
  further. This change allows us to revert commit 6656bf5 which did some
  horrible things to the recursive traverse_visitbp() callpath in the
  name of saving stack.
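  A sketch of the guard this implies in the zio issue path. The exact
  placement and surrounding code are assumed here; the idea is simply
  "if we are running as the txg_sync_thread, hand the zio to a taskq
  thread and return":

      dsl_pool_t *dp = spa_get_dsl(zio->io_spa);

      if (dp != NULL && dp->dp_tx.tx_sync_thread == curthread) {
              /* Recurse on a zio_issue thread's fresh stack instead. */
              zio_taskq_dispatch(zio, ZIO_TASKQ_ISSUE);
              return;
      }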
* Add missing ZFS tunables  (Brian Behlendorf, 2011-05-04, 1 file, -2/+5)

  This commit adds module options for all existing zfs tunables. Ideally
  the average user should never need to modify any of these values.
  However, in practice sometimes you do need to tweak these values for
  one reason or another. In those cases it's nice not to have to resort
  to rebuilding from source. All tunables are visible to modinfo and the
  list is as follows:

  $ modinfo module/zfs/zfs.ko
  filename:       module/zfs/zfs.ko
  license:        CDDL
  author:         Sun Microsystems/Oracle, Lawrence Livermore National Laboratory
  description:    ZFS
  srcversion:     8EAB1D71DACE05B5AA61567
  depends:        spl,znvpair,zcommon,zunicode,zavl
  vermagic:       2.6.32-131.0.5.el6.x86_64 SMP mod_unload modversions
  parm:           zvol_major:Major number for zvol device (uint)
  parm:           zvol_threads:Number of threads for zvol device (uint)
  parm:           zio_injection_enabled:Enable fault injection (int)
  parm:           zio_bulk_flags:Additional flags to pass to bulk buffers (int)
  parm:           zio_delay_max:Max zio millisec delay before posting event (int)
  parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (bool)
  parm:           zil_replay_disable:Disable intent logging replay (int)
  parm:           zfs_nocacheflush:Disable cache flushes (bool)
  parm:           zfs_read_chunk_size:Bytes to read per chunk (long)
  parm:           zfs_vdev_max_pending:Max pending per-vdev I/Os (int)
  parm:           zfs_vdev_min_pending:Min pending per-vdev I/Os (int)
  parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
  parm:           zfs_vdev_time_shift:Deadline time shift for vdev I/O (int)
  parm:           zfs_vdev_ramp_rate:Exponential I/O issue ramp-up rate (int)
  parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
  parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
  parm:           zfs_vdev_scheduler:I/O scheduler (charp)
  parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
  parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
  parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
  parm:           zfs_scrub_limit:Max scrub/resilver I/O per leaf vdev (int)
  parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
  parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
  parm:           zfs_zevent_len_max:Max event queue length (int)
  parm:           zfs_zevent_cols:Max event column width (int)
  parm:           zfs_zevent_console:Log events to the console (int)
  parm:           zfs_top_maxinflight:Max I/Os per top-level (int)
  parm:           zfs_resilver_delay:Number of ticks to delay resilver (int)
  parm:           zfs_scrub_delay:Number of ticks to delay scrub (int)
  parm:           zfs_scan_idle:Idle window in clock ticks (int)
  parm:           zfs_scan_min_time_ms:Min millisecs to scrub per txg (int)
  parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
  parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
  parm:           zfs_no_scrub_io:Set to disable scrub I/O (bool)
  parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (bool)
  parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
  parm:           zfs_no_write_throttle:Disable write throttling (int)
  parm:           zfs_write_limit_shift:log2(fraction of memory) per txg (int)
  parm:           zfs_txg_synctime_ms:Target milliseconds between tgx sync (int)
  parm:           zfs_write_limit_min:Min tgx write limit (ulong)
  parm:           zfs_write_limit_max:Max tgx write limit (ulong)
  parm:           zfs_write_limit_inflated:Inflated tgx write limit (ulong)
  parm:           zfs_write_limit_override:Override tgx write limit (ulong)
  parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
  parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
  parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
  parm:           zfetch_block_cap:Max number of blocks to fetch at a time (uint)
  parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
  parm:           zfs_pd_blks_max:Max number of blocks to prefetch (int)
  parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
  parm:           zfs_arc_min:Min arc size (ulong)
  parm:           zfs_arc_max:Max arc size (ulong)
  parm:           zfs_arc_meta_limit:Meta limit for arc size (ulong)
  parm:           zfs_arc_reduce_dnlc_percent:Meta reclaim percentage (int)
  parm:           zfs_arc_grow_retry:Seconds before growing arc size (int)
  parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim) (int)
  parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p (int)
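  Each tunable is exported with the standard Linux module macros. A
  minimal sketch for one real tunable from the list above; the default
  value and permission bits shown here are assumptions:

      int zio_delay_max = 30000;      /* milliseconds */

      module_param(zio_delay_max, int, 0644);
      MODULE_PARM_DESC(zio_delay_max,
          "Max zio millisec delay before posting event");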
* Use KM_PUSHPAGE instead of KM_SLEEP  (Brian Behlendorf, 2011-03-22, 1 file, -4/+4)

  It used to be the case that all KM_SLEEP allocations were GFP_NOFS.
  Unfortunately this often resulted in the kernel being unable to reclaim
  the ARC, inode, and dentry caches in a timely manner. The fix was to
  make KM_SLEEP a GFP_KERNEL allocation in the SPL.

  However, this increases the possibility of deadlocking the system on a
  zfs write thread. If a zfs write thread attempts to perform an
  allocation it may trigger synchronous reclaim. This reclaim may attempt
  to flush dirty data/inodes to disk to free memory. Unfortunately, this
  write cannot finish because the write thread which would handle it is
  holding the previous transaction open. Deadlock.

  To avoid this, all allocations in the zfs write thread path must use
  KM_PUSHPAGE, which prohibits synchronous reclaim for that thread. In
  this way forward progress is ensured. The risk with this change is that
  I missed updating an allocation for the write threads, leaving an
  increased possibility of deadlock. If any deadlocks remain they will be
  unlikely, but we'll have to make sure they all get fixed.
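  The substitution itself is mechanical; what matters is where it is
  applied (any allocation reachable from a write thread). A generic
  illustration, with size and buf standing in for whatever is being
  allocated:

      void *buf;

      /*
       * Before: KM_SLEEP may enter synchronous reclaim, which can
       * recurse into a write this thread can never complete.
       */
      buf = kmem_alloc(size, KM_SLEEP);

      /* After: still waits for memory, but never enters reclaim. */
      buf = kmem_alloc(size, KM_PUSHPAGE);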
* Shorten zio_* thread names  (Brian Behlendorf, 2010-11-08, 1 file, -2/+1)

  Linux kernel thread names are expected to be short. This change
  shortens the zio thread names to 10 characters, leaving a few
  characters to append the /<cpuid> to which the thread is bound. For
  example: z_wr_iss/0.
* Initial zio delay timing  (Brian Behlendorf, 2010-10-12, 1 file, -0/+15)

  While there is no right maximum timeout for a disk IO, we can start
  laying the ground work to measure how long they do take in practice.
  This change simply measures the IO time and, if it exceeds 30s, an
  event is posted for 'zpool events'.

  This value was carefully selected because for sd devices it implies
  that at least one timeout (SD_TIMEOUT) has occurred. Unfortunately,
  even with FAILFAST set we may retry a request and not get an error.
  This behavior is strongly dependent on the device driver and how it is
  hooked into the scsi error handling stack. However, by setting the
  limit at 30s we can log the event even if no error was returned.

  Slightly longer term we can start recording these delays, perhaps as a
  simple power-of-two histogram. This histogram could then be reported as
  part of the 'zpool status' command when given a command line option.

  None of this code changes the internal behavior of ZFS. Currently it is
  simply for reporting excessively long delays.
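  The mechanics are small enough to sketch. The placement in
  zio_vdev_io_start()/zio_vdev_io_done() and the exact time units are
  assumptions here; io_delay, zio_delay_max, and FM_EREPORT_ZFS_DELAY
  are the names this change introduces:

      /* On issue, in zio_vdev_io_start(): */
      zio->io_delay = jiffies;

      /* On completion, in zio_vdev_io_done(): */
      zio->io_delay = jiffies - zio->io_delay;

      /* In zio_done(), post an event visible via 'zpool events': */
      if (zio->io_delay >= msecs_to_jiffies(zio_delay_max))
              zfs_ereport_post(FM_EREPORT_ZFS_DELAY, zio->io_spa,
                  zio->io_vd, zio, 0, 0);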
* Add linux kernel module support  (Brian Behlendorf, 2010-08-31, 1 file, -2/+21)

  Set up linux kernel module support; this includes:
  - zfs context for kernel/user
  - kernel module build system integration
  - kernel module macros
  - kernel module symbol export
  - kernel module options

  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix stack zio_execute()  (Ned Bass, 2010-08-31, 1 file, -4/+22)

  Implement zio_execute() as a wrapper around the static function
  __zio_execute() so that we can force __zio_execute() to be inlined.
  This reduces stack overhead, which is important because __zio_execute()
  is called recursively in several zio code paths. zio_execute() itself
  cannot be inlined because it is externally visible.

  Signed-off-by: Brian Behlendorf <[email protected]>
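  The pattern, as a sketch (the real functions carry the full pipeline
  loop, elided here):

      /* Force gcc to inline the recursive body. */
      static inline void __zio_execute(zio_t *zio)
          __attribute__((always_inline));

      static inline void
      __zio_execute(zio_t *zio)
      {
              /* pipeline stage loop; recurses via stage callbacks */
      }

      /* Externally visible entry point; this one stays out of line. */
      void
      zio_execute(zio_t *zio)
      {
              __zio_execute(zio);
      }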
* Fix stack zio_done()  (Brian Behlendorf, 2010-08-31, 1 file, -33/+30)

  Eliminated local variables pointing to members of the zio struct. Just
  refer to the struct members directly. This saved about 32 bytes per
  call, but this function can be called recursively up to 19 levels deep,
  so we potentially save up to 608 bytes.

  Signed-off-by: Brian Behlendorf <[email protected]>
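  An illustrative before/after; the real zio_done() touched many more
  members, and io_vd/io_bp are real zio_t fields used only as examples:

      /* Before: each cached pointer is another stack slot. */
      static boolean_t
      zio_done_check_before(zio_t *zio)
      {
              vdev_t *vd = zio->io_vd;
              blkptr_t *bp = zio->io_bp;

              return (vd != NULL && bp != NULL);
      }

      /* After: same logic, nothing for the compiler to spill. */
      static boolean_t
      zio_done_check_after(zio_t *zio)
      {
              return (zio->io_vd != NULL && zio->io_bp != NULL);
      }

  In a function this small the pointers would live in registers anyway;
  the savings show up in a large function like zio_done(), where many
  such locals stay live across calls.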
* Fix stack inline  (Brian Behlendorf, 2010-08-31, 1 file, -1/+2)

  Decrease stack usage for various call paths by forcing certain
  functions to be inlined. By inlining the functions the overhead of a
  new stack frame is removed at the cost of increased code size.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix zio_taskq_dispatch to use TQ_NOSLEEP  (Brian Behlendorf, 2010-08-31, 1 file, -3/+4)

  The zio_taskq_dispatch() function may be called at interrupt time and
  it is critical that we never sleep. Additionally, wrap taskq_dispatch()
  in a while loop because it may fail. This is not optimal but is OK for
  now.

  Signed-off-by: Brian Behlendorf <[email protected]>
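  A sketch of the resulting dispatch loop; tq stands in for whichever
  zio taskq is being targeted:

      taskqid_t id;

      do {
              /* TQ_NOSLEEP: never block, even at interrupt time. */
              id = taskq_dispatch(tq, (task_func_t *)zio_execute,
                  zio, TQ_NOSLEEP);
      } while (id == 0);      /* 0 means the dispatch failed; retry */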
* Fix gcc unused variable warnings  (Brian Behlendorf, 2010-08-31, 1 file, -2/+2)

  Gcc -Wall warn: 'unused variable'

  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix gcc c90 compliance warnings  (Brian Behlendorf, 2010-08-27, 1 file, -19/+32)

  Fix non-c90 compliant code; for the most part these changes simply deal
  with where a particular variable is declared. Under c90 it must always
  be done at the very start of a block.

  Signed-off-by: Brian Behlendorf <[email protected]>
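  For example, loop counters move from the for statement to the top of
  the enclosing block (a generic illustration, not code from this diff):

      static int
      sum(const int *v, int n)
      {
              int total = 0;
              int i;                  /* c90: declared before any statement */

              for (i = 0; i < n; i++) /* not "for (int i = 0; ...)" */
                      total += v[i];

              return (total);
      }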
* Update to onnv_147  (Brian Behlendorf, 2010-08-26, 1 file, -1/+22)

  This is the last official OpenSolaris tag before the public development
  tree was closed.
* Update core ZFS code from build 121 to build 141.  (Brian Behlendorf, 2010-05-28, 1 file, -208/+782)
* Rebase master to b117  (Brian Behlendorf, 2009-07-02, 1 file, -87/+107)
* Rebase master to b108  (Brian Behlendorf, 2009-02-18, 1 file, -141/+176)
* Rebase master to b105  (Brian Behlendorf, 2009-01-15, 1 file, -3/+27)
* Move the world out of /zfs/ and separate out module build tree  (Brian Behlendorf, 2008-12-11, 1 file, -0/+2273)