path: root/module/spl
Commit message (Author, Date, Files, Lines)
* Allow spawning a new thread for TQ_NOQUEUE dispatch with dynamic taskq (Tim Chase, 2016-03-17, 1 file, -4/+18)

  When a TQ_NOQUEUE dispatch is done on a dynamic taskq, allow another
  thread to be spawned. This causes TQ_NOQUEUE to behave similarly to
  the way it behaves with non-dynamic taskqs.

  Add support for TQ_NOQUEUE to taskq_dispatch_ent().

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #530

* Add rw_tryupgrade() (Brian Behlendorf, 2016-03-10, 1 file, -60/+0)

  This implementation of rw_tryupgrade() behaves slightly differently
  from its counterparts on other platforms. It drops the RW_READER lock
  and then acquires the RW_WRITER lock, leaving a small window where no
  lock is held. On other platforms the lock is never released during
  the upgrade process. This is necessary under Linux because the kernel
  does not provide an upgrade function.

  There are currently no callers in the ZFS code where this change in
  behavior is a problem. In fact, in most cases the code is already
  written such that if the upgrade fails the RW_READER lock is dropped
  and the caller blocks waiting to acquire the lock as RW_WRITER.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Matthew Thode <[email protected]>
  Closes zfsonlinux/zfs#4388
  Closes #534

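  A minimal sketch of the approach described above (illustrative only,
  not the in-tree code; it assumes the krwlock is backed by a Linux
  rw_semaphore, and on failure it retakes the reader lock, which may
  block briefly):

    #include <linux/rwsem.h>

    static int
    rw_tryupgrade_sketch(struct rw_semaphore *rwsem)
    {
            up_read(rwsem);                 /* window with no lock held */
            if (down_write_trylock(rwsem))
                    return (1);             /* upgraded to RW_WRITER */
            down_read(rwsem);               /* failed, fall back to reader */
            return (0);
    }
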
* random_get_pseudo_bytes() need not provide cryptographic strength entropy (Richard Yao, 2016-02-17, 1 file, -0/+148)

  Perf profiling of dd on a zvol revealed that my system spent 3.16% of
  its time in random_get_pseudo_bytes(). No SPL consumers need
  cryptographic strength entropy, so we can reduce our overhead by
  changing the implementation to use a fast PRNG.

  The Linux kernel did not export a suitable PRNG function until it
  exported get_random_int() in Linux 3.10. While we could implement an
  autotools check so that we use it when it is available, or even try
  to access the symbol on older kernels where it is not exported (using
  the fact that it is exported on newer ones as justification), we can
  instead implement our own pseudo-random data generator. For this
  purpose, I have written one based on a 128-bit pseudo-random number
  generator proposed in a paper by Sebastiano Vigna that itself was
  based on work by the late George Marsaglia.

  http://vigna.di.unimi.it/ftp/papers/xorshiftplus.pdf

  Profiling the same benchmark with an earlier variant of this patch
  that used a slightly different generator (roughly the same number of
  instructions) by the same author showed that time spent in
  random_get_pseudo_bytes() dropped to 0.06%. That is a factor of 50
  improvement. This particular generator algorithm is also well known
  to be fast:

  http://xorshift.di.unimi.it/#speed

  The benchmark numbers there state that it runs at 1.12ns/64-bits, or
  7.14 GB/s of throughput, on an Intel Core i7-4770 in what is
  presumably a single-threaded context. Using it in
  `random_get_pseudo_bytes()` in the manner I have will probably not
  reach that level of performance, but it should be fairly high and
  many times higher than the Linux `get_random_bytes()` function that
  we use now, which runs at 16.3 MB/s on my Intel Xeon E3-1276v3
  processor when measured by using dd on /dev/urandom. Also, putting
  this generator's seed into per-CPU variables allows us to eliminate
  overhead from both spin locks and CPU memory barriers, which is NUMA
  friendly.

  We could have alternatively modified consumers to use something like
  `gethrtime() % 3` as suggested by both Matthew Ahrens and Tim Chase,
  but that has a few potential problems that this approach avoids:

  1. Switching to `gethrtime() % 3` in hot code paths today requires
     diverging from illumos-gate and does nothing about potential
     future patches from illumos-gate that call our slow
     `random_get_pseudo_bytes()` in different hot code paths.
     Reimplementing `random_get_pseudo_bytes()` with a per-CPU PRNG
     avoids both of those things entirely, which means less work for us
     in the future.

  2. Looking at the code that implements `gethrtime()`, I think it is
     unlikely to be faster than this per-CPU PRNG implementation of
     `random_get_pseudo_bytes()`. It would be best to go with something
     fast now so that there is no point in revisiting this from a
     performance perspective.

  3. `gethrtime() % 3` can vary in behavior from system to system based
     on kernel version, architecture and clock source. In comparison,
     this per-CPU PRNG is about ~40 lines of code in
     `random_get_pseudo_bytes()` that should behave consistently across
     all systems regardless of kernel version, system architecture or
     machine clock source. It is unlikely that we would ever need to
     revisit this per-CPU PRNG while the same cannot be said for
     `gethrtime() % 3`.

  4. `gethrtime()` uses CPU memory barriers and maybe atomic
     instructions depending on the clock source, so replacing
     `random_get_pseudo_bytes()` with `gethrtime()` in hot code paths
     could still require a future person working on NUMA scalability to
     reimplement it anyway, while this per-CPU PRNG would not by virtue
     of using neither CPU memory barriers nor atomic instructions. Note
     that I did not check various clock sources for the presence of
     atomic instructions. There is simply too much code to read and,
     given the drawbacks versus this per-CPU PRNG, there is no point in
     being certain.

  5. I have heard of instances where poor quality pseudo-random numbers
     caused problems for HPC code in ways that took more than a year to
     identify and were remedied by switching to a higher quality source
     of pseudo-random numbers. While filesystems are different from HPC
     code, I do not think it is impossible for us to have instances
     where poor quality pseudo-random numbers can cause problems.
     Opting for a well studied PRNG algorithm that passes tests for
     statistical randomness, rather than changing callers to use
     `gethrtime() % 3`, bypasses the need to think about both whether
     poor quality pseudo-random numbers can cause problems and the
     statistical quality of numbers from `gethrtime() % 3`.

  6. `gethrtime()` calls `getrawmonotonic()`, which uses seqlocks. This
     is probably not a huge issue, but anyone using kgdb would never be
     able to step through a seqlock critical section, which is not a
     problem either now or with the per-CPU PRNG:

     https://en.wikipedia.org/wiki/Seqlock

  The only downside that I can see is that this code's memory
  requirement is O(N) where N is NR_CPUS, versus the current code and
  `gethrtime() % 3`, which are O(1), but that should not be a problem.
  The seeds will use 64KB of memory at the high end (i.e. `NR_CPUS ==
  4096`) and 16 bytes of memory at the low end (i.e. `NR_CPUS == 1`).
  The code itself should only require a few hundred bytes of text,
  especially since `spl_rand_jump()` should be inlined into
  `spl_random_init()`, which should be removed during early boot as
  part of "Freeing unused kernel memory". Either way, the memory
  requirements are minuscule.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #372

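  For reference, the xorshift128+ step as given in the linked paper
  (the in-tree constants and the per-CPU seeding are not shown here and
  may differ; this is the paper's reference form, not the actual SPL
  code):

    /* s[2] is the generator state and must be seeded non-zero. */
    static inline uint64_t
    xorshift128plus_next(uint64_t s[2])
    {
            uint64_t s1 = s[0];
            const uint64_t s0 = s[1];

            s[0] = s0;
            s1 ^= s1 << 23;                                 /* a */
            s[1] = s1 ^ s0 ^ (s1 >> 18) ^ (s0 >> 5);        /* b, c */
            return (s[1] + s0);
    }
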
* Allow kicking a taskq to spawn more threads (Chunwei Chen, 2016-02-05, 1 file, -0/+60)

  This patch adds a module parameter, spl_taskq_kick. When a non-zero
  value is written to it, all taskqs are scanned; if a taskq contains a
  task that has been pending for more than 5 seconds, that taskq is
  forced to spawn a new thread. This is intended as an emergency
  recovery from deadlock, not a general solution.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #529

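  A sketch of how such a parameter hook might look (hypothetical names;
  the walk over the taskq list is elided, and the set-callback
  signature varies slightly across kernel versions):

    static unsigned int spl_taskq_kick;

    static int
    param_set_taskq_kick(const char *val, const struct kernel_param *kp)
    {
            int ret = param_set_uint(val, kp);

            if (ret == 0 && spl_taskq_kick != 0) {
                    /* Walk the global taskq list and force
                     * taskq_thread_spawn() on any queue whose oldest
                     * pending task has waited longer than 5 seconds. */
            }
            return (ret);
    }
    module_param_call(spl_taskq_kick, param_set_taskq_kick,
        param_get_uint, &spl_taskq_kick, 0644);

  With a 0644 permission the scan could then be triggered from
  userspace by writing 1 to /sys/module/spl/parameters/spl_taskq_kick.
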
* Remove RLIM64_INFINITY assert in vn_rdwr() (Brian Behlendorf, 2016-01-23, 1 file, -1/+0)

  Previous commit be29e6a updated kobj_read_file() so it no longer
  unconditionally passes RLIM64_INFINITY. The vn_rdwr() function needs
  to be updated accordingly.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #513

* kobj_read_file: Return -1 on vn_rdwr() error (Richard Yao, 2016-01-23, 1 file, -3/+8)

  I noticed that the SPL implementation of kobj_read_file() is not
  correct after comparing it with the userland implementation of
  kobj_read_file() in zfsonlinux/zfs#4104.

  Note that we no longer pass RLIM64_INFINITY with this, but our
  vn_rdwr() implementation did not support it anyway, so there is no
  difference.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #513

* Use tsd to store tq for taskq_member (Chunwei Chen, 2016-01-20, 3 files, -56/+61)

  To prevent taskq_member() from holding tq_lock and doing a linear
  search, and thus causing contention, we store the taskq pointer to
  which a thread belongs in tsd. This way taskq_member() does not need
  to touch tq_lock, and tsd uses a per-slot spinlock, so contention
  should be greatly reduced.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #500
  Closes #504
  Closes #505

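  A sketch of the idea (illustrative; it assumes a tsd key registered
  at module init and a helper such as tsd_get_by_thread() for reading
  another thread's slot):

    static uint_t taskq_tsd;        /* key registered at module init */

    static int
    taskq_thread(void *args)        /* worker thread entry point */
    {
            taskq_t *tq = ((taskq_thread_t *)args)->tqt_tq;

            tsd_set(taskq_tsd, tq); /* remember our owning taskq */
            /* ... task servicing loop ... */
            tsd_set(taskq_tsd, NULL);
            return (0);
    }

    int
    taskq_member(taskq_t *tq, kthread_t *t)
    {
            /* O(1) pointer compare, no tq_lock required */
            return (tq == (taskq_t *)tsd_get_by_thread(taskq_tsd, t));
    }
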
* Don't hold mutex until release cv in cv_wait (Chunwei Chen, 2016-01-12, 1 file, -15/+40)

  If a thread is holding the mutex when doing cv_destroy(), it might
  end up waiting for a thread in cv_wait(). The waiter would wake up
  trying to acquire the same mutex, causing a deadlock.

  We solve this by moving the mutex_enter() to the bottom of cv_wait(),
  so that the waiter releases the cv first, allowing cv_destroy() to
  succeed and have a chance to free the mutex. This creates a race
  condition on cv_mutex; we use xchg() to set and check it to ensure we
  won't be harmed by the race. As a result, the cv_mutex debugging
  becomes best-effort.

  Also, the change reveals a race, which was unlikely before, where we
  call mutex_destroy() while test threads are still holding the mutex.
  We use kthread_stop() to make sure the threads have exited before
  mutex_destroy().

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue zfsonlinux/zfs#4166
  Issue zfsonlinux/zfs#4106

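  The reordering, sketched (illustrative; SPL's kcondvar_t keeps a wait
  queue and waiter counters, with names approximated here):

    static void
    cv_wait_sketch(kcondvar_t *cvp, kmutex_t *mp)
    {
            DEFINE_WAIT(wait);

            prepare_to_wait_exclusive(&cvp->cv_event, &wait,
                TASK_UNINTERRUPTIBLE);
            atomic_inc(&cvp->cv_waiters);
            mutex_exit(mp);                 /* drop mutex, then sleep */
            schedule();
            finish_wait(&cvp->cv_event, &wait);
            if (atomic_dec_and_test(&cvp->cv_waiters))
                    wake_up(&cvp->cv_destroy);  /* unblock cv_destroy() */
            mutex_enter(mp);                /* moved to the very bottom */
    }
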
* Use spl_fstrans_mark instead of memalloc_noio_save (Chunwei Chen, 2015-12-18, 4 files, -37/+5)

  For earlier versions of the kernel with memalloc_noio_save, it only
  turns off __GFP_IO but leaves __GFP_FS untouched during direct
  reclaim. This would cause threads to direct reclaim into ZFS and
  cause deadlock.

  Instead, we should stick to using spl_fstrans_mark. Since we
  explicitly turn off both __GFP_IO and __GFP_FS before allocation, it
  will work on every version of the kernel. This impacts kernel
  versions 3.9-3.17; see upstream kernel commit torvalds/linux@934f307
  for reference.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #515
  Issue zfsonlinux/zfs#4111

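  A sketch of the mark/convert pair (simplified; the real SPL cookie
  handling is slightly more involved):

    static inline unsigned long
    spl_fstrans_mark_sketch(void)
    {
            unsigned long cookie = current->flags & PF_FSTRANS;

            current->flags |= PF_FSTRANS;   /* inside an fs transaction */
            return (cookie);
    }

    static inline gfp_t
    kmem_flags_sketch(gfp_t flags)
    {
            /* strip both flags, on every kernel version */
            if (current->flags & PF_FSTRANS)
                    flags &= ~(__GFP_IO | __GFP_FS);
            return (flags);
    }
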
* Provide kstat for taskqs (Tim Chase, 2015-12-16, 2 files, -0/+269)

  This patch provides 2 new kstats to display task queues:

  /proc/spl/taskqs-all - Display all task queues
  /proc/spl/taskqs     - Display only "active" task queues

  A task queue is considered to be "active" if it currently has active
  (running) threads or if any of its pending, priority, delay or waitq
  lists are not empty.

  If the task queue has running threads, each thread function's address
  (symbolically, if possible) and its argument are displayed. If the
  task queue has a non-empty list of pending, priority or delayed task
  queue entries (taskq_ent_t), each entry's thread function address and
  argument are displayed. If the task queue has any waiters, each
  waiting task's pid is displayed.

  Note: This patch also updates some comments in taskq.h which referred
  to "taskq_t" when they should have referred to "taskq_ent_t".

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #491

* Fix cstyle issues in spl-taskq.c and taskq.h (Brian Behlendorf, 2015-12-11, 1 file, -34/+41)

  This patch only addresses the issues identified by the style checker.
  It contains no functional changes.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Don't use tq->tq_lock_flags (Chunwei Chen, 2015-12-11, 1 file, -61/+62)

  The flags argument in spin_lock_irqsave() is modified outside of the
  spin lock context, so we cannot use a shared variable like
  tq->tq_lock_flags for it. This patch removes it and uses a local
  variable for the flags.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #506

* Subclass tq_lock to eliminate a lockdep warning (Olaf Faaland, 2015-12-11, 1 file, -21/+49)

  When taskq_dispatch() calls taskq_thread_spawn() to create a new
  thread for a taskq, linux lockdep warns of possible recursive
  locking. This is a false positive.

  One such call chain is as follows, when a taskq needs more threads:

  taskq_dispatch -> taskq_thread_spawn -> taskq_dispatch

  The initial taskq_dispatch() holds tq_lock on the taskq that needed
  more worker threads. The later call into taskq_dispatch() takes
  dynamic_taskq->tq_lock. Without subclassing, lockdep believes these
  could potentially be the same lock and complains. A similar case
  occurs when taskq_dispatch() then calls task_alloc().

  This patch uses spin_lock_irqsave_nested() when taking tq_lock, with
  one of two new lock subclasses:

  subclass          taskq
  TQ_LOCK_DYNAMIC   dynamic_taskq
  TQ_LOCK_GENERAL   any other

  Signed-off-by: Olaf Faaland <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #480

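  Sketched, the subclassed acquisition looks roughly like this (field
  names approximated):

    typedef enum tq_lock_role {
            TQ_LOCK_GENERAL = 0,    /* any ordinary taskq */
            TQ_LOCK_DYNAMIC = 1,    /* the dynamic_taskq itself */
    } tq_lock_role_t;

    /* everywhere tq_lock is taken; the subclass distinguishes the
     * locks for lockdep, so the nested taskq_dispatch() chain above
     * no longer looks recursive */
    spin_lock_irqsave_nested(&tq->tq_lock, flags, tq->tq_lock_class);
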
* Revert "Make taskq_member() use ->journal_info"Brian Behlendorf2015-12-081-3/+34
| | | | | | | | This reverts commit a430c11f0b1ef16ca5edf3059e4082709277376c. Using journal_info like this can cause a BUG at kernel fs/jbd2/transaction.c:425! Signed-off-by: Brian Behlendorf <[email protected]> Issue #500
* Make taskq_member() use ->journal_info (Richard Yao, 2015-12-08, 1 file, -34/+3)

  The ->journal_info pointer in the task_struct is reserved for use by
  filesystems, and because the kernel can have multiple file systems on
  the same stack due to direct reclaim, each filesystem that touches
  ->journal_info in a callback function will save the value at the
  start of its frame and restore it at the end of its frame. This
  allows us to safely use ->journal_info to store a pointer to the
  taskq's struct in taskq threads so that ZFS code paths can detect the
  presence of a taskq.

  This could break if the ZFS code were to use taskq_member() from the
  context of direct reclaim. However, there are no such uses of it in
  that manner, so this is safe.

  This eliminates an O(N) list traversal under a spinlock with an O(1)
  unlocked pointer comparison.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: tuxoko <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #500

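  The approach amounted to the following (a sketch of the
  since-reverted idea; see the revert above):

    /* worker thread entry: tag ourselves with the owning taskq */
    current->journal_info = tq;

    int
    taskq_member(taskq_t *tq, kthread_t *t)
    {
            /* O(1), unlocked */
            return (t->journal_info == tq);
    }
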
* Fix race between getf() and areleasef() (Richard Yao, 2015-12-03, 1 file, -0/+13)

  If a vnode is released asynchronously through areleasef(), it is
  possible for the user process to reuse the file descriptor before
  areleasef() is called. When this happens, getf() will return a stale
  reference, any operations in the kernel on that file descriptor will
  fail (as it is closed) and the operations meant for that fd will
  never occur from userspace's perspective.

  We correct this by detecting this condition in getf(), doing a putf()
  on the old file handle, updating the file descriptor and proceeding
  as if everything was fine. When the areleasef() is done, it will
  harmlessly decrement the reference counter on the Illumos file
  handle.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #492

* spl-kmem-cache: include linux/prefetch.h for prefetchw() (Dimitri John Ledkov, 2015-12-02, 1 file, -0/+1)

  This is needed for architectures that do not have a builtin
  prefetchw().

  Signed-off-by: Dimitri John Ledkov <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #502

* Remove superfluous `newline` character (loli10K, 2015-11-13, 1 file, -1/+1)

  Remove superfluous `newline` character from the
  spl_kmem_cache_magazine_size module parameter description.

  Signed-off-by: loli10K <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #499

* Fix taskq dynamic spawning (tuxoko, 2015-11-13, 1 file, -14/+11)

  Currently taskq_dispatch() will only spawn a new thread on the
  condition that the caller is also a member of the taskq. However,
  under this condition it can still deadlock: a task on tq1 may be
  waiting for another thread, which is itself trying to dispatch a task
  on tq1. So this patch removes the check. For example, when you do:

  zfs send pp/fs0@001 | zfs recv pp/fs0_copy

  this would easily deadlock before this patch.

  Also, move the seq_task check from taskq_thread_spawn() to
  taskq_thread() because it's not used by the caller from
  taskq_dispatch().

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #496

* Don't call kmem_cache_shrink from shrinker (Chunwei Chen, 2015-11-11, 1 file, -6/+1)

  The Linux slab will automatically free empty slabs when the number of
  partial slabs is over min_partial, so we don't need to explicitly
  shrink it. In fact, calling kmem_cache_shrink() from a shrinker
  causes heavy contention on kmem_cache_node->list_lock, to the point
  that it might cause __slab_free to livelock (see
  zfsonlinux/zfs#3936).

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes zfsonlinux/zfs#3936
  Closes #487

* Fix CPU hotplug (Brian Behlendorf, 2015-10-13, 1 file, -8/+7)

  Allocate a kmem cache magazine for every possible CPU which might be
  added to the system. This ensures that when one of these CPUs is
  enabled it can be safely used immediately.

  For many systems the number of online CPUs is identical to the number
  of possible CPUs, so this does not imply an increased memory
  footprint. In fact, dynamically allocating the array of magazine
  pointers instead of using the worst case NR_CPUS can end up
  decreasing our memory footprint.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Closes #482

* Fix PAX Patch/Grsec SLAB_USERCOPY panic (Brian Behlendorf, 2015-09-28, 1 file, -1/+11)

  Support grsecurity/PaX kernel configurations where
  CONFIG_PAX_USERCOPY_SLABS is enabled. When this kernel option is
  enabled, slabs which are used to copy between user and kernel space
  must be created with SLAB_USERCOPY.

  Stock Linux kernels do not have a SLAB_USERCOPY definition, so this
  causes no change in behavior for non-PAX-enabled kernels.

  Verified-by: Wuffleton <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #2977
  Issue #3796

* Disable direct reclaim in taskq worker threads on Linux 3.9+ (Richard Yao, 2015-09-09, 1 file, -0/+4)

  Illumos does not have direct reclaim, and code run inside taskq
  worker threads is not designed to deal with it. Allowing direct
  reclaim inside a worker thread can therefore deadlock. We set
  PF_MEMALLOC_NOIO through memalloc_noio_save() to indicate to the
  kernel's reclaim code that we are inside a context where memory
  allocations cannot be allowed to block on filesystem activity.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#1274
  Issue zfsonlinux/zfs#2390
  Closes #474

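  In sketch form (Linux >= 3.9 provides this save/restore pair; note
  the later commit above which replaced this approach with
  spl_fstrans_mark()):

    static int
    taskq_thread_sketch(void *args)
    {
            unsigned int noio;

            noio = memalloc_noio_save();    /* sets PF_MEMALLOC_NOIO */
            /* ... task servicing loop: reclaim entered from here
             * cannot recurse into filesystem or block IO paths ... */
            memalloc_noio_restore(noio);
            return (0);
    }
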
* Create a new thread during recursive taskq dispatch if necessary (Tim Chase, 2015-09-01, 1 file, -3/+28)

  When dynamic taskq is enabled and all threads for a taskq are
  occupied, a recursive dispatch can cause a deadlock if the calling
  thread depends on the recursively-dispatched thread for its return
  condition. This patch attempts to create a new thread for recursive
  dispatch when none are available.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #472

* Revert "Create a new thread during recursive taskq dispatch if necessary"Brian Behlendorf2015-08-311-22/+0
| | | | | | | | | | | This reverts commit 076821e due to a locking issue uncovered in subsequent testing. An ASSERT is hit due to tq->tq_nspawn being updated outside the lock. The patch will need to be reworked. VERIFY3(0 == tq->tq_nspawn) failed (0 == -1) Signed-off-by: Brian Behlendorf <[email protected]> Issue #472
* Create a new thread during recursive taskq dispatch if necessary (Tim Chase, 2015-08-31, 1 file, -0/+22)

  When dynamic taskq is enabled and all threads for a taskq are
  occupied, a recursive dispatch can cause a deadlock if the calling
  thread depends on the recursively-dispatched thread for its return
  condition. This patch attempts to create a new thread for recursive
  dispatch when none are available.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #472

* Invert minclsyspri and maxclsyspri (Brian Behlendorf, 2015-07-28, 2 files, -4/+3)

  On Linux the meaning of a process's priority is inverted with respect
  to illumos. High values on Linux indicate a _low_ priority while high
  values on illumos indicate a _high_ priority.

  In order to preserve the logical meaning of the minclsyspri and
  maxclsyspri macros when they are used by the illumos wrapper
  functions, their values have been inverted. This way, when changes
  are merged from upstream illumos, we won't need to remember to invert
  the macros, which could otherwise lead to confusion.

  Note this change also reverts some of the priority changes in prior
  commit 62aa81a. The rationale is as follows:

  spl_kmem_cache    - High priority may result in blocked memory allocs
  spl_system_taskq  - May perform I/O for file backed VDEVs
  spl_dynamic_taskq - New taskq threads should be spawned promptly

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Issue zfsonlinux/zfs#3607

* Remove skc_ref from alloc/free paths (Brian Behlendorf, 2015-07-24, 1 file, -9/+2)

  As described in spl_kmem_cache_destroy(), the ->skc_ref count was
  added to address the case of a cache reap or grow racing with a
  destroy. It is not strictly needed in the alloc/free paths because
  consumers of the cache are responsible for not using it while it's
  being destroyed. Removing this code is desirable because there is
  some evidence that contention on this atomic negatively impacts
  performance on large-scale NUMA systems.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #463

* Add defclsyspri macro (Brian Behlendorf, 2015-07-23, 2 files, -4/+11)

  Add a new defclsyspri macro which can be used to request the default
  Linux scheduler priority. Neither minclsyspri nor maxclsyspri maps to
  the default Linux kernel thread priority. This makes it awkward to
  create taskqs which run with the same priority as the rest of the
  kernel threads on the system, which can lead to performance issues.

  All SPL callers which previously used minclsyspri or maxclsyspri have
  been changed to use defclsyspri. The vast majority of callers were
  part of the test suite, which won't have an external impact. In the
  few places where it could impact performance, the change was from
  maxclsyspri to defclsyspri. This makes it more likely the process
  will be scheduled, which may help performance.

  To facilitate further performance analysis, the
  spl_taskq_thread_priority module option has been added. When disabled
  (0) all newly created kernel threads will use the default kernel
  thread priority. When enabled (1) the specified taskq priority will
  be used. By default this value is enabled (1).

  Signed-off-by: Brian Behlendorf <[email protected]>

* Default to --disable-debug-kmem (Brian Behlendorf, 2015-07-21, 1 file, -18/+3)

  The default kmem debugging (--enable-debug-kmem) can severely impact
  performance on large-scale NUMA systems due to the atomic operations
  used in the memory accounting. A 32-thread fio test running on a
  40-core 80-thread system and performing 100% cached reads with kmem
  debugging is:

  Enabled:
  READ: io=177071MB, aggrb=2951.2MB/s, minb=2951.2MB/s, maxb=2951.2MB/s

  Disabled:
  READ: io=271454MB, aggrb=4524.4MB/s, minb=4524.4MB/s, maxb=4524.4MB/s

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #463

* Support parallel build trees (VPATH builds) (Turbo Fredriksson, 2015-07-17, 1 file, -19/+22)

  Build products from an out of tree build should be written relative
  to the build directory. Sources should be referred to by their
  locations in the source directory.

  This is accomplished by adding the 'src' and 'obj' variables for the
  module Makefile.am, using relative paths to reference source files,
  and by setting VPATH when source files are not co-located with the
  Makefile. This enables the following:

  $ mkdir build
  $ cd build
  $ ../configure
  $ make -s

  This change also has the advantage of resolving the following warning
  which is generated by modern versions of automake:

  Makefile.am:00: warning: source file 'xxx' is in a subdirectory,
  Makefile.am:00: but option 'subdir-objects' is disabled

  Signed-off-by: Turbo Fredriksson <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#1082

* Set TASKQ_DYNAMIC for kmem and system taskqs (Brian Behlendorf, 2015-06-24, 2 files, -5/+5)

  Add the TASKQ_DYNAMIC flag to the kmem_cache and system taskqs to
  reduce the number of idle threads on the system. Additional threads
  will be created on demand up to the previous maximum thread counts.
  This should have minimal, if any, impact on performance.

  This makes the system taskq consistent with illumos, where it is
  always created as a dynamic taskq with up to 64 threads. The task
  limits for the kmem_cache have been increased to avoid any
  unnecessary throttling and to keep a larger reserve of task_t
  structures on the free list.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #458

* Add TASKQ_DYNAMIC feature (Brian Behlendorf, 2015-06-24, 1 file, -66/+226)

  Setting the TASKQ_DYNAMIC flag will create a taskq with dynamic
  semantics. Initially only a single worker thread will be created to
  service tasks dispatched to the queue. As additional threads are
  needed they will be dynamically spawned up to the max number
  specified by 'nthreads'. When the threads are no longer needed,
  because the taskq is empty, they will automatically terminate.

  Due to the low cost of creating and destroying threads under Linux,
  by default new threads are spawned and terminated aggressively. There
  are two module options which can be tuned to adjust this behavior if
  needed:

  * spl_taskq_thread_sequential - The number of sequential tasks,
    without interruption, which need to be handled by a worker thread
    before a new worker thread is spawned. Default 4.

  * spl_taskq_thread_dynamic - Provides the ability to completely
    disable the use of dynamic taskqs on the system. This is provided
    for the purposes of debugging and troubleshooting. Default 1
    (enabled).

  This behavior is fundamentally consistent with the dynamic taskq
  implementation found in both illumos and FreeBSD.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #458

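  The dispatch-time spawn decision reduces to a check along these lines
  (sketch; taskq_t field names approximated):

    static int
    taskq_thread_spawn_needed(const taskq_t *tq)
    {
            return ((tq->tq_flags & TASKQ_DYNAMIC) &&
                (tq->tq_nactive == tq->tq_nthreads) &&  /* all busy */
                (tq->tq_nthreads + tq->tq_nspawn <
                tq->tq_maxthreads));                    /* room to grow */
    }
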
* Rename cv_wait_interruptible() to cv_wait_sig() (Brian Behlendorf, 2015-06-10, 1 file, -34/+47)

  Commit f752b46e added the cv_wait_interruptible() function to allow
  condition variables to be woken by signals. This function and its
  timed wait counterpart should have been named cv_wait_sig() to match
  the illumos interface which provides the same functionality.

  This patch renames the symbol but leaves a #define compatibility
  wrapper in place until the ZFS code can be moved to the correct name.
  It also makes a small number of cosmetic changes to make the condvar
  source and header cstyle clean.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #456

* Make taskq_wait() block until the queue is empty (Chris Dunlop, 2015-06-09, 1 file, -38/+55)

  Under Illumos taskq_wait() returns when there are no more tasks in
  the queue. This behavior differs from ZoL and FreeBSD, where
  taskq_wait() returns when all the tasks in the queue at the beginning
  of the taskq_wait() call are complete; new tasks added whilst
  taskq_wait() is running are ignored.

  This difference in semantics makes it possible that new subtle issues
  could be introduced when porting changes from Illumos. To avoid that
  possibility the taskq_wait() function is being updated such that it
  blocks until the queue is empty. The previous behavior remains
  available through the taskq_wait_outstanding() interface. Note that
  this function was previously called taskq_wait_all() but has been
  renamed to avoid confusion.

  Signed-off-by: Chris Dunlop <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #455

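  The "wait until empty" semantics can be expressed as a wait_event()
  on the task id counters (sketch; close to the SPL's structure but
  with names approximated):

    static int
    taskq_wait_check(taskq_t *tq)
    {
            int rc;

            spin_lock_irq(&tq->tq_lock);
            /* empty when no dispatched id is still outstanding */
            rc = (tq->tq_lowest_id == tq->tq_next_id);
            spin_unlock_irq(&tq->tq_lock);
            return (rc);
    }

    void
    taskq_wait(taskq_t *tq)
    {
            wait_event(tq->tq_wait_waitq, taskq_wait_check(tq));
    }
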
* Fix cstyle issues in spl-tsd.c (Brian Behlendorf, 2015-04-24, 1 file, -20/+21)

  This patch only addresses the issues identified by the style checker
  in spl-tsd.c. It contains no functional changes.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Make tsd_set(key, NULL) remove the tsd entry for current thread (Chunwei Chen, 2015-04-24, 1 file, -0/+63)

  To prevent leaking tsd entries, we make tsd_set(key, NULL) remove the
  tsd entry for the current thread. This is alright since tsd_get()
  returns NULL when the entry doesn't exist.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #443

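  Usage, in sketch form (no destructor registered, for brevity):

    uint_t key;

    tsd_create(&key, NULL);
    (void) tsd_set(key, ptr);       /* associate ptr with this thread */
    ASSERT3P(tsd_get(key), ==, ptr);
    (void) tsd_set(key, NULL);      /* now removes the entry, no leak */
    ASSERT3P(tsd_get(key), ==, NULL);
    tsd_destroy(&key);
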
* Implement areleasef() (Richard Yao, 2015-04-24, 1 file, -5/+14)

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #449

* vn_getf/vn_releasef should not accept negative file descriptors (Richard Yao, 2015-04-24, 1 file, -0/+6)

  C type coercion rules require that negative numbers be converted into
  positive numbers via wraparound, so a negative file descriptor such
  as -1 becomes a large positive value when interpreted as unsigned.
  This caused vn_getf() to return a file handle when it should return
  NULL whenever a positive file descriptor existed with the resulting
  value. We should check for a negative file descriptor and return NULL
  instead.

  This was caught by ClusterHQ's unit testing.

  Reference:
  http://stackoverflow.com/questions/50605/signed-to-unsigned-conversion-in-c-is-it-always-safe

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Andriy Gapon <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #450

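  The fix amounts to a guard at the top of the lookup (sketch; the
  existing lookup path is elided):

    file_t *
    vn_getf(int fd)
    {
            /* Reject negative descriptors before any implicit
             * signed-to-unsigned conversion can alias them onto
             * valid descriptors. */
            if (fd < 0)
                    return (NULL);
            /* ... existing file handle lookup ... */
    }
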
* Clear PF_FSTRANS over vfs_sync() (Brian Behlendorf, 2015-04-07, 1 file, -1/+15)

  When layered on XFS the following warning will be emitted under
  CentOS7 when entering vfs_fsync() with PF_FSTRANS already set. This
  is not an issue for other stock Linux file systems and the warning
  was removed for newer kernels. However, to avoid triggering this
  error PF_FSTRANS is cleared and then reset in vn_fsync().

  WARNING: at fs/xfs/xfs_aops.c:968 xfs_vm_writepage+0x5ab/0x5c0
  Call Trace:
  [<ffffffff8105dee1>] warn_slowpath_common+0x61/0x80
  [<ffffffffa01706fb>] xfs_vm_writepage+0x5ab/0x5c0 [xfs]
  [<ffffffff8114b833>] __writepage+0x13/0x50
  [<ffffffff8114c341>] write_cache_pages+0x251/0x4d0
  [<ffffffff8114c60d>] generic_writepages+0x4d/0x80
  [<ffffffffa016fc93>] xfs_vm_writepages+0x43/0x50 [xfs]
  [<ffffffff8114d68e>] do_writepages+0x1e/0x40
  [<ffffffff81142bd5>] __filemap_fdatawrite_range+0x65/0x80
  [<ffffffff81142cea>] filemap_write_and_wait_range+0x2a/0x70
  [<ffffffffa017a5b6>] xfs_file_fsync+0x66/0x1f0 [xfs]
  [<ffffffff811df54b>] vfs_fsync+0x2b/0x40
  [<ffffffffa03a88bd>] vn_fsync+0x2d/0x90 [spl]
  [<ffffffffa0520c33>] spa_config_sync+0x503/0x680 [zfs]
  [<ffffffffa0520ee4>] spa_config_update+0x134/0x170 [zfs]
  [<ffffffffa0520eba>] spa_config_update+0x10a/0x170 [zfs]
  [<ffffffffa051c54f>] spa_import+0x5bf/0x7b0 [zfs]
  [<ffffffffa055c754>] zfs_ioc_pool_import+0x104/0x150 [zfs]
  [<ffffffffa056294f>] zfsdev_ioctl+0x4cf/0x5c0 [zfs]
  [<ffffffffa0562480>] ? pool_status_check+0xf0/0xf0 [zfs]
  [<ffffffff811c2c85>] do_vfs_ioctl+0x2e5/0x4c0
  [<ffffffff811c2f01>] SyS_ioctl+0xa1/0xc0
  [<ffffffff815f3219>] system_call_fastpath+0x16/0x1b

  Signed-off-by: Brian Behlendorf <[email protected]>

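  The clear/restore in vn_fsync() looks roughly like this (sketch; the
  SPL's error-code conventions are approximated):

    int
    vn_fsync_sketch(struct file *fp)
    {
            unsigned long fstrans = current->flags & PF_FSTRANS;
            int error;

            if (fstrans)
                    current->flags &= ~PF_FSTRANS;  /* silence XFS */
            error = -vfs_fsync(fp, 0);
            if (fstrans)
                    current->flags |= PF_FSTRANS;   /* restore */
            return (error);
    }
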
* Don't allow shrinking a PF_FSTRANS context (Tim Chase, 2015-04-03, 1 file, -0/+6)

  Avoid deadlocks when entering the shrinker from a PF_FSTRANS context.

  This patch also reverts commit d0d5dd7 which added MUTEX_FSTRANS. Its
  use has been deprecated within ZFS as it was an ineffective mechanism
  to eliminate deadlocks. Among other things, it introduced the need
  for strict ordering of mutex locking and unlocking in order that the
  PF_FSTRANS flag wouldn't be set incorrectly.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #446

* Retire spl_module_init()/spl_module_fini() (Brian Behlendorf, 2015-02-27, 1 file, -28/+4)

  In the original implementation of the SPL, wrappers were provided for
  module initialization and cleanup. This was done to abstract away any
  compatibility code which might be needed for the SPL. As it turned
  out, the only significant compatibility issue was that the default
  pwd during module load differed under Illumos and Linux. Since this
  is such a minor thing and the wrappers complicate the code, they are
  being retired.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#2985

* Fix spl_hostid module parameter (Chunwei Chen, 2015-02-04, 1 file, -1/+3)

  Currently the spl_hostid module parameter doesn't do anything,
  because it will always be overwritten when calling into
  hostid_read(). Instead, we should only call into hostid_read() when
  spl_hostid is not set (i.e. zero), just as the comment describes.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #427

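  In other words (hypothetical helper, illustrative only; the actual
  hostid_read() signature may differ):

    static void
    spl_hostid_setup(void)
    {
            /* honor an explicitly set module parameter */
            if (spl_hostid != 0)
                    return;
            hostid_read();  /* fall back to reading the hostid file */
    }
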
* Optimize vmem_alloc() retry path (Brian Behlendorf, 2015-02-02, 1 file, -1/+11)

  For performance reasons the reworked kmem code maps vmem_alloc() to
  kmalloc_node() for allocations less than spl_kmem_alloc_max. This
  allows for more concurrency in the system and less contention on the
  virtual address space. Generally, this is a good thing.

  However, in the case when kmalloc_node() fails it makes little sense
  to retry it using kmalloc_node() again. It will likely fail in
  exactly the same way. A smarter strategy is to abandon this
  optimization and retry using spl_vmalloc(), which is very likely to
  succeed.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Closes #428

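  The retry strategy, sketched (using __vmalloc() to stand in for
  spl_vmalloc(); helper names approximated):

    void *
    vmem_alloc_sketch(size_t size, int kmflags)
    {
            gfp_t lflags = kmem_flags_convert(kmflags);
            void *ptr = NULL;

            if (size <= spl_kmem_alloc_max)
                    ptr = kmalloc_node(size, lflags, NUMA_NO_NODE);
            if (ptr == NULL)        /* don't retry kmalloc; go virtual */
                    ptr = __vmalloc(size, lflags | __GFP_HIGHMEM,
                        PAGE_KERNEL);
            return (ptr);
    }
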
* Fix GFP_KERNEL allocations flags (Brian Behlendorf, 2015-01-21, 3 files, -4/+4)

  The kmem_vasprintf(), kmem_vsprintf(), kobj_open_file(), and
  vn_openat() functions should all use the kmem_flags_convert()
  function to generate the GFP_* flags. This ensures that they can be
  safely called in any context and the correct flags will be used.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #426

* Use __get_free_pages() for emergency objects (Brian Behlendorf, 2015-01-16, 1 file, -10/+12)

  The __get_free_pages() function must be used in place of kmalloc() to
  ensure the __GFP_COMP flag is strictly honored. This is due to
  kmalloc() being layered on the generic Linux slab caches. It wasn't
  until recently that all caches were created using __GFP_COMP. This
  means that it is possible for a kmalloc() which passed the __GFP_COMP
  flag to be returned a non-compound allocation.

  Signed-off-by: Brian Behlendorf <[email protected]>

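  The page-level allocation path, sketched:

    #include <linux/gfp.h>

    /* __GFP_COMP is honored directly by the page allocator */
    static void *
    emergency_alloc_sketch(size_t size, gfp_t lflags)
    {
            unsigned int order = get_order(size);

            return ((void *)__get_free_pages(lflags | __GFP_COMP, order));
    }

    /* matching release */
    static void
    emergency_free_sketch(void *ptr, size_t size)
    {
            free_pages((unsigned long)ptr, get_order(size));
    }
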
* Fix kmem cache deadlock logic (Brian Behlendorf, 2015-01-16, 1 file, -10/+25)

  The kmem cache implementation always adds new slabs by dispatching a
  task to the spl_kmem_cache taskq to perform the allocation. This is
  done because large slabs must be allocated using vmalloc(). It is
  possible these allocations will block on IO because the GFP_NOIO flag
  is not honored. This can result in a deadlock.

  Therefore, a deadlock detection strategy was implemented to deal with
  this case. When it is determined, by timeout, that the spl_kmem_cache
  thread has deadlocked attempting to add a new slab, all callers
  attempting to allocate from the cache fall back to using kmalloc(),
  which does honor all passed flags.

  This logic was correct, but an optimization in the code allowed for a
  deadlock. Because only slabs backed by vmalloc() can deadlock in the
  way described above, an optimization was made to only invoke this
  deadlock detection code for vmalloc() backed caches. This had the
  advantage of making it easy to distinguish these objects when they
  were freed. But it isn't strictly safe: if all the spl_kmem_cache
  threads end up deadlocked, then we can't grow any of the other caches
  either. This can once again result in a deadlock if memory needs to
  be allocated from one of these other caches to ensure forward
  progress.

  The fix here is to remove the optimization which limits this fallback
  allocation strategy to vmalloc() backed caches. Doing this means we
  may need to take the cache lock in the spl_kmem_cache_free() call
  path. But this small cost can be mitigated by ignoring objects with
  virtual addresses.

  For good measure the default number of spl_kmem_cache threads has
  been increased from 1 to 4, and made tunable. This alone wouldn't
  resolve the original issue since it's still possible for all the
  threads to be deadlocked. However, it does help responsiveness by
  ensuring that a single deadlocked spl_kmem_cache thread doesn't block
  allocations from other caches until the timeout is reached.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Refine slab cache sizing (Brian Behlendorf, 2015-01-16, 1 file, -35/+57)

  This change is designed to improve the memory utilization of slabs by
  more carefully setting their size. The way the code currently works
  is problematic for slabs which contain large objects (>1MB). This is
  due to slabs being unconditionally rounded up to a power of two,
  which may result in unused space at the end of the slab.

  The reason the existing code rounds up every slab is because it
  assumes it will be backed by the buddy allocator. Since the buddy
  allocator can only perform power of two allocations, this is
  desirable because it avoids wasting any space. However, this logic
  breaks down if the slab is backed by vmalloc(), which operates at a
  page level granularity. In this case, the optimal thing to do is
  calculate the minimum required slab size given certain constraints
  (object size, alignment, objects/slab, etc).

  Therefore, this patch reworks the spl_slab_size() function so that it
  sizes KMC_KMEM slabs differently than KMC_VMEM slabs. KMC_KMEM slabs
  are rounded up to the nearest power of two, and KMC_VMEM slabs are
  allowed to be the minimum required size.

  This change also reduces the default number of objects per slab. This
  reduces how much memory a single cache object can pin, which can
  result in significant memory savings for highly fragmented caches.
  But depending on the workload it may result in slabs being allocated
  and freed more frequently. In practice, this has been shown to be a
  better default for most workloads.

  Also the maximum slab size has been reduced to 4MB on 32-bit systems.
  Due to the limited virtual address space, it's critical that we be as
  frugal as possible. A limit of 4M still lets us reasonably
  comfortably allocate a limited number of 1MB objects.

  Finally, the kmem:slab_small and kmem:slab_large SPLAT tests were
  extended to provide better test coverage of various object sizes and
  alignments. Caches are created with random parameters and their basic
  functionality is verified by allocating several slabs worth of
  objects.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Reduce kmem cache deadlock threshold (Brian Behlendorf, 2015-01-16, 1 file, -1/+1)

  Reduce the threshold for detecting a kmem cache deadlock by 10x, from
  HZ to HZ/10. The reduced value is still large enough, by several
  orders of magnitude, to avoid being triggered incorrectly. By
  reducing it we allow the system to resolve the issue more quickly.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Make slab reclaim more aggressive (Brian Behlendorf, 2015-01-16, 1 file, -29/+47)

  Many people have noticed that the kmem cache implementation is slow
  to release its memory. This patch makes the reclaim behavior more
  aggressive by immediately freeing a slab once it is empty. Unused
  objects which are cached in the magazines will still prevent a slab
  from being freed.

  Signed-off-by: Brian Behlendorf <[email protected]>