path: root/module
* Make taskq_member() use ->journal_info (Richard Yao, 2015-12-08; 1 file, +3/-34)
  The ->journal_info pointer in task_struct is reserved for use by
  filesystems. Because the kernel can have multiple file systems on the
  same stack due to direct reclaim, each filesystem that touches
  ->journal_info in a callback function saves the value at the start of
  its frame and restores it at the end. This allows us to safely use
  ->journal_info to store a pointer to the taskq's struct in taskq
  threads, so that ZFS code paths can detect the presence of a taskq.
  This could break if the ZFS code used taskq_member() from the context
  of direct reclaim; however, there are no such uses, so this is safe.

  This replaces an O(N) list traversal under a spinlock with an O(1)
  unlocked pointer comparison.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: tuxoko <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #500
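
  A minimal sketch of the resulting fast path (illustrative, not the
  exact patch):

      /* In the taskq worker, before entering its service loop: */
      current->journal_info = tq;

      /* Membership is now a single unlocked pointer comparison: */
      int
      taskq_member(taskq_t *tq, kthread_t *t)
      {
              return (tq == (taskq_t *)t->journal_info);
      }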
* Fix race between getf() and areleasef() (Richard Yao, 2015-12-03; 1 file, +13/-0)
  If a vnode is released asynchronously through areleasef(), it is
  possible for the user process to reuse the file descriptor before
  areleasef() is called. When this happens, getf() will return a stale
  reference; any operations in the kernel on that file descriptor will
  fail (as it is closed) and, from userspace's perspective, the
  operations meant for that fd will never occur. We correct this by
  detecting the condition in getf(), doing a putf() on the old file
  handle, updating the file descriptor and proceeding as if everything
  was fine. When the areleasef() is done, it will harmlessly decrement
  the reference counter on the Illumos file handle.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #492
* Prevent rm modules.* when make install (tuxoko, 2015-12-02; 1 file, +1/-1)
  This was originally in e80cd06b8e0428f3ca2c62e4cb0e4ec54fda1d5c, but
  was somehow changed and no longer works, causing the following error:

      modprobe: ERROR: ../libkmod/libkmod.c:506 lookup_builtin_file()
      could not open builtin file
      '/lib/modules/4.2.0-18-generic/modules.builtin.bin'

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #501
* spl-kmem-cache: include linux/prefetch.h for prefetchw() (Dimitri John Ledkov, 2015-12-02; 1 file, +1/-0)
  This is needed for architectures that do not have a builtin
  prefetchw().

  Signed-off-by: Dimitri John Ledkov <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #502
* Limit maximum object size in kmem tests (Brian Behlendorf, 2015-11-16; 1 file, +4/-1)
  Limit the maximum object size to 1/128 of total system memory for the
  kmem cache tests. Large values can result in out-of-memory errors on
  systems with less than 512M of memory. Additionally, use the known
  number of objects per slab when calculating the number of objects to
  use for a test.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Remove superfluous `newline` character (loli10K, 2015-11-13; 1 file, +1/-1)
  Remove a superfluous `newline` character from the
  spl_kmem_cache_magazine_size module parameter description.

  Signed-off-by: loli10K <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #499
* Fix taskq dynamic spawning (tuxoko, 2015-11-13; 1 file, +11/-14)
  Currently taskq_dispatch() will only spawn a new thread on the
  condition that the caller is also a member of the taskq. However, even
  under this condition a deadlock is still possible: a task on tq1 may
  be waiting on another thread which is trying to dispatch a task on
  tq1. Because that dispatching thread is not a member of tq1, no new
  thread is spawned and the dispatch simply queues. So this patch
  removes the check. For example, when you do:

      zfs send pp/fs0@001 | zfs recv pp/fs0_copy

  this would easily deadlock before this patch.

  Also, move the seq_task check from taskq_thread_spawn() to
  taskq_thread() because it is not used by the caller from
  taskq_dispatch().

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #496
* Don't call kmem_cache_shrink from shrinker (Chunwei Chen, 2015-11-11; 1 file, +1/-6)
  The Linux slab will automatically free empty slabs once the number of
  partial slabs exceeds min_partial, so we don't need to shrink the
  cache explicitly. In fact, calling kmem_cache_shrink() from a shrinker
  causes heavy contention on kmem_cache_node->list_lock, to the point
  that it might cause __slab_free() to livelock (see
  zfsonlinux/zfs#3936).

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes zfsonlinux/zfs#3936
  Closes #487
* Fix CPU hotplug (Brian Behlendorf, 2015-10-13; 1 file, +7/-8)
  Allocate a kmem cache magazine for every possible CPU which might be
  added to the system. This ensures that when one of these CPUs is
  enabled it can be safely used immediately. For many systems the number
  of online CPUs is identical to the number of present CPUs, so this
  does not imply an increased memory footprint. In fact, dynamically
  allocating the array of magazine pointers instead of using the worst
  case NR_CPUS can end up decreasing our memory footprint.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Closes #482
* Fix PAX Patch/Grsec SLAB_USERCOPY panic (Brian Behlendorf, 2015-09-28; 1 file, +11/-1)
  Support grsecurity/PaX kernel configurations where
  CONFIG_PAX_USERCOPY_SLABS is enabled. When this kernel option is
  enabled, slabs which are used to copy between user and kernel space
  must be created with SLAB_USERCOPY. Stock Linux kernels do not have a
  SLAB_USERCOPY definition, so this causes no change in behavior for
  non-PAX-enabled kernels.

  Verified-by: Wuffleton <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #2977
  Issue #3796
* Disable direct reclaim in taskq worker threads on Linux 3.9+ (Richard Yao, 2015-09-09; 1 file, +4/-0)
  Illumos does not have direct reclaim, and code run inside taskq worker
  threads is not designed to deal with it. Allowing direct reclaim
  inside a worker thread can therefore deadlock. We set PF_MEMALLOC_NOIO
  through memalloc_noio_save() to indicate to the kernel's reclaim code
  that we are inside a context where memory allocations cannot be
  allowed to block on filesystem activity.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#1274
  Issue zfsonlinux/zfs#2390
  Closes #474
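
  The kernel API used here pairs a save with a restore; a sketch of how
  a worker thread might apply it (illustrative, not the exact patch):

      /* Entering the taskq worker: forbid __GFP_IO in reclaim. */
      unsigned int noio_flags = memalloc_noio_save();

      /* ... service dispatched tasks ... */

      /* Leaving the worker: restore the previous reclaim behavior. */
      memalloc_noio_restore(noio_flags);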
* Linux 4.2 compat: misc_deregister() (Brian Behlendorf, 2015-09-01; 1 file, +1/-5)
  The misc_deregister() function was changed to a void return type.
  Rather than add compatibility code to detect this change, simply
  ignore the return code on all kernels. It was only used to log an
  informational error message of no real value.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Create a new thread during recursive taskq dispatch if necessary (Tim Chase, 2015-09-01; 1 file, +28/-3)
  When dynamic taskq is enabled and all threads for a taskq are
  occupied, a recursive dispatch can cause a deadlock if the calling
  thread depends on the recursively-dispatched thread for its return
  condition. This patch attempts to create a new thread for recursive
  dispatch when none are available.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #472
* Revert "Create a new thread during recursive taskq dispatch if necessary"Brian Behlendorf2015-08-311-22/+0
  This reverts commit 076821e due to a locking issue uncovered in
  subsequent testing: an ASSERT is hit because tq->tq_nspawn is updated
  outside the lock. The patch will need to be reworked.

      VERIFY3(0 == tq->tq_nspawn) failed (0 == -1)

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #472
* Create a new thread during recursive taskq dispatch if necessary (Tim Chase, 2015-08-31; 1 file, +22/-0)
  When dynamic taskq is enabled and all threads for a taskq are
  occupied, a recursive dispatch can cause a deadlock if the calling
  thread depends on the recursively-dispatched thread for its return
  condition. This patch attempts to create a new thread for recursive
  dispatch when none are available.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #472
* Linux 4.2 compat: vfs_rename() (Brian Behlendorf, 2015-08-19; 2 files, +7/-0)
  Attempting to perform a vfs_rename() on Linux 4.2 and newer kernels
  results in an EACCES error. Rather than attempting to add and maintain
  more ugly compatibility code, it's best to just retire this interface.
  As a first step the SPLAT test is disabled for Linux 4.2 and newer
  kernels.

      vn_rename: Failed vn_rename /tmp/vn.tmp.1 -> /tmp/vn.tmp.2 (13)

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#3653
* Invert minclsyspri and maxclsyspri (Brian Behlendorf, 2015-07-28; 2 files, +3/-4)
  On Linux the meaning of a process's priority is inverted with respect
  to illumos: high values on Linux indicate a _low_ priority, while high
  values on illumos indicate a _high_ priority. In order to preserve the
  logical meaning of the minclsyspri and maxclsyspri macros when they
  are used by the illumos wrapper functions, their values have been
  inverted. This way, when changes are merged from upstream illumos, we
  won't need to remember to invert the macros, which could easily lead
  to confusion.

  Note this change also reverts some of the priority changes in prior
  commit 62aa81a. The rationale is as follows:

      spl_kmem_cache    - High priority may result in blocked memory allocs
      spl_system_taskq  - May perform I/O for file backed VDEVs
      spl_dynamic_taskq - New taskq threads should be spawned promptly

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Issue zfsonlinux/zfs#3607
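
  A hedged sketch of the inversion expressed in Linux nice values, where
  a larger value means a weaker priority; the actual macro definitions
  in the SPL headers may differ:

      /* illumos meaning preserved: minclsyspri is the lowest priority,
       * even though Linux encodes "lowest" as the larger number. */
      #define minclsyspri  19     /* weakest scheduling priority   */
      #define maxclsyspri  -20    /* strongest scheduling priority */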
* Remove skc_ref from alloc/free paths (Brian Behlendorf, 2015-07-24; 1 file, +2/-9)
  As described in spl_kmem_cache_destroy(), the ->skc_ref count was
  added to address the case of a cache reap or grow racing with a
  destroy. It is not strictly needed in the alloc/free paths because
  consumers of the cache are responsible for not using it while it's
  being destroyed. Removing this code is desirable because there is some
  evidence that contention on this atomic negatively impacts performance
  on large-scale NUMA systems.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #463
* Add defclsyspri macro (Brian Behlendorf, 2015-07-23; 8 files, +31/-24)
  Add a new defclsyspri macro which can be used to request the default
  Linux scheduler priority. Neither minclsyspri nor maxclsyspri maps to
  the default Linux kernel thread priority. This makes it awkward to
  create taskqs which run with the same priority as the rest of the
  kernel threads on the system, which can lead to performance issues.

  All SPL callers which previously used minclsyspri or maxclsyspri have
  been changed to use defclsyspri. The vast majority of callers were
  part of the test suite and won't have an external impact. In the few
  places where it could impact performance the change was from
  maxclsyspri to defclsyspri. This makes it more likely the process will
  be scheduled, which may help performance.

  To facilitate further performance analysis, the
  spl_taskq_thread_priority module option has been added. When disabled
  (0), all newly created kernel threads will use the default kernel
  thread priority. When enabled (1), the specified taskq priority will
  be used. By default this value is enabled (1).

  Signed-off-by: Brian Behlendorf <[email protected]>
* Default to --disable-debug-kmem (Brian Behlendorf, 2015-07-21; 1 file, +3/-18)
  The default kmem debugging (--enable-debug-kmem) can severely impact
  performance on large-scale NUMA systems due to the atomic operations
  used in the memory accounting. A 32-thread fio test running on a
  40-core, 80-thread system and performing 100% cached reads with kmem
  debugging shows:

      Enabled:
      READ: io=177071MB, aggrb=2951.2MB/s, minb=2951.2MB/s, maxb=2951.2MB/s,

      Disabled:
      READ: io=271454MB, aggrb=4524.4MB/s, minb=4524.4MB/s, maxb=4524.4MB/s,

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #463
* Support parallel build trees (VPATH builds) (Turbo Fredriksson, 2015-07-17; 2 files, +42/-36)
  Build products from an out-of-tree build should be written relative to
  the build directory. Sources should be referred to by their locations
  in the source directory. This is accomplished by adding the 'src' and
  'obj' variables to the module Makefile.am, using relative paths to
  reference source files, and by setting VPATH when source files are not
  co-located with the Makefile. This enables the following:

      $ mkdir build
      $ cd build
      $ ../configure
      $ make -s

  This change also has the advantage of resolving the following warning
  which is generated by modern versions of automake:

      Makefile.am:00: warning: source file 'xxx' is in a subdirectory,
      Makefile.am:00: but option 'subdir-objects' is disabled

  Signed-off-by: Turbo Fredriksson <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#1082
* Set TASKQ_DYNAMIC for kmem and system taskqs (Brian Behlendorf, 2015-06-24; 2 files, +5/-5)
  Add the TASKQ_DYNAMIC flag to the kmem_cache and system taskqs to
  reduce the number of idle threads on the system. Additional threads
  will be created on demand up to the previous maximum thread counts.
  This should have minimal, if any, impact on performance.

  This makes the system taskq consistent with illumos, where it is
  always created as a dynamic taskq with up to 64 threads. The task
  limits for the kmem_cache have been increased to avoid any unnecessary
  throttling and to keep a larger reserve of task_t structures on the
  free list.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #458
* Add TASKQ_DYNAMIC feature (Brian Behlendorf, 2015-06-24; 2 files, +332/-123)
  Setting the TASKQ_DYNAMIC flag will create a taskq with dynamic
  semantics. Initially only a single worker thread will be created to
  service tasks dispatched to the queue. As additional threads are
  needed they will be dynamically spawned up to the max number specified
  by 'nthreads'. When the threads are no longer needed, because the
  taskq is empty, they will automatically terminate.

  Due to the low cost of creating and destroying threads under Linux,
  by default new threads are spawned and terminated aggressively. There
  are two module options which can be tuned to adjust this behavior if
  needed (see the usage sketch after this list):

  * spl_taskq_thread_sequential - The number of sequential tasks,
    without interruption, which need to be handled by a worker thread
    before a new worker thread is spawned. Default 4.

  * spl_taskq_thread_dynamic - Provides the ability to completely
    disable the use of dynamic taskqs on the system. This is provided
    for the purposes of debugging and troubleshooting. Default 1
    (enabled).

  This behavior is fundamentally consistent with the dynamic taskq
  implementation found in both illumos and FreeBSD.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #458
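
  A usage sketch; the taskq name, thread count, and callback names are
  arbitrary examples, and defclsyspri is the macro described in the
  "Add defclsyspri macro" entry above:

      /* Threads are spawned on demand up to 8 and exit when idle. */
      taskq_t *tq = taskq_create("example_taskq", 8, defclsyspri,
          1, INT_MAX, TASKQ_DYNAMIC | TASKQ_PREPOPULATE);

      (void) taskq_dispatch(tq, example_func, example_arg, TQ_SLEEP);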
* Rename cv_wait_interruptible() to cv_wait_sig() (Brian Behlendorf, 2015-06-10; 1 file, +47/-34)
  Commit f752b46e added the cv_wait_interruptible() function to allow
  condition variables to be woken by signals. This function and its
  timed-wait counterpart should have been named cv_wait_sig() to match
  the illumos interface which provides the same functionality. This
  patch renames the symbol but leaves a #define compatibility wrapper in
  place until the ZFS code can be moved to the correct name.

  This patch also makes a small number of cosmetic changes to make the
  condvar source and header cstyle clean.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #456
* Make taskq_wait() block until the queue is empty (Chris Dunlop, 2015-06-09; 2 files, +65/-48)
  Under Illumos taskq_wait() returns when there are no more tasks in the
  queue. This behavior differs from ZoL and FreeBSD, where taskq_wait()
  returns when all the tasks in the queue at the beginning of the
  taskq_wait() call are complete; new tasks added whilst taskq_wait() is
  running will be ignored.

  This difference in semantics makes it possible for new subtle issues
  to be introduced when porting changes from Illumos. To avoid that
  possibility the taskq_wait() function is being updated such that it
  blocks until the queue is empty. The previous behavior remains
  available through the taskq_wait_outstanding() interface. Note that
  this function was previously called taskq_wait_all() but has been
  renamed to avoid confusion.

  Signed-off-by: Chris Dunlop <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #455
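
  A usage sketch contrasting the two interfaces (hedged; see the SPL
  headers for the exact prototypes):

      taskq_wait(tq);                 /* block until tq is completely empty */
      taskq_wait_outstanding(tq, 0);  /* old semantics: wait only for tasks */
                                      /* outstanding at the time of the call */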
* Fix cstyle issues in spl-tsd.c (Brian Behlendorf, 2015-04-24; 1 file, +21/-20)
  This patch only addresses the issues identified by the style checker
  in spl-tsd.c. It contains no functional changes.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Make tsd_set(key, NULL) remove the tsd entry for current thread (Chunwei Chen, 2015-04-24; 1 file, +63/-0)
  To prevent leaking tsd entries, we make tsd_set(key, NULL) remove the
  tsd entry for the current thread. This is alright since tsd_get()
  returns NULL when the entry doesn't exist.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #443
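
  A usage sketch of the new behavior (key creation elided; illustrative
  only):

      (void) tsd_set(key, ptr);       /* bind ptr to key for this thread */
      ASSERT3P(tsd_get(key), ==, ptr);

      (void) tsd_set(key, NULL);      /* now removes the entry entirely, */
      ASSERT3P(tsd_get(key), ==, NULL); /* so nothing leaks on thread exit */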
* Implement areleasef() (Richard Yao, 2015-04-24; 1 file, +14/-5)
  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #449
* vn_getf/vn_releasef should not accept negative file descriptors (Richard Yao, 2015-04-24; 1 file, +6/-0)
  C type coercion rules require that negative numbers be converted into
  positive numbers via wraparound, so a negative file descriptor can be
  reinterpreted as a large positive one. This causes vn_getf() to return
  a file handle when it should return NULL whenever a positive file
  descriptor existed with the coerced value. We should check for a
  negative file descriptor and return NULL instead. This was caught by
  ClusterHQ's unit testing.

  Reference:
  http://stackoverflow.com/questions/50605/signed-to-unsigned-conversion-in-c-is-it-always-safe

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Andriy Gapon <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #450
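
  A sketch of the guard described above (the exact patch may differ in
  detail):

      file_t *
      vn_getf(int fd)
      {
              if (fd < 0)     /* reject before any unsigned coercion */
                      return (NULL);

              /* ... normal file handle lookup continues here ... */
      }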
* Clear PF_FSTRANS over vfs_sync() (Brian Behlendorf, 2015-04-07; 1 file, +15/-1)
  When layered on XFS the following warning will be emitted under
  CentOS7 when entering vfs_fsync() with PF_FSTRANS already set. This is
  not an issue for other stock Linux file systems and the warning was
  removed for newer kernels. However, to avoid triggering this error
  PF_FSTRANS is cleared and then reset in vn_fsync().

      WARNING: at fs/xfs/xfs_aops.c:968 xfs_vm_writepage+0x5ab/0x5c0
      Call Trace:
       [<ffffffff8105dee1>] warn_slowpath_common+0x61/0x80
       [<ffffffffa01706fb>] xfs_vm_writepage+0x5ab/0x5c0 [xfs]
       [<ffffffff8114b833>] __writepage+0x13/0x50
       [<ffffffff8114c341>] write_cache_pages+0x251/0x4d0
       [<ffffffff8114c60d>] generic_writepages+0x4d/0x80
       [<ffffffffa016fc93>] xfs_vm_writepages+0x43/0x50 [xfs]
       [<ffffffff8114d68e>] do_writepages+0x1e/0x40
       [<ffffffff81142bd5>] __filemap_fdatawrite_range+0x65/0x80
       [<ffffffff81142cea>] filemap_write_and_wait_range+0x2a/0x70
       [<ffffffffa017a5b6>] xfs_file_fsync+0x66/0x1f0 [xfs]
       [<ffffffff811df54b>] vfs_fsync+0x2b/0x40
       [<ffffffffa03a88bd>] vn_fsync+0x2d/0x90 [spl]
       [<ffffffffa0520c33>] spa_config_sync+0x503/0x680 [zfs]
       [<ffffffffa0520ee4>] spa_config_update+0x134/0x170 [zfs]
       [<ffffffffa0520eba>] spa_config_update+0x10a/0x170 [zfs]
       [<ffffffffa051c54f>] spa_import+0x5bf/0x7b0 [zfs]
       [<ffffffffa055c754>] zfs_ioc_pool_import+0x104/0x150 [zfs]
       [<ffffffffa056294f>] zfsdev_ioctl+0x4cf/0x5c0 [zfs]
       [<ffffffffa0562480>] ? pool_status_check+0xf0/0xf0 [zfs]
       [<ffffffff811c2c85>] do_vfs_ioctl+0x2e5/0x4c0
       [<ffffffff811c2f01>] SyS_ioctl+0xa1/0xc0
       [<ffffffff815f3219>] system_call_fastpath+0x16/0x1b

  Signed-off-by: Brian Behlendorf <[email protected]>
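
  A hedged sketch of the clear/restore pattern described above (the
  actual vn_fsync() implementation may differ in detail):

      /* Save and clear PF_FSTRANS so XFS's writepage WARN_ON is not
       * triggered, then restore the flag once the sync completes. */
      unsigned int fstrans = current->flags & PF_FSTRANS;
      if (fstrans)
              current->flags &= ~(PF_FSTRANS);

      error = -vfs_fsync(fp->f_file, 0);

      if (fstrans)
              current->flags |= PF_FSTRANS;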
* Don't allow shrinking a PF_FSTRANS context (Tim Chase, 2015-04-03; 1 file, +6/-0)
  Avoid deadlocks when entering the shrinker from a PF_FSTRANS context.
  This patch also reverts commit d0d5dd7, which added MUTEX_FSTRANS. Its
  use has been deprecated within ZFS as it was an ineffective mechanism
  to eliminate deadlocks. Among other things, it introduced the need for
  strict ordering of mutex locking and unlocking so that the PF_FSTRANS
  flag wouldn't be set incorrectly.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #446
* Reduce splat_taskq_test2_impl() stack frame size (Brian Behlendorf, 2015-03-03; 1 file, +31/-20)
  Slightly increasing the size of a kmutex_t has caused us to exceed the
  stack frame warning size in splat_taskq_test2_impl(). To address this
  the tq_args have been moved to the heap.

      cc1: warnings being treated as errors
      spl-0.6.3/module/splat/splat-taskq.c:358: error: the frame size of
      1040 bytes is larger than 1024 bytes

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #435
* Add MUTEX_FSTRANS mutex type (Brian Behlendorf, 2015-03-03; 1 file, +1/-1)
  There are regions in the ZFS code where it is desirable to be able to
  set PF_FSTRANS while a specific mutex is held. The ZFS code could be
  updated to set/clear this flag in all the correct places, but this is
  undesirable for a few reasons:

  1) It would require changes to a significant amount of the ZFS code.
     This would complicate applying patches from upstream.

  2) It would be easy to accidentally miss a critical region in the
     initial patch or to have a future change introduce a new one.

  Both of these concerns can be addressed by adding a new mutex type
  which is responsible for managing PF_FSTRANS, support for which was
  added to the SPL in commit 9099312 - Merge branch 'kmem-rework'.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #435
* Retire spl_module_init()/spl_module_fini() (Brian Behlendorf, 2015-02-27; 2 files, +8/-35)
  In the original implementation of the SPL, wrappers were provided for
  module initialization and cleanup. This was done to abstract away any
  compatibility code which might be needed for the SPL. As it turned out
  the only significant compatibility issue was that the default pwd
  during module load differed under Illumos and Linux. Since this is
  such a minor thing and the wrappers complicate the code, they are
  being retired.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#2985
* Fix spl_hostid module parameter (Chunwei Chen, 2015-02-04; 1 file, +3/-1)
  Currently the spl_hostid module parameter doesn't do anything, because
  it will always be overwritten when calling into hostid_read(). Instead,
  we should only call into hostid_read() when spl_hostid is not set,
  just as the comment describes.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #427
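
  A sketch of the intended check (hedged; the names match the commit
  text but the control flow is illustrative):

      /* Honor a hostid provided via the module parameter; only fall
       * back to reading /etc/hostid when none was given. */
      if (spl_hostid != 0)
              return (0);

      return (hostid_read());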
* Optimize vmem_alloc() retry path (Brian Behlendorf, 2015-02-02; 1 file, +11/-1)
  For performance reasons the reworked kmem code maps vmem_alloc() to
  kmalloc_node() for allocations smaller than spl_kmem_alloc_max. This
  allows for more concurrency in the system and less contention on the
  virtual address space. Generally, this is a good thing.

  However, in the case when kmalloc_node() fails it makes little sense
  to retry it using kmalloc_node() again; it will likely fail in exactly
  the same way. A smarter strategy is to abandon this optimization and
  retry using spl_vmalloc(), which is very likely to succeed.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Closes #428
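
  A hedged sketch of the retry strategy (simplified; flag handling in
  the real allocator is more involved):

      void *ptr = kmalloc_node(size, gfp, node);
      if (ptr == NULL) {
              /* Retrying kmalloc_node() would likely fail the same
               * way, so fall back to the vmalloc-based path instead. */
              ptr = spl_vmalloc(size, gfp | __GFP_HIGHMEM, PAGE_KERNEL);
      }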
* Fix GFP_KERNEL allocations flags (Brian Behlendorf, 2015-01-21; 3 files, +4/-4)
  The kmem_vasprintf(), kmem_vsprintf(), kobj_open_file(), and
  vn_openat() functions should all use the kmem_flags_convert() function
  to generate the GFP_* flags. This ensures that they can be safely
  called in any context and the correct flags will be used.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #426
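
  A simplified sketch of what a kmem-to-GFP translation looks like; the
  real kmem_flags_convert() handles more flags than shown here:

      static gfp_t
      kmem_flags_convert(int flags)
      {
              gfp_t lflags;

              if (flags & KM_NOSLEEP)
                      lflags = GFP_ATOMIC | __GFP_NORETRY; /* may not block */
              else
                      lflags = GFP_KERNEL;                 /* may sleep */

              if (flags & KM_PUSHPAGE)
                      lflags |= __GFP_HIGH;   /* may dip into reserves */

              return (lflags);
      }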
* Use __get_free_pages() for emergency objects (Brian Behlendorf, 2015-01-16; 1 file, +12/-10)
  The __get_free_pages() function must be used in place of kmalloc() to
  ensure the __GFP_COMP flag is strictly honored. This is due to
  kmalloc() being layered on the generic Linux slab caches. It wasn't
  until recently that all caches were created using __GFP_COMP. This
  means that it is possible for a kmalloc() which passed the __GFP_COMP
  flag to be returned a non-compound allocation.

  Signed-off-by: Brian Behlendorf <[email protected]>
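
  A sketch of the substitution (illustrative only; order computation
  shown for clarity):

      /* kmalloc(size, gfp | __GFP_COMP) may still return a non-compound
       * allocation on older kernels; allocating the pages directly
       * guarantees the flag is honored. */
      int order = get_order(size);
      unsigned long addr = __get_free_pages(gfp | __GFP_COMP, order);

      /* ... use the allocation, then release it the same way ... */
      free_pages(addr, order);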
* Fix kmem cache deadlock logic (Brian Behlendorf, 2015-01-16; 1 file, +25/-10)
  The kmem cache implementation always adds new slabs by dispatching a
  task to the spl_kmem_cache taskq to perform the allocation. This is
  done because large slabs must be allocated using vmalloc(). It is
  possible these allocations will block on IO because the GFP_NOIO flag
  is not honored. This can result in a deadlock.

  Therefore, a deadlock detection strategy was implemented to deal with
  this case. When it is determined, by timeout, that the spl_kmem_cache
  thread has deadlocked attempting to add a new slab, all callers
  attempting to allocate from the cache fall back to using kmalloc(),
  which does honor all passed flags.

  This logic was correct, but an optimization in the code allowed for a
  deadlock. Because only slabs backed by vmalloc() can deadlock in the
  way described above, an optimization was made to only invoke this
  deadlock detection code for vmalloc()-backed caches. This had the
  advantage of making it easy to distinguish these objects when they
  were freed. But it isn't strictly safe: if all the spl_kmem_cache
  threads end up deadlocked, then we can't grow any of the other caches
  either. This can once again result in a deadlock if memory needs to be
  allocated from one of these other caches to ensure forward progress.

  The fix here is to remove the optimization which limits this fallback
  allocation strategy to vmalloc()-backed caches. Doing this means we
  may need to take the cache lock in the spl_kmem_cache_free() call
  path. But this small cost can be mitigated by ignoring objects with
  virtual addresses.

  For good measure the default number of spl_kmem_cache threads has been
  increased from 1 to 4, and made tunable. This alone wouldn't resolve
  the original issue since it's still possible for all the threads to be
  deadlocked. However, it does help responsiveness by ensuring that a
  single deadlocked spl_kmem_cache thread doesn't block allocations from
  other caches until the timeout is reached.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Refine slab cache sizing (Brian Behlendorf, 2015-01-16; 2 files, +229/-140)
  This change is designed to improve the memory utilization of slabs by
  more carefully setting their size. The way the code currently works is
  problematic for slabs which contain large objects (>1MB). This is due
  to slabs being unconditionally rounded up to a power of two, which may
  result in unused space at the end of the slab.

  The reason the existing code rounds up every slab is that it assumes
  the slab will be backed by the buddy allocator. Since the buddy
  allocator can only perform power-of-two allocations, this is desirable
  because it avoids wasting any space. However, this logic breaks down
  if the slab is backed by vmalloc(), which operates at page-level
  granularity. In that case, the optimal thing to do is calculate the
  minimum required slab size given certain constraints (object size,
  alignment, objects/slab, etc).

  Therefore, this patch reworks the spl_slab_size() function so that it
  sizes KMC_KMEM slabs differently than KMC_VMEM slabs. KMC_KMEM slabs
  are rounded up to the nearest power of two, and KMC_VMEM slabs are
  allowed to be the minimum required size.

  This change also reduces the default number of objects per slab. This
  reduces how much memory a single cache object can pin, which can
  result in significant memory savings for highly fragmented caches.
  But, depending on the workload, it may result in slabs being allocated
  and freed more frequently. In practice, this has been shown to be a
  better default for most workloads.

  Also the maximum slab size has been reduced to 4MB on 32-bit systems.
  Due to the limited virtual address space it's critical that we be as
  frugal as possible. A limit of 4M still lets us reasonably comfortably
  allocate a limited number of 1MB objects.

  Finally, the kmem:slab_small and kmem:slab_large SPLAT tests were
  extended to provide better test coverage of various object sizes and
  alignments. Caches are created with random parameters and their basic
  functionality is verified by allocating several slabs' worth of
  objects.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Reduce kmem cache deadlock threshold (Brian Behlendorf, 2015-01-16; 1 file, +1/-1)
  Reduce the threshold for detecting a kmem cache deadlock by 10x, from
  HZ to HZ/10. The reduced value is still orders of magnitude larger
  than needed to avoid being triggered incorrectly. By reducing it we
  allow the system to resolve the issue more quickly.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Make slab reclaim more aggressive (Brian Behlendorf, 2015-01-16; 1 file, +47/-29)
  Many people have noticed that the kmem cache implementation is slow to
  release its memory. This patch makes the reclaim behavior more
  aggressive by immediately freeing a slab once it is empty. Unused
  objects which are cached in the magazines will still prevent a slab
  from being freed.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Enforce architecture-specific barriers around clear_bit() (Richard Yao, 2015-01-16; 1 file, +20/-3)
  The comment above the Linux 3.16 kernel's clear_bit() states:

      /**
       * clear_bit - Clears a bit in memory
       * @nr: Bit to clear
       * @addr: Address to start counting from
       *
       * clear_bit() is atomic and may not be reordered. However, it does
       * not contain a memory barrier, so if it is used for locking
       * purposes, you should call smp_mb__before_atomic() and/or
       * smp_mb__after_atomic() in order to ensure changes are visible
       * on other processors.
       */

  This comment does not make sense in the context of x86 because x86
  maps these operations to barrier(), which is a compiler barrier.
  However, it does make sense for architectures that reorder around
  atomic instructions. In such situations, a processor is allowed to
  execute the wake_up_bit() before clear_bit() and we have a race. A few
  architectures suffer from this issue: the other processor would wake
  up, see that the bit is still taken, and go back to sleep, while the
  one responsible for waking it up would assume that it did its job and
  continue.

  This patch implements a wrapper that maps
  smp_mb__{before,after}_atomic() to smp_mb__{before,after}_clear_bit()
  on older kernels and changes our code to leverage it in a manner
  consistent with the mainline kernel.

  Signed-off-by: Brian Behlendorf <[email protected]>
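
  A sketch of the compat wrapper and its use; the guard condition and
  EXAMPLE_BIT are illustrative placeholders:

      #ifndef smp_mb__after_atomic
      /* Older kernels only provide the clear_bit-specific names. */
      #define smp_mb__before_atomic() smp_mb__before_clear_bit()
      #define smp_mb__after_atomic()  smp_mb__after_clear_bit()
      #endif

      /* Usage consistent with mainline: order the wakeup after the
       * bit clear is visible on all architectures. */
      clear_bit(EXAMPLE_BIT, &flags);
      smp_mb__after_atomic();
      wake_up_bit(&flags, EXAMPLE_BIT);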
* Add hooks for disabling direct reclaim (Richard Yao, 2015-01-16; 3 files, +29/-2)
  The port of XFS to Linux introduced a thread-specific PF_FSTRANS bit
  that is used to mark contexts which are processing transactions. When
  set, allocations in this context can dip into kernel memory reserves
  to avoid deadlocks during writeback. Linux 3.9 provided the additional
  PF_MEMALLOC_NOIO for disabling __GFP_IO in page allocations, which XFS
  began using in 3.15.

  This patch implements hooks for marking transactions via PF_FSTRANS.
  When an allocation is performed in the context of PF_FSTRANS, any
  KM_SLEEP allocation is transparently converted to a GFP_NOIO
  allocation. Additionally, when using a Linux 3.9 or newer kernel, it
  will set PF_MEMALLOC_NOIO to prevent direct reclaim from entering
  pageout() on any KM_PUSHPAGE or KM_NOSLEEP allocation. This
  effectively allows the spl_vmalloc() helper function to be used safely
  in a thread which is responsible for IO.

  Signed-off-by: Brian Behlendorf <[email protected]>
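
  A hedged sketch of how such a marking hook pairs with the allocator
  (function names follow the SPL's fstrans interface; the body of the
  IO path is elided):

      /* Mark the current thread as being inside a transaction. */
      fstrans_cookie_t cookie = spl_fstrans_mark();

      /* KM_SLEEP is now transparently treated as GFP_NOIO. */
      void *buf = kmem_alloc(size, KM_SLEEP);

      /* ... filesystem IO path ... */

      kmem_free(buf, size);
      spl_fstrans_unmark(cookie);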
* Refactor generic memory allocation interfaces (Brian Behlendorf, 2015-01-16; 6 files, +334/-516)
  This patch achieves the following goals:

  1. It replaces the preprocessor kmem-flag-to-gfp-flag mapping with
     proper translation logic. This eliminates the potential for
     surprises that were previously possible when kmem flags were mapped
     to gfp flags.

  2. It maps vmem_alloc() allocations to kmem_alloc() for allocations
     sized less than or equal to the newly-added spl_kmem_alloc_max
     parameter. This ensures that small allocations will not contend on
     a single global lock, large allocations can still be handled, and
     potentially limited virtual address space will not be squandered.
     This behavior is entirely different than under Illumos due to the
     different memory management strategies employed by the respective
     kernels. However, it functionally provides the semantics required.

  3. The --disable-debug-kmem, --enable-debug-kmem (default), and
     --enable-debug-kmem-tracking allocators have been unified into a
     single spl_kmem_alloc_impl() allocation function. This was done to
     simplify the code and make it more maintainable.

  4. It improves portability by exposing an implementation of the memory
     allocation functions that can be safely used in the same way they
     are used on Illumos. Specifically, callers may safely use KM_SLEEP
     in contexts which perform filesystem IO. This allows us to
     eliminate an entire class of Linux-specific changes which were
     previously required to avoid deadlocking the system.

  This change will be largely transparent to existing callers but there
  are a few caveats:

  1. Because the headers were refactored and extraneous includes
     removed, callers may find they need to explicitly add additional
     #includes. In particular, kmem_cache.h must now be explicitly
     included to access the SPL's kmem cache implementation. This
     behavior is different from Illumos but it was done to avoid always
     masking the Linux slab functions when kmem.h is included.

  2. Callers, like Lustre, which made assumptions about the definitions
     of KM_SLEEP, KM_NOSLEEP, and KM_PUSHPAGE will need to be updated.
     Other callers, such as ZFS, which did not will not require changes.

  3. KM_PUSHPAGE is no longer overloaded to imply GFP_NOIO. It retains
     its original meaning of allowing allocations to access reserved
     memory. KM_PUSHPAGE callers can be converted back to KM_SLEEP.

  4. The KM_NODEBUG flag has been retired and the default warning
     threshold increased to 32k.

  5. The kmem_virt() function has been removed. Callers which need to
     distinguish between a physical and virtual address should use
     is_vmalloc_addr().

  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix kmem cstyle issues (Brian Behlendorf, 2015-01-16; 3 files, +194/-167)
  Address all cstyle issues in the kmem, vmem, and kmem_cache source and
  headers. This was done to make it easier to review subsequent changes
  which will rework the kmem/vmem implementation.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Refactor existing code (Brian Behlendorf, 2015-01-16; 16 files, +2066/-1794)
  This change introduces no functional changes to the memory management
  interfaces. It only restructures the existing code by separating the
  kmem, vmem, and kmem cache implementations into separate source and
  header files.

  Splitting this functionality into separate files required the addition
  of spl_vmem_{init,fini}() and spl_kmem_cache_{init,fini}() functions.
  Additionally, several minor changes to the #includes were required to
  accommodate the removal of extraneous headers from kmem.h. But again,
  while large, this patch introduces no functional changes.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix debug object on stack warning (Brian Behlendorf, 2015-01-07; 1 file, +57/-34)
  When running the SPLAT tests on a kernel with CONFIG_DEBUG_OBJECTS=y
  enabled the following warning is generated:

      ODEBUG: object is on stack, but not annotated
      WARNING: at lib/debugobjects.c:300 __debug_object_init+0x221/0x480()

  This is caused by the test cases placing a debug object on the stack
  rather than the heap. This isn't harmful since they are small objects,
  but to make CONFIG_DEBUG_OBJECTS=y happy the objects have been
  relocated to the heap. This impacted taskq tests 1, 3, and 7.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #424
* Remove compat includes from sys/types.h (Ned Bass, 2014-11-19; 13 files, +27/-10)
  Don't include the compatibility code in linux/*_compat.h in the public
  header sys/types.h. This causes problems when an external code base
  includes the ZFS headers and has its own conflicting compatibility
  code. Lustre, in particular, defined SHRINK_STOP for compatibility
  with pre-3.12 kernels in a way that conflicted with the SPL's
  definition. Because the Lustre ZFS OSD includes ZFS headers, it failed
  to build due to a '"SHRINK_STOP" redefined' compiler warning. To avoid
  such conflicts, only include the compat headers from .c files or
  private headers.

  Also, for consistency, include sys/*.h before linux/*.h, then sort by
  header name.

  Signed-off-by: Ned Bass <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #411
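
  An example of the resulting include ordering in a private .c file (the
  particular headers chosen here are illustrative):

      /* sys/*.h first, sorted by name ... */
      #include <sys/kmem.h>
      #include <sys/types.h>
      /* ... then the linux/*.h compat headers, also sorted. */
      #include <linux/file_compat.h>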
* Retire legacy debugging infrastructure (Brian Behlendorf, 2014-11-19; 17 files, +328/-2203)
  When the SPL was originally written, Linux tracepoints were still in
  their infancy. Therefore, an entire debugging subsystem was added to
  facilitate tracing, and it served us well for many years. Now that
  Linux tracepoints have matured, they provide all the functionality of
  the previous tracing subsystem. Rather than maintain parallel
  functionality, it makes sense to fully adopt tracepoints. Therefore,
  this patch retires the legacy debugging infrastructure.

  See zfsonlinux/zfs@bc9f413 for the tracepoint changes.

  Signed-off-by: Ned Bass <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #408