path: root/module/splat
* Don't hold mutex until release cv in cv_wait (Chunwei Chen, 2016-01-12, 1 file, -0/+30)
  If a thread is holding the mutex when doing cv_destroy, it might end up
  waiting on a thread in cv_wait. The waiter would wake up trying to acquire
  the same mutex and cause a deadlock. We solve this by moving the
  mutex_enter to the bottom of cv_wait, so that the waiter will release the
  cv first, allowing cv_destroy to succeed and have a chance to free the
  mutex. This creates a race condition on the cv_mutex, so we use xchg to
  set and check it to ensure we won't be harmed by the race. As a result,
  the cv_mutex debugging becomes best-effort. The change also reveals a
  race, which was unlikely before, where we call mutex_destroy while test
  threads are still holding the mutex. We use kthread_stop to make sure the
  threads have exited before mutex_destroy.
  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue zfsonlinux/zfs#4166
  Issue zfsonlinux/zfs#4106
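  For illustration, a minimal sketch of the reordered wait path. The field
  names (cv_event, cv_waiters, cv_destroy) follow the SPL's internal condvar
  layout but are reconstructed here as assumptions, not verbatim code:

      static void
      cv_wait_common(kcondvar_t *cvp, kmutex_t *mp)
      {
              DEFINE_WAIT(wait);

              prepare_to_wait_exclusive(&cvp->cv_event, &wait,
                  TASK_UNINTERRUPTIBLE);
              atomic_inc(&cvp->cv_waiters);

              mutex_exit(mp);
              schedule();

              /* Drop our reference to the cv first, so a concurrent
               * cv_destroy() may complete and its caller may free the
               * mutex... */
              if (atomic_dec_and_test(&cvp->cv_waiters))
                      wake_up(&cvp->cv_destroy);
              finish_wait(&cvp->cv_event, &wait);

              /* ...and only then reacquire the caller's mutex. */
              mutex_enter(mp);
      }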
* Fix do_div() types in condvar:timeout (Brian Behlendorf, 2015-12-22, 1 file, -5/+6)
  The do_div() macro expects unsigned types and this is detected by the
  powerpc implementation of do_div().
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #516
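  As a reminder of the contract, a small usage sketch (the variable names
  are illustrative): do_div() takes an unsigned 64-bit lvalue, replaces it
  with the quotient in place, and returns the 32-bit remainder.

      #include <asm/div64.h>

      u64 ticks = nsec;                       /* must be unsigned 64-bit */
      u32 rem = do_div(ticks, NSEC_PER_SEC);  /* ticks becomes quotient */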
* Limit maximum object size in kmem tests (Brian Behlendorf, 2015-11-16, 1 file, -1/+4)
  Limit the maximum object size to 1/128 of total system memory for the kmem
  cache tests. Large values can result in out of memory errors for systems
  with less than 512M of memory. Additionally, use the known number of
  objects per slab when calculating the number of objects to use for a test.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Linux 4.2 compat: misc_deregister() (Brian Behlendorf, 2015-09-01, 1 file, -5/+1)
  The misc_deregister() function was changed to a void return type. Rather
  than adding compatibility code to detect this change, simply ignore the
  return code on all kernels. It was only used to log an informational error
  message of no real value.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Linux 4.2 compat: vfs_rename() (Brian Behlendorf, 2015-08-19, 2 files, -0/+7)
  Attempting to perform a vfs_rename() on Linux 4.2 and newer kernels
  results in an EACCES error. Rather than attempting to add and maintain
  more ugly compatibility code it's best to just retire this interface. As a
  first step the SPLAT test is disabled for Linux 4.2 and newer kernels.

      vn_rename: Failed vn_rename /tmp/vn.tmp.1 -> /tmp/vn.tmp.2 (13)

  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#3653
* Add defclsyspri macro (Brian Behlendorf, 2015-07-23, 6 files, -20/+20)
  Add a new defclsyspri macro which can be used to request the default Linux
  scheduler priority. Neither minclsyspri nor maxclsyspri maps to the
  default Linux kernel thread priority. This makes it awkward to create
  taskqs which run with the same priority as the rest of the kernel threads
  on the system, which can lead to performance issues.

  All SPL callers which previously used minclsyspri or maxclsyspri have been
  changed to use defclsyspri. The vast majority of callers were part of the
  test suite which won't have an external impact. In the few places where it
  could impact performance the change was from maxclsyspri to defclsyspri.
  This makes it more likely the process will be scheduled, which may help
  performance.

  To facilitate further performance analysis the spl_taskq_thread_priority
  module option has been added. When disabled (0) all newly created kernel
  threads will use the default kernel thread priority. When enabled (1) the
  specified taskq priority will be used. By default this value is enabled (1).
  Signed-off-by: Brian Behlendorf <[email protected]>
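  For illustration, how a caller might request the default priority (the
  taskq name and thread count here are hypothetical):

      taskq_t *tq = taskq_create("example_taskq", 4, defclsyspri,
          4, INT_MAX, TASKQ_PREPOPULATE);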
* Support parallel build trees (VPATH builds) (Turbo Fredriksson, 2015-07-17, 1 file, -17/+20)
  Build products from an out of tree build should be written relative to the
  build directory. Sources should be referred to by their locations in the
  source directory. This is accomplished by adding the 'src' and 'obj'
  variables for the module Makefile.am, using relative paths to reference
  source files, and by setting VPATH when source files are not co-located
  with the Makefile. This enables the following:

      $ mkdir build
      $ cd build
      $ ../configure
      $ make -s

  This change also has the advantage of resolving the following warning
  which is generated by modern versions of automake.

      Makefile.am:00: warning: source file 'xxx' is in a subdirectory,
      Makefile.am:00: but option 'subdir-objects' is disabled

  Signed-off-by: Turbo Fredriksson <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#1082
* Add TASKQ_DYNAMIC feature (Brian Behlendorf, 2015-06-24, 1 file, -57/+106)
  Setting the TASKQ_DYNAMIC flag will create a taskq with dynamic semantics.
  Initially only a single worker thread will be created to service tasks
  dispatched to the queue. As additional threads are needed they will be
  dynamically spawned up to the max number specified by 'nthreads'. When the
  threads are no longer needed, because the taskq is empty, they will
  automatically terminate.

  Due to the low cost of creating and destroying threads under Linux, by
  default new threads are spawned and terminated aggressively. There are two
  module options which can be tuned to adjust this behavior if needed.

  * spl_taskq_thread_sequential - The number of sequential tasks, without
    interruption, which need to be handled by a worker thread before a new
    worker thread is spawned. Default 4.
  * spl_taskq_thread_dynamic - Provides the ability to completely disable
    the use of dynamic taskqs on the system. This is provided for the
    purposes of debugging and troubleshooting. Default 1 (enabled).

  This behavior is fundamentally consistent with the dynamic taskq
  implementation found in both illumos and FreeBSD.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #458
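  A hypothetical dynamic taskq, sketched to show the flag in context (the
  name and limits are illustrative): one worker starts immediately and up
  to 16 may be spawned under load.

      taskq_t *tq = taskq_create("dyn_taskq", 16, defclsyspri,
          1, INT_MAX, TASKQ_DYNAMIC);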
* Make taskq_wait() block until the queue is empty (Chris Dunlop, 2015-06-09, 1 file, -10/+10)
  Under Illumos taskq_wait() returns when there are no more tasks in the
  queue. This behavior differs from ZoL and FreeBSD where taskq_wait()
  returns when all the tasks in the queue at the beginning of the
  taskq_wait() call are complete; new tasks added whilst taskq_wait() is
  running will be ignored.

  This difference in semantics makes it possible that new subtle issues
  could be introduced when porting changes from Illumos. To avoid that
  possibility the taskq_wait() function is being updated such that it blocks
  until the queue is empty. The previous behavior remains available through
  the taskq_wait_outstanding() interface. Note that this function was
  previously called taskq_wait_all() but has been renamed to avoid
  confusion.
  Signed-off-by: Chris Dunlop <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #455
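  Side by side, a usage sketch of the two interfaces after this change
  (assuming taskq_wait_outstanding() takes a task id, where 0 means
  everything outstanding at the time of the call):

      taskq_wait(tq);                 /* block until the queue is empty */
      taskq_wait_outstanding(tq, 0);  /* block only for tasks already
                                       * dispatched when this is called */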
* Reduce splat_taskq_test2_impl() stack frame size (Brian Behlendorf, 2015-03-03, 1 file, -20/+31)
  Slightly increasing the size of a kmutex_t has caused us to exceed the
  stack frame warning size in splat_taskq_test2_impl(). To address this the
  tq_args have been moved to the heap.

      cc1: warnings being treated as errors
      spl-0.6.3/module/splat/splat-taskq.c:358: error: the frame size of
      1040 bytes is larger than 1024 bytes

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #435
* Add MUTEX_FSTRANS mutex type (Brian Behlendorf, 2015-03-03, 1 file, -1/+1)
  There are regions in the ZFS code where it is desirable to set PF_FSTRANS
  while a specific mutex is held. The ZFS code could be updated to set/clear
  this flag in all the correct places, but this is undesirable for a few
  reasons.

  1) It would require changes to a significant amount of the ZFS code. This
     would complicate applying patches from upstream.
  2) It would be easy to accidentally miss a critical region in the initial
     patch or to have a future change introduce a new one.

  Both of these concerns can be addressed by adding a new mutex type which
  is responsible for managing PF_FSTRANS, support for which was added to the
  SPL in commit 9099312 - Merge branch 'kmem-rework'.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #435
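  A minimal sketch of the intended usage, assuming the SPL's four-argument
  mutex_init() wrapper accepts the new type:

      kmutex_t lock;

      /* Holders automatically run with PF_FSTRANS set, so memory
       * allocations made under the lock cannot recurse into the fs. */
      mutex_init(&lock, NULL, MUTEX_FSTRANS, NULL);

      mutex_enter(&lock);   /* sets PF_FSTRANS for the holder */
      /* ... allocate safely here ... */
      mutex_exit(&lock);    /* restores the previous task flags */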
* Retire spl_module_init()/spl_module_fini() (Brian Behlendorf, 2015-02-27, 1 file, -7/+4)
  In the original implementation of the SPL, wrappers were provided for
  module initialization and cleanup. This was done to abstract away any
  compatibility code which might be needed for the SPL. As it turned out the
  only significant compatibility issue was that the default pwd during
  module load differed under Illumos and Linux. Since this is such a minor
  thing and the wrappers complicate the code, they are being retired.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue zfsonlinux/zfs#2985
* Refine slab cache sizing (Brian Behlendorf, 2015-01-16, 1 file, -105/+172)
  This change is designed to improve the memory utilization of slabs by more
  carefully setting their size. The way the code currently works is
  problematic for slabs which contain large objects (>1MB). This is due to
  slabs being unconditionally rounded up to a power of two, which may result
  in unused space at the end of the slab.

  The reason the existing code rounds up every slab is because it assumes it
  will be backed by the buddy allocator. Since the buddy allocator can only
  perform power-of-two allocations this is desirable because it avoids
  wasting any space. However, this logic breaks down if the slab is backed
  by vmalloc(), which operates at a page level granularity. In this case,
  the optimal thing to do is calculate the minimum required slab size given
  certain constraints (object size, alignment, objects/slab, etc).

  Therefore, this patch reworks the spl_slab_size() function so that it
  sizes KMC_KMEM slabs differently than KMC_VMEM slabs. KMC_KMEM slabs are
  rounded up to the nearest power of two, and KMC_VMEM slabs are allowed to
  be the minimum required size.

  This change also reduces the default number of objects per slab. This
  reduces how much memory a single cache object can pin, which can result in
  significant memory savings for highly fragmented caches. But depending on
  the workload it may result in slabs being allocated and freed more
  frequently. In practice, this has been shown to be a better default for
  most workloads.

  Also the maximum slab size has been reduced to 4MB on 32-bit systems. Due
  to the limited virtual address space it's critical that we be as frugal as
  possible. A limit of 4M still lets us reasonably comfortably allocate a
  limited number of 1MB objects.

  Finally, the kmem:slab_small and kmem:slab_large SPLAT tests were extended
  to provide better test coverage of various object sizes and alignments.
  Caches are created with random parameters and their basic functionality is
  verified by allocating several slabs worth of objects.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Refactor generic memory allocation interfaces (Brian Behlendorf, 2015-01-16, 1 file, -6/+14)
  This patch achieves the following goals:

  1. It replaces the preprocessor kmem flag to gfp flag mapping with proper
     translation logic. This eliminates the potential for surprises that
     were previously possible where kmem flags were mapped to gfp flags.
  2. It maps vmem_alloc() allocations to kmem_alloc() for allocations sized
     less than or equal to the newly-added spl_kmem_alloc_max parameter.
     This ensures that small allocations will not contend on a single global
     lock, large allocations can still be handled, and potentially limited
     virtual address space will not be squandered. This behavior is entirely
     different than under Illumos due to different memory management
     strategies employed by the respective kernels. However, this
     functionally provides the semantics required.
  3. The --disable-debug-kmem, --enable-debug-kmem (default), and
     --enable-debug-kmem-tracking allocators have been unified into a single
     spl_kmem_alloc_impl() allocation function. This was done to simplify
     the code and make it more maintainable.
  4. Improve portability by exposing an implementation of the memory
     allocation functions that can be safely used in the same way they are
     used on Illumos. Specifically, callers may safely use KM_SLEEP in
     contexts which perform filesystem IO. This allows us to eliminate an
     entire class of Linux specific changes which were previously required
     to avoid deadlocking the system.

  This change will be largely transparent to existing callers but there are
  a few caveats:

  1. Because the headers were refactored and extraneous includes removed,
     callers may find they need to explicitly add additional #includes. In
     particular, kmem_cache.h must now be explicitly included to access the
     SPL's kmem cache implementation. This behavior is different from
     Illumos but it was done to avoid always masking the Linux slab
     functions when kmem.h is included.
  2. Callers, like Lustre, which made assumptions about the definitions of
     KM_SLEEP, KM_NOSLEEP, and KM_PUSHPAGE will need to be updated. Other
     callers such as ZFS which did not will not require changes.
  3. KM_PUSHPAGE is no longer overloaded to imply GFP_NOIO. It retains its
     original meaning of allowing allocations to access reserved memory.
     KM_PUSHPAGE callers can be converted back to KM_SLEEP.
  4. The KM_NODEBUG flag has been retired and the default warning threshold
     increased to 32k.
  5. The kmem_virt() function has been removed. For callers which need to
     distinguish between a physical and virtual address use
     is_vmalloc_addr().
  Signed-off-by: Brian Behlendorf <[email protected]>
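  A short sketch of caveat 5 and the Illumos-style semantics described above
  (the size variable is illustrative):

      #include <sys/kmem.h>

      /* KM_SLEEP is now safe even in contexts which perform fs IO. */
      void *buf = kmem_alloc(size, KM_SLEEP);

      /* is_vmalloc_addr() replaces the retired kmem_virt(). */
      if (is_vmalloc_addr(buf)) {
              /* virtually mapped allocation */
      }

      kmem_free(buf, size);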
* Refactor existing code (Brian Behlendorf, 2015-01-16, 5 files, -1/+9)
  This change introduces no functional changes to the memory management
  interfaces. It only restructures the existing code by separating the kmem,
  vmem, and kmem cache implementations into separate source and header
  files. Splitting this functionality into separate files required the
  addition of spl_vmem_{init,fini}() and spl_kmem_cache_{init,fini}()
  functions. Additionally, several minor changes to the #include's were
  required to accommodate the removal of extraneous headers from kmem.h. But
  again, while large, this patch introduces no functional changes.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix debug object on stack warning (Brian Behlendorf, 2015-01-07, 1 file, -34/+57)
  When running the SPLAT tests on a kernel with CONFIG_DEBUG_OBJECTS=y
  enabled the following warning is generated.

      ODEBUG: object is on stack, but not annotated
      WARNING: at lib/debugobjects.c:300 __debug_object_init+0x221/0x480()

  This is caused by the test cases placing a debug object on the stack
  rather than the heap. This isn't harmful since they are small objects but
  to make CONFIG_DEBUG_OBJECTS=y happy the objects have been relocated to
  the heap. This impacted taskq tests 1, 3, and 7.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #424
* Remove compat includes from sys/types.h (Ned Bass, 2014-11-19, 9 files, -10/+22)
  Don't include the compatibility code in linux/*_compat.h in the public
  header sys/types.h. This causes problems when an external code base
  includes the ZFS headers and has its own conflicting compatibility code.
  Lustre, in particular, defined SHRINK_STOP for compatibility with pre-3.12
  kernels in a way that conflicted with the SPL's definition. Because the
  Lustre ZFS OSD includes ZFS headers, it fails to build due to a
  '"SHRINK_STOP" redefined' compiler warning. To avoid such conflicts only
  include the compat headers from .c files or private headers. Also, for
  consistency, include sys/*.h before linux/*.h then sort by header name.
  Signed-off-by: Ned Bass <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #411
* Retire legacy debugging infrastructure (Brian Behlendorf, 2014-11-19, 2 files, -2/+1)
  When the SPL was originally written Linux tracepoints were still in their
  infancy. Therefore, an entire debugging subsystem was added to facilitate
  tracing, which served us well for many years. Now that Linux tracepoints
  have matured they provide all the functionality of the previous tracing
  subsystem. Rather than maintain parallel functionality it makes sense to
  fully adopt tracepoints. Therefore, this patch retires the legacy
  debugging infrastructure. See zfsonlinux/zfs@bc9f413 for the tracepoint
  changes.
  Signed-off-by: Ned Bass <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #408
* Lower minimum objects/slab threshold (Brian Behlendorf, 2014-11-05, 1 file, -0/+9)
  As long as we can fit a minimum of one object per slab there's no reason
  to prevent the creation of the cache. This effectively pushes the maximum
  object size up to 32MB. The splat cache tests were extended accordingly to
  verify this functionality.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Linux 3.12 compat: shrinker semantics (Tim Chase, 2014-10-28, 1 file, -6/+16)
  The new shrinker API as of Linux 3.12 modifies "struct shrinker" by
  replacing the @shrink callback with the pair of @count_objects and
  @scan_objects. It also requires @scan_objects to return the number of
  objects actually freed, whereas the previous @shrink callback returned the
  number of remaining freeable objects. This patch adds support for the new
  @scan_objects return value semantics and updates the splat shrinker test
  case appropriately.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #403
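  The split callback shape on 3.12 and newer kernels looks like this (the
  callback bodies and counters are placeholders):

      static unsigned long
      splat_count_objects(struct shrinker *s, struct shrink_control *sc)
      {
              return (freeable_count);    /* how many could be freed */
      }

      static unsigned long
      splat_scan_objects(struct shrinker *s, struct shrink_control *sc)
      {
              /* Free up to sc->nr_to_scan objects; report what was
               * actually freed, or SHRINK_STOP to abort scanning. */
              return (free_some(sc->nr_to_scan));
      }

      static struct shrinker splat_shrinker = {
              .count_objects = splat_count_objects,
              .scan_objects  = splat_scan_objects,
              .seeks         = DEFAULT_SEEKS,
      };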
* Remove shrink_{i,d}node_cache() wrappers (Brian Behlendorf, 2014-10-17, 1 file, -69/+10)
  This is optional functionality which may or may not be useful to ZFS when
  using older kernels. It is never a hard requirement. Therefore this
  functionality is being removed from the SPL and a simpler slimmed down
  version will be added to ZFS.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Remove global memory variables (Brian Behlendorf, 2014-10-17, 1 file, -3/+2)
  Platforms such as Illumos and FreeBSD have historically provided global
  variables which summarize the memory state of a system. Linux on the other
  hand doesn't expose any of this information to kernel modules and uses
  entirely different mechanisms for memory management.

  In order to simplify the original ZFS port to Linux these global variables
  were emulated by the SPL for the benefit of ZFS. As ZoL has matured over
  the years it has moved steadily away from these interfaces and now no
  longer depends on them at all.

  Therefore, this patch completely removes the global variables availrmem,
  minfree, desfree, lotsfree, needfree, swapfs_minfree, and swapfs_reserve.
  This greatly simplifies the memory management code and eliminates a common
  area of confusion.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Remove get_vmalloc_info() wrapper (Brian Behlendorf, 2014-10-17, 1 file, -85/+0)
  The get_vmalloc_info() function was used to back the vmem_size() function.
  This was always problematic and resulted in brittle code because the
  kernel never provided a clean interface for modules. However, it turns out
  that the only caller of this function in ZFS uses it to determine the
  total virtual address space size. This can be determined easily without
  get_vmalloc_info(), so vmem_size() has been updated to take this approach,
  which allows us to shed the get_vmalloc_info() dependency.
  Signed-off-by: Brian Behlendorf <[email protected]>
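  A sketch of the simpler approach: the total virtual address space is just
  the span of the kernel's vmalloc region, available from standard macros
  (the exact vmem_size() shape shown here is an assumption):

      /* Total vmalloc address space, no get_vmalloc_info() needed. */
      size_t
      vmem_size(vmem_t *vmp, int typemask)
      {
              return ((size_t)(VMALLOC_END - VMALLOC_START));
      }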
* Update code to use misc_register()/misc_deregister() (Brian Behlendorf, 2014-10-17, 2 files, -70/+21)
  When the SPL was originally written it was designed to use the
  device_create() and device_destroy() functions. Unfortunately, these
  functions changed considerably over the years making them difficult to
  rely on. As it turns out a better choice would have been to use the
  misc_register()/misc_deregister() functions. This interface for
  registering character devices has remained stable, is simple, and provides
  everything we need. Therefore the code has been reworked to use this
  interface. The higher level ZFS code has always depended on these same
  interfaces so this is also a step towards minimizing our kernel
  dependencies.
  Signed-off-by: Brian Behlendorf <[email protected]>
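  Registering a misc character device takes just a static descriptor and
  one call (the fops structure here is assumed to exist elsewhere in the
  module):

      #include <linux/miscdevice.h>

      static struct miscdevice splat_misc = {
              .minor = MISC_DYNAMIC_MINOR,
              .name  = "splat",
              .fops  = &splat_fops,
      };

      /* In module init; returns 0 on success. */
      error = misc_register(&splat_misc);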
* Update SPLAT to use kmutex_t for portability (Brian Behlendorf, 2014-10-17, 3 files, -21/+26)
  For consistency throughout the code, update the SPLAT infrastructure to
  use the wrapped mutex interfaces.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Make license compatibility checks consistent (Brian Behlendorf, 2014-10-17, 1 file, -2/+2)
  Apply the license specified in the META file to ensure the compatibility
  checks are all performed consistently.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix bug in SPLAT taskq:front (Brian Behlendorf, 2014-10-03, 1 file, -1/+2)
  While running SPLAT on a kernel with CONFIG_DEBUG_ATOMIC_SLEEP enabled,
  the taskq:front test was flagged as one which might sleep in an unsafe
  context. Specifically, the splat_vprint() function, which internally takes
  a mutex, was being called under a spin lock. Moving the log function
  outside the spin lock cleanly solves this issue.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Add KMC_SLAB cache type (Brian Behlendorf, 2014-05-22, 1 file, -13/+20)
  For small objects the Linux slab allocator has several advantages over its
  counterpart in the SPL. These include:

  1) It is more memory-efficient and packs objects more tightly.
  2) It is continually tuned to maximize performance.

  Therefore it makes sense to layer the SPL's slab allocator on top of the
  Linux slab allocator. This allows us to leverage the advantages above
  while preserving the Illumos semantics we depend on. However, there are
  some things we need to be careful of:

  1) The Linux slab allocator was never designed to work well with large
     objects. Because the SPL slab must still handle this use case a cutoff
     limit was added to transition from Linux slab backed objects to kmem or
     vmem backed slabs.

     spl_kmem_cache_slab_limit - Objects less than or equal to this size in
     bytes will be backed by the Linux slab. By default this value is zero
     which disables the Linux slab functionality. Reasonable values for this
     cutoff limit are in the range of 4096-16386 bytes.

     spl_kmem_cache_kmem_limit - Objects less than or equal to this size in
     bytes will be backed by a kmem slab. Objects over this size will be
     vmem backed instead. This value defaults to 1/8 of a page, or 512 bytes
     on an x86_64 architecture.

  2) Be aware that using the Linux slab may inadvertently introduce new
     deadlocks. Care has been taken previously to ensure that all
     allocations which occur in the write path use GFP_NOIO. However, there
     may be internal allocations performed in the Linux slab which do not
     honor these flags. If this is the case a deadlock may occur.

  The path forward is definitely to start relying on the Linux slab. But for
  that to happen we need to start building confidence that there aren't any
  unexpected surprises lurking for us. And ideally we need to move
  completely away from using the SPL's slab for large memory allocations.
  This patch is a first step.

  NOTES:
  1) The KMC_NOMAGAZINE flag was leveraged to support the Linux slab backed
     caches but it is not supported for kmem/vmem backed caches.
  2) Regardless of the spl_kmem_cache_*_limit settings a cache may be
     explicitly set to a given type by passing the KMC_KMEM, KMC_VMEM, or
     KMC_SLAB flags during cache creation (see the sketch after this entry).
  3) The constructors, destructors, and reclaim callbacks are all functional
     and will be called regardless of the cache type.
  4) KMC_SLAB caches will not appear in /proc/spl/kmem/slab due to the
     issues involved in presenting correct object accounting. Instead they
     will appear in /proc/slabinfo under the same names.
  5) Several kmem SPLAT tests needed to be fixed because they relied
     incorrectly on internal kmem slab accounting. With the updated test
     cases all the SPLAT tests pass as expected.
  6) An autoconf test was added to ensure that the __GFP_COMP flag was
     correctly added to the default flags used when allocating a slab. This
     is required to ensure all pages in higher order slabs are properly
     refcounted, see ae16ed9.
  7) When using the SLUB allocator there is no need to attempt to set the
     __GFP_COMP flag. This has been the default behavior for the SLUB since
     Linux 2.6.25.
  8) When using the SLUB it may be desirable to set the slub_nomerge kernel
     parameter to prevent caches from being merged.
  Original-patch-by: DHE <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Prakash Surya <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: DHE <[email protected]>
  Signed-off-by: Chunwei Chen <[email protected]>
  Closes #356
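  Per NOTES item 2, a cache can be pinned to the Linux slab explicitly. A
  sketch, assuming the SPL's nine-argument kmem_cache_create() wrapper (the
  cache name, object type, and callbacks are hypothetical):

      kmem_cache_t *cache = kmem_cache_create("example_cache",
          sizeof (example_t), 0, example_ctor, example_dtor,
          NULL, NULL, NULL, KMC_SLAB);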
* Add ddi_time_after and friends (Chunwei Chen, 2014-04-14, 1 file, -5/+5)
  When comparing times gotten from ddi_get_lbolt, we have to take into
  account the wraparound of jiffies. Therefore, we cannot use 't1 < t2'.
  Instead we should use 't1 - t2 < 0'. This patch adds ddi_time_after and
  friends to address this issue. They have strict type restrictions, clock_t
  for the vanilla versions and int64_t for the 64-bit versions, to prevent
  type conversions from screwing things up.
  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #335
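  The idea in miniature (a sketch of the wrap-safe comparison, not the
  verbatim SPL definition):

      /* True when 'a' is later than 'b', even across a jiffies wrap. */
      static inline int
      ddi_time_after(clock_t a, clock_t b)
      {
              return ((a - b) > 0);
      }

      /* Instead of 'if (deadline > ddi_get_lbolt())', write: */
      if (ddi_time_after(deadline, ddi_get_lbolt())) {
              /* deadline has not yet passed */
      }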
* Support post-3.13 kthread_create() semantics. (Tim Chase, 2014-04-08, 2 files, -6/+6)
  Provide spl_kthread_create() as a wrapper to the kernel's kthread_create()
  to provide pre-3.13 semantics. Re-try if the call is interrupted or if it
  would have returned -ENOMEM. Otherwise return NULL.
  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #339
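  A condensed sketch of the retry loop described above (the varargs name
  formatting of the real wrapper is elided for brevity):

      struct task_struct *
      spl_kthread_create(int (*func)(void *), void *data, const char *name)
      {
              struct task_struct *tsk;

              while (1) {
                      tsk = kthread_create(func, data, "%s", name);
                      if (!IS_ERR(tsk))
                              return (tsk);           /* success */

                      if (signal_pending(current)) {
                              /* Interrupted: clear the signal, retry. */
                              clear_thread_flag(TIF_SIGPENDING);
                              continue;
                      }
                      if (PTR_ERR(tsk) != -ENOMEM)
                              return (NULL);          /* hard failure */
                      /* -ENOMEM: retry */
              }
      }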
* splat cred:groupmember: Fix false positives (Brian Behlendorf, 2014-04-08, 1 file, -23/+71)
  Due to certain assumptions made in the cred:groupmember test it could
  result in false positives when run on specific distributions. This was
  solely a bug in the test case and not in the groupmember() function which
  the test case was validating.

  To prevent future false positives the test case has been rewritten to be
  both more rigorous and to make fewer assumptions about the system. Minor
  style cleanup was done to the cr_groups_search() and groupmember()
  functions.
  Signed-off-by: Brian Behlendorf <[email protected]>
* splat kmem:slab_reclaim: Test cleanup (Brian Behlendorf, 2014-04-08, 1 file, -4/+5)
  By setting __GFP_NORETRY the kernel memory reclaim logic was allowed to
  abort early and dump a failed allocation stack to the console. Since this
  was done in a tight loop to fill memory, it could result in a large number
  of stacks being dumped to the console. This in turn slowed down the test
  sufficiently so it exceeded the time limit and failed.

  To resolve this issue the __GFP_NORETRY flag is being removed. This is how
  it should have been called originally to ensure we're simulating the
  behavior of most callers which will use the GFP_KERNEL flag.

  In addition, the reclaim granularity of 1000 objects was far too coarse
  for this to be a realistic test. For kmem:slab_reclaim there might only be
  a few thousand objects total in the cache. Therefore, the
  SPLAT_KMEM_OBJ_RECLAIM constant for these tests was lowered. This will
  cause the reclaim callback to run more frequently, which makes for a
  better test case.

  The frequency of the cache reaping in kmem:slab_reap was increased to
  accommodate the reduced number of objects released during the reclaim.

  These changes only impact the test cases and were done to remove false
  positives caused by the test case itself.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Add module versioning (Brian Behlendorf, 2013-12-06, 1 file, -0/+1)
  Use the standard Linux MODULE_VERSION macro to expose the installed spl
  and splat module versions. This will also automatically add a checksum of
  the .c files and headers in "srcversion". See:

      /sys/module/spl/version
      /sys/module/spl/srcversion
      /sys/module/splat/version
      /sys/module/splat/srcversion

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes zfsonlinux/zfs#1923
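  Wiring this up is a one-liner per module; the version string shown here is
  illustrative only:

      #include <linux/module.h>

      MODULE_VERSION("0.6.2");  /* exposed at /sys/module/<name>/version */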
* Replace current_kernel_time() with getnstimeofday() (Richard Yao, 2013-10-09, 1 file, -5/+6)
  current_kernel_time() is used by the SPLAT, but it is not meant for
  performance measurement. We modify the SPLAT to use getnstimeofday(),
  which is equivalent to the gethrestime() function on Solaris.
  Additionally, we update gethrestime() to invoke getnstimeofday().
  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #279
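  For reference, the replacement call fills a timespec with nanosecond
  resolution (a minimal usage sketch):

      struct timespec ts;

      getnstimeofday(&ts);  /* ts.tv_sec / ts.tv_nsec, like gethrestime() */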
* Fix KMC_OFFSLAB type caches (Brian Behlendorf, 2013-07-30, 1 file, -2/+43)
  Because spl_slab_size() was always returning -ENOSPC for caches of type
  KMC_OFFSLAB, the cache could never be created. Additionally the slab size
  is rounded up to a page, which is what kv_alloc() expects. The kv_alloc()
  code will minimally allocate a page; in the KMC_OFFSLAB case this could be
  reduced.

  The basic regression tests kmem:slab_small, kmem:slab_large, and
  kmem:slab_align were updated to test KMC_OFFSLAB.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ying Zhu <[email protected]>
  Closes #266
* Linux 3.10 compat: add missing include of linux/slab.h (Yuxuan Shui, 2013-07-08, 3 files, -0/+3)
  Linux kernel commit torvalds/linux@0d01ff2 changes some includes we were
  depending on through linux/proc_fs.h.
  Signed-off-by: Yuxuan Shui <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #257
* Refresh links to web site (Ned Bass, 2013-03-04, 18 files, -18/+18)
  Update links to refer to the official ZFS on Linux website instead of
  @behlendorf's personal fork on github.
  Signed-off-by: Brian Behlendorf <[email protected]>
* Add spl_kmem_cache_expire module option (Brian Behlendorf, 2013-01-28, 1 file, -0/+14)
  Cache aging was implemented because it was part of the default Solaris
  kmem_cache behavior. The idea is that per-cpu objects which haven't been
  accessed in several seconds should be returned to the cache. On the other
  hand Linux slabs never move objects back to the slabs unless there is
  memory pressure on the system.

  This behavior is now configurable through the 'spl_kmem_cache_expire'
  module option. The value is a bit mask with the following meaning.

      0x1 - Solaris style cache aging eviction is enabled.
      0x2 - Linux style low memory eviction is enabled.

  Both methods may be safely enabled simultaneously, but by default both are
  disabled. It has never been clear if the kmem cache aging (which has been
  around from day one) actually does any good. It has however been the
  source of numerous bugs so I wouldn't mind retiring it entirely.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes zfsonlinux/zfs#1227
  Closes #210
* call_usermodehelper() should wait for process (Ned Bass, 2013-01-09, 2 files, -2/+2)
  As of Linux 3.4 the UMH_WAIT_* constants were renumbered. In particular,
  the meaning of "1" changed from UMH_WAIT_PROC (wait for process to
  complete), to UMH_WAIT_EXEC (wait for the exec, but not the process). A
  number of call sites used the number 1 instead of the constant name, so
  the behavior was not as expected on kernels with this change.
  Signed-off-by: Brian Behlendorf <[email protected]>
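  The fix is simply to spell out the constant (argv/envp are assumed to be
  set up by the caller):

      /* Waits for the helper process to exit, on every kernel version. */
      rc = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);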
* splat taskq:front: Reduce stack frame (Brian Behlendorf, 2012-12-12, 1 file, -1/+6)
  The slightly increased size of the taskq_ent_t when debugging is enabled
  has pushed the taskq:front splat test over the frame size limit. To
  resolve this, dynamically allocate the taskq_ent_t structures so they are
  part of the heap instead of the stack.

      In function 'splat_taskq_test6_impl'
      error: the frame size of 1648 bytes is larger than 1024 bytes

  Signed-off-by: Brian Behlendorf <[email protected]>
* splat taskq:order: Reduce stack frame (Brian Behlendorf, 2012-12-12, 1 file, -1/+6)
  The slightly increased size of the taskq_ent_t when debugging is enabled
  has pushed the taskq:order splat test over the frame size limit. To
  resolve this, dynamically allocate the taskq_ent_t structures so they are
  part of the heap instead of the stack.

      In function 'splat_taskq_test5_impl'
      error: the frame size of 1680 bytes is larger than 1024 bytes

  Signed-off-by: Brian Behlendorf <[email protected]>
* splat taskq:cancel: Add test case (Brian Behlendorf, 2012-12-12, 1 file, -0/+183)
  Add a test case for taskq_cancel_id() to verify it is working properly.
  Just like taskq:delay we start by dispatching 100 tasks. However, this
  time 1/3 of the tasks use taskq_dispatch() and will be run immediately,
  and 2/3 use taskq_dispatch_delay(). The idea is to create a busy taskq
  with active, pending, and delayed tasks.

  After all the items have been successfully dispatched the test begins
  randomly canceling known task ids. It will do this for 5 seconds, randomly
  canceling a task id and then sleeping for a few milliseconds. The task
  being canceled may have already run, still be on the pending list, or may
  be currently being executed by a worker thread. The idea is to ensure we
  catch any subtle race conditions.

  Once all the non-canceled tasks have completed we cross check the number
  of tasks which ran with the number of tasks which were successfully
  canceled. Additionally, we verify that the taskq_cancel_id() function
  never blocks longer than needed. This time is bounded by the longest run
  time of the task which was dispatched.
  Signed-off-by: Brian Behlendorf <[email protected]>
* splat taskq:delay: Add test case (Brian Behlendorf, 2012-12-12, 1 file, -11/+113)
  Add a test case for taskq_dispatch_delay() to verify it is working
  properly. The test dispatches 100 tasks to a taskq with random expiration
  times spread over 5 seconds. As each task expires and gets executed by a
  worker thread it verifies that it was run at the correct time. Once all
  the delayed tasks have been executed we double check that all the
  dispatched tasks were successful.
  Signed-off-by: Brian Behlendorf <[email protected]>
* taskq delay/cancel functionality (Brian Behlendorf, 2012-12-12, 1 file, -9/+9)
  Add the ability to dispatch a delayed task to a taskq. The desired
  behavior is for the task to be queued but not executed by a worker thread
  until the expiration time is reached. To achieve this two new functions
  were added.

  * taskq_dispatch_delay() - This function behaves exactly like
    taskq_dispatch(), however it takes an additional 'expire_time' argument.
    The caller should pass the desired time the task should be executed as
    an absolute value in jiffies. The task is guaranteed not to run before
    this time; it may run slightly later if all the worker threads are busy.
  * taskq_cancel_id() - Given a task id, attempt to cancel the task before
    it gets executed. This is primarily useful for canceling delay tasks but
    can be used for canceling any previously dispatched task. There are
    three possible return values.
      0      - The task was found and canceled before it was executed.
      ENOENT - The task was not found, either it was already run or an
               invalid task id was supplied by the caller.
      EBUSY  - The task is currently executing and may not be canceled.
               This function will block until the task has been completed.
  * taskq_wait_all() - The taskq_wait_id() function was renamed
    taskq_wait_all() to more clearly reflect its actual behavior. It is
    currently only used by the splat taskq regression tests.
  * taskq_wait_id() - Historically, the only difference between this
    function and taskq_wait() was that you passed the task id. In both
    functions you would block until ALL lower task ids had executed. This
    was semantically correct but could be very slow, particularly if there
    were delay tasks submitted. To better accommodate the delay tasks this
    function was reimplemented. It will now only block until the passed
    task id has been completed.

  This is actually a fairly low risk change for a few reasons.

  * Only new ZFS callers will make use of the new interfaces and very little
    common code was changed to support the new functions.
  * The existing taskq_wait() implementation was not changed, just slightly
    refactored.
  * The newly optimized taskq_wait_id() implementation was never used by
    ZFS, so we can't accidentally introduce a new bug there.

  NOTE: This functionality does not exist in the Illumos taskqs.
  Signed-off-by: Brian Behlendorf <[email protected]>
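  A usage sketch of the new pair (tq, func, and arg are assumed to exist;
  the five-second delay is illustrative):

      taskqid_t id;
      int error;

      /* Queue 'func(arg)' to run no earlier than 5 seconds from now. */
      id = taskq_dispatch_delay(tq, func, arg, TQ_SLEEP,
          ddi_get_lbolt() + 5 * HZ);

      /* Attempt to cancel: 0 on success, ENOENT if it already ran or the
       * id is unknown, EBUSY (after blocking) if it was mid-execution. */
      error = taskq_cancel_id(tq, id);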
* splat linux:shrinker: Fix fail-safe (Steven Johnson, 2012-12-12, 1 file, -4/+21)
  Ensure the fail-safe is reset between successive tests.
  Signed-off-by: Brian Behlendorf <[email protected]>
* splat linux:shrinker: Fix race condition (Steven Johnson, 2012-12-12, 1 file, -1/+27)
  Ensure the test thread blocks until the shrinker has completed its work.
  This is done by putting the test thread to sleep and waking it each time
  the shrinker callback runs. Once the shrinker size drops to zero or we
  time out, the test is allowed to proceed.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #96
  Closes #125
  Closes #182
* splat taskq:front: Fix race (Steven Johnson, 2012-12-05, 1 file, -30/+32)
  The taskq:front test has a race condition where tasks 4 and 8 race to
  complete, due to an incorrectly calculated set of delay "factors" (T). If
  task 4 wins and actually finishes first, the verification of the order of
  completion will fail.

  The delays calculated to order task completion do not take into account
  the terminal line in the table, and so are all off by a factor of 1. This
  causes all the tasks in all queues to finish sooner than expected and the
  accumulated error is the root cause of tasks 4 and 8 racing to complete
  first. Before the change the "actual" table looks like I commented in
  #130.

  I changed:
  * the table in the comment to correctly reflect the test and the factor
    timings needed.
  * the individual task delay factors of T so that only 1 task will
    complete every 2T (on average).
  * 1T was reduced from 100ms to 50ms. This halves the duration of the test
    and makes any remaining raciness more likely to cause failures, but it
    did not cause the test to fail.
  * simplified the delay factor logic by using a table look-up instead of a
    switch.
  * Added a "task started" message so that with -v it is possible to see
    the order tasks are started.
  * Moved the "task completed" message inside the spinlock so that with -v
    the message truly reflects the absolute order of completion as
    guaranteed by the spinlock.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #130
* Linux compat 3.7, kernel_thread() (Brian Behlendorf, 2012-12-03, 2 files, -66/+61)
  The preferred kernel interface for creating threads has been
  kthread_create() for a long time now. However, several of the SPLAT tests
  still use the legacy kernel_thread() function which has finally been
  dropped (mostly). Update the condvar and rwlock SPLAT tests to use the
  modern interface. Frankly this is something we should have done a long
  time ago.
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #194
* splat kmem:slab_overcommit: Disabled (Brian Behlendorf, 2012-11-06, 1 file, -8/+8)
  Disable this test because it may result in an OOM event on the system
  which can result in the test infrastructure being killed.
  Signed-off-by: Brian Behlendorf <[email protected]>
* splat atomic:64-bit: Create thread outside spin lock (Brian Behlendorf, 2012-11-06, 1 file, -7/+7)
  The Fedora 3.6 debug kernel identified the following issue where we create
  a thread under a spin lock. This isn't safe because sleeping could result
  in a deadlock. Therefore the lock is changed to a mutex so it's safe to
  sleep.

      BUG: sleeping function called from invalid context at mm/slub.c:930
      in_atomic(): 1, irqs_disabled(): 0, pid: 10583, name: splat
      1 lock held by splat/10583:

  Signed-off-by: Brian Behlendorf <[email protected]>