path: root/module/spl/spl-kmem-cache.c
Commit log (newest first). Each entry shows: subject (author, date, files changed, lines -/+).
* Prepare SPL repo to merge with ZFS repo (Brian Behlendorf, 2018-05-29; 1 file, -2/+2)

  This commit removes everything from the repository except the core SPL
  implementation for Linux. Those files which remain have been moved to
  non-conflicting locations to facilitate the merge. The README.md and
  associated files have been updated accordingly.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Fix more cstyle warnings (Brian Behlendorf, 2018-02-24; 1 file, -1/+4)

  This patch contains no functional changes. It is solely intended to
  resolve cstyle warnings in order to facilitate moving the spl source
  code into the zfs repository.

  Reviewed-by: Giuseppe Di Natale <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #687
* Fix cstyle warnings (Brian Behlendorf, 2018-02-07; 1 file, -11/+11)

  This patch contains no functional changes. It is solely intended to
  resolve cstyle warnings in order to facilitate moving the spl source
  code into the zfs repository.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #681
* Remove all spin_is_locked calls (James Cowgill, 2017-10-30; 1 file, -4/+0)

  On systems with CONFIG_SMP turned off, spin_is_locked always returns
  false, causing these assertions to fail. Remove them as suggested in
  zfsonlinux/zfs#6558.

  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: James Cowgill <[email protected]>
  Closes #665
* Add support for recent kmem_cache_create_usercopy (Kevin Tanguy, 2017-01-17; 1 file, -2/+11)

  The SLAB_USERCOPY flag was used to tell PaX not to kill copies from
  kernel to userland. With the recent grsecurity patchset, enabling
  CONFIG_GRKERNSEC_HIDESYM (which enables CONFIG_PAX_USERCOPY) would
  cause zfs to panic. Handle the newer kmem_cache_create_usercopy API
  while keeping the old one functional.

  Tested-by: RageLtMan <rageltman@sempervictus>
  Reviewed-by: spendergrsec <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Kevin Tanguy <[email protected]>
  Closes #595
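  A compat shim along these lines keeps both kernel generations working
  (a minimal sketch; the HAVE_KMEM_CACHE_CREATE_USERCOPY guard and the
  spl_cache_create() wrapper name are illustrative, not the commit's
  actual identifiers):

      #include <linux/slab.h>

      /*
       * Sketch of a dual-API cache constructor. Kernels providing
       * kmem_cache_create_usercopy() whitelist the whole object for
       * user copies; older PaX kernels get the SLAB_USERCOPY flag.
       * The guard macro here is hypothetical.
       */
      static struct kmem_cache *
      spl_cache_create(const char *name, size_t size, size_t align,
          unsigned long flags, void (*ctor)(void *))
      {
      #if defined(HAVE_KMEM_CACHE_CREATE_USERCOPY)
              return (kmem_cache_create_usercopy(name, size, align,
                  flags, 0, size, ctor));
      #else
              return (kmem_cache_create(name, size, align,
                  flags | SLAB_USERCOPY, ctor));
      #endif
      }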
* Improve spl slab cache alloc (Jinshan Xiong, 2016-06-01; 1 file, -8/+35)

  The policy is to first try to allocate with KM_NOSLEEP, which results
  in a memory allocation with GFP_ATOMIC; if that fails, a task is
  dispatched to a taskq to expand the slab space. This way it should
  achieve better NUMA memory locality and reduce the overhead of
  context switches.

  Signed-off-by: Jinshan Xiong <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #551
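  In outline, the fast path looks roughly like this (a sketch; the
  helper names spl_cache_obj_alloc() and spl_cache_grow_and_retry()
  stand in for the real internals and are not the commit's actual
  function names):

      /*
       * Sketch of the allocation policy: try an atomic, non-sleeping
       * allocation first; only on failure hand slab expansion off to
       * the spl_kmem_cache taskq and retry.
       */
      static void *
      spl_cache_alloc_sketch(spl_kmem_cache_t *skc, int flags)
      {
              void *obj;

              obj = spl_cache_obj_alloc(skc, KM_NOSLEEP); /* GFP_ATOMIC */
              if (obj == NULL)
                      obj = spl_cache_grow_and_retry(skc, flags);

              return (obj);
      }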
* Use spl_fstrans_mark instead of memalloc_noio_save (Chunwei Chen, 2015-12-18; 1 file, -7/+2)

  On earlier versions of the kernel, memalloc_noio_save only turns off
  __GFP_IO but leaves __GFP_FS untouched during direct reclaim. This
  would allow threads to direct reclaim into ZFS and cause a deadlock.
  Instead, we should stick to using spl_fstrans_mark. Since it
  explicitly turns off both __GFP_IO and __GFP_FS before allocation, it
  will work on every version of the kernel.

  This impacts kernel versions 3.9-3.17; see upstream kernel commit
  torvalds/linux@934f307 for reference.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #515
  Issue zfsonlinux/zfs#4111
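  The usage pattern is straightforward (a minimal sketch of the public
  SPL API; the surrounding allocation is just an example):

      /*
       * Sketch: mark the thread so both __GFP_IO and __GFP_FS are
       * masked from allocations in this scope, then restore the
       * previous state via the returned cookie.
       */
      fstrans_cookie_t cookie;
      void *ptr;

      cookie = spl_fstrans_mark();
      ptr = kmem_alloc(size, KM_SLEEP); /* cannot recurse into the fs */
      /* ... IO-safe work ... */
      kmem_free(ptr, size);
      spl_fstrans_unmark(cookie);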
* spl-kmem-cache: include linux/prefetch.h for prefetchw() (Dimitri John Ledkov, 2015-12-02; 1 file, -0/+1)

  This is needed for architectures that do not have a builtin
  prefetchw().

  Signed-off-by: Dimitri John Ledkov <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #502
* Remove superfluous `newline` character (loli10K, 2015-11-13; 1 file, -1/+1)

  Remove superfluous `newline` character from the
  spl_kmem_cache_magazine_size module parameter description.

  Signed-off-by: loli10K <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #499
* Don't call kmem_cache_shrink from shrinker (Chunwei Chen, 2015-11-11; 1 file, -6/+1)

  The Linux slab will automatically free empty slabs when the number of
  partial slabs exceeds min_partial, so we don't need to shrink it
  explicitly. In fact, calling kmem_cache_shrink from a shrinker causes
  heavy contention on kmem_cache_node->list_lock, to the point that it
  might cause __slab_free to livelock (see zfsonlinux/zfs#3936).

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes zfsonlinux/zfs#3936
  Closes #487
* Fix CPU hotplug (Brian Behlendorf, 2015-10-13; 1 file, -8/+7)

  Allocate a kmem cache magazine for every possible CPU which might be
  added to the system. This ensures that when one of these CPUs is
  enabled it can be safely used immediately. For many systems the number
  of online CPUs is identical to the number of present CPUs, so this
  does not imply an increased memory footprint. In fact, dynamically
  allocating the array of magazine pointers instead of using the worst
  case NR_CPUS can end up decreasing our memory footprint.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Closes #482
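  The allocation pattern would look roughly like this (a sketch; the
  skc_mag field and spl_magazine_alloc() reflect my reading of the SPL
  internals and may not match the commit exactly):

      /*
       * Sketch: size the magazine pointer array by the possible CPU
       * count so a later hotplugged CPU finds its magazine ready.
       */
      int i;

      skc->skc_mag = kzalloc(num_possible_cpus() *
          sizeof (spl_kmem_magazine_t *), GFP_KERNEL);

      for_each_possible_cpu(i)
              skc->skc_mag[i] = spl_magazine_alloc(skc, cpu_to_node(i));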
* Fix PAX Patch/Grsec SLAB_USERCOPY panic (Brian Behlendorf, 2015-09-28; 1 file, -1/+11)

  Support grsecurity/PaX kernel configurations where
  CONFIG_PAX_USERCOPY_SLABS is enabled. When this kernel option is
  enabled, slabs which are used to copy between user and kernel space
  must be created with SLAB_USERCOPY.

  Stock Linux kernels do not have a SLAB_USERCOPY definition, so this
  causes no change in behavior for non-PAX-enabled kernels.

  Verified-by: Wuffleton <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #2977
  Issue #3796
* Invert minclsyspri and maxclsyspri (Brian Behlendorf, 2015-07-28; 1 file, -1/+1)

  On Linux the meaning of a process's priority is inverted with respect
  to illumos. High values on Linux indicate a _low_ priority while high
  values on illumos indicate a _high_ priority.

  In order to preserve the logical meaning of the minclsyspri and
  maxclsyspri macros when they are used by the illumos wrapper
  functions, their values have been inverted. This way, when changes are
  merged from upstream illumos, we won't need to remember to invert the
  macros, which could also lead to confusion.

  Note this change also reverts some of the priority changes in prior
  commit 62aa81a. The rationale is as follows:

  spl_kmem_cache    - High priority may result in blocked memory allocs
  spl_system_taskq  - May perform I/O for file backed VDEVs
  spl_dynamic_taskq - New taskq threads should be spawned promptly

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Ned Bass <[email protected]>
  Issue zfsonlinux/zfs#3607
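  The inversion amounts to pinning each macro to the opposite end of
  Linux's priority scale (a sketch based on my reading of the Linux
  scheduler headers, where a numerically larger value means a lower
  priority; the exact definitions may differ from the commit):

      #include <linux/sched/prio.h>   /* MAX_PRIO, MAX_RT_PRIO */

      /*
       * Sketch: Linux priorities run opposite to illumos, so the
       * "min" macro takes the numerically largest (lowest) priority
       * and the "max" macro the smallest non-realtime (highest) one.
       */
      #define minclsyspri     (MAX_PRIO - 1)
      #define maxclsyspri     (MAX_RT_PRIO)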
* Remove skc_ref from alloc/free paths (Brian Behlendorf, 2015-07-24; 1 file, -9/+2)

  As described in spl_kmem_cache_destroy(), the ->skc_ref count was
  added to address the case of a cache reap or grow racing with a
  destroy. It is not strictly needed in the alloc/free paths because
  consumers of the cache are responsible for not using it while it's
  being destroyed. Removing this code is desirable because there is some
  evidence that contention on this atomic negatively impacts performance
  on large-scale NUMA systems.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Issue #463
* Add defclsyspri macro (Brian Behlendorf, 2015-07-23; 1 file, -1/+1)

  Add a new defclsyspri macro which can be used to request the default
  Linux scheduler priority. Neither minclsyspri nor maxclsyspri maps to
  the default Linux kernel thread priority. This makes it awkward to
  create taskqs which run with the same priority as the rest of the
  kernel threads on the system, which can lead to performance issues.

  All SPL callers which previously used minclsyspri or maxclsyspri have
  been changed to use defclsyspri. The vast majority of callers were
  part of the test suite, which won't have an external impact. In the
  few places where it could impact performance, the change was from
  maxclsyspri to defclsyspri. This makes it more likely the process will
  be scheduled, which may help performance.

  To facilitate further performance analysis, the
  spl_taskq_thread_priority module option has been added. When disabled
  (0), all newly created kernel threads will use the default kernel
  thread priority. When enabled (1), the specified taskq priority will
  be used. By default this value is enabled (1).

  Signed-off-by: Brian Behlendorf <[email protected]>
* Set TASKQ_DYNAMIC for kmem and system taskqs (Brian Behlendorf, 2015-06-24; 1 file, -1/+3)

  Add the TASKQ_DYNAMIC flag to the kmem_cache and system taskqs to
  reduce the number of idle threads on the system. Additional threads
  will be created on demand up to the previous maximum thread counts.
  This should have minimal, if any, impact on performance.

  This makes the system taskq consistent with illumos, where it is
  always created as a dynamic taskq with up to 64 threads. The task
  limits for the kmem_cache have been increased to avoid any unnecessary
  throttling and to keep a larger reserve of task_t structures on the
  free list.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tim Chase <[email protected]>
  Closes #458
* Don't allow shrinking a PF_FSTRANS context (Tim Chase, 2015-04-03; 1 file, -0/+6)

  Avoid deadlocks when entering the shrinker from a PF_FSTRANS context.

  This patch also reverts commit d0d5dd7, which added MUTEX_FSTRANS. Its
  use has been deprecated within ZFS as it was an ineffective mechanism
  to eliminate deadlocks. Among other things, it introduced the need for
  strict ordering of mutex locking and unlocking in order that the
  PF_FSTRANS flag wouldn't be set incorrectly.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #446
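  The guard itself can be as small as an early return at the top of the
  shrinker callback (a minimal sketch; spl_fstrans_check() is the
  public SPL helper, but the return convention shown assumes the legacy
  shrinker interface):

      /*
       * Sketch: refuse to shrink from a filesystem-transaction
       * context, since re-entering ZFS from reclaim is what deadlocks.
       */
      if (spl_fstrans_check())
              return (-1);    /* tell the VM we can't help right now */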
* Use __get_free_pages() for emergency objects (Brian Behlendorf, 2015-01-16; 1 file, -10/+12)

  The __get_free_pages() function must be used in place of kmalloc() to
  ensure the __GFP_COMP flag is strictly honored. This is due to
  kmalloc() being layered on the generic Linux slab caches. It wasn't
  until recently that all caches were created using __GFP_COMP. This
  means that it is possible for a kmalloc() which passed the __GFP_COMP
  flag to be returned a non-compound allocation.

  Signed-off-by: Brian Behlendorf <[email protected]>
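  The page-allocator path guarantees the compound property (a sketch;
  the emergency-object bookkeeping around it is omitted):

      /*
       * Sketch: go straight to the buddy allocator so __GFP_COMP is
       * honored, then release with the matching free_pages() call.
       */
      int order = get_order(size);
      void *obj = (void *)__get_free_pages(lflags | __GFP_COMP, order);

      if (obj != NULL) {
              /* ... track and use the emergency object ... */
              free_pages((unsigned long)obj, order);
      }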
* Fix kmem cache deadlock logic (Brian Behlendorf, 2015-01-16; 1 file, -10/+25)

  The kmem cache implementation always adds new slabs by dispatching a
  task to the spl_kmem_cache taskq to perform the allocation. This is
  done because large slabs must be allocated using vmalloc(). It is
  possible these allocations will block on IO because the GFP_NOIO flag
  is not honored. This can result in a deadlock.

  Therefore, a deadlock detection strategy was implemented to deal with
  this case. When it is determined, by timeout, that the spl_kmem_cache
  thread has deadlocked attempting to add a new slab, all callers
  attempting to allocate from the cache fall back to using kmalloc(),
  which does honor all passed flags.

  This logic was correct, but an optimization in the code allowed for a
  deadlock. Because only slabs backed by vmalloc() can deadlock in the
  way described above, an optimization was made to only invoke this
  deadlock detection code for vmalloc() backed caches. This had the
  advantage of making it easy to distinguish these objects when they
  were freed.

  But this isn't strictly safe. If all the spl_kmem_cache threads end up
  deadlocked, then we can't grow any of the other caches either. This
  can once again result in a deadlock if memory needs to be allocated
  from one of these other caches to ensure forward progress.

  The fix here is to remove the optimization which limited this fallback
  allocation strategy to vmalloc() backed caches. Doing this means we
  may need to take the cache lock in the spl_kmem_cache_free() call
  path. But this small cost can be mitigated by ignoring objects with
  virtual addresses.

  For good measure the default number of spl_kmem_cache threads has been
  increased from 1 to 4, and made tunable. This alone wouldn't resolve
  the original issue since it's still possible for all the threads to be
  deadlocked. However, it does help responsiveness by ensuring that a
  single deadlocked spl_kmem_cache thread doesn't block allocations from
  other caches until the timeout is reached.

  Signed-off-by: Brian Behlendorf <[email protected]>
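  The detection-and-fallback step reduces to a bounded wait (a sketch;
  the wait queue, bit name, and helper calls are drawn from my reading
  of the SPL internals and may not match the code exactly):

      /*
       * Sketch: wait a bounded time for the taskq-driven slab grow;
       * if the timeout expires, assume the grow thread is stuck on IO
       * and satisfy this allocation directly, honoring the caller's
       * flags.
       */
      rc = wait_event_timeout(skc->skc_waitq,
          !test_bit(KMC_BIT_GROWING, &skc->skc_flags), HZ / 10);
      if (rc == 0)
              obj = spl_emergency_alloc(skc, flags); /* kmalloc-based */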
* Refine slab cache sizing (Brian Behlendorf, 2015-01-16; 1 file, -35/+57)

  This change is designed to improve the memory utilization of slabs by
  more carefully setting their size. The way the code currently works is
  problematic for slabs which contain large objects (>1MB). This is due
  to slabs being unconditionally rounded up to a power of two, which may
  result in unused space at the end of the slab.

  The reason the existing code rounds up every slab is because it
  assumes it will be backed by the buddy allocator. Since the buddy
  allocator can only perform power-of-two allocations, this is desirable
  because it avoids wasting any space. However, this logic breaks down
  if the slab is backed by vmalloc(), which operates at page-level
  granularity. In this case, the optimal thing to do is calculate the
  minimum required slab size given certain constraints (object size,
  alignment, objects/slab, etc).

  Therefore, this patch reworks the spl_slab_size() function so that it
  sizes KMC_KMEM slabs differently than KMC_VMEM slabs. KMC_KMEM slabs
  are rounded up to the nearest power of two, and KMC_VMEM slabs are
  allowed to be the minimum required size.

  This change also reduces the default number of objects per slab. This
  reduces how much memory a single cache object can pin, which can
  result in significant memory savings for highly fragmented caches. But
  depending on the workload it may result in slabs being allocated and
  freed more frequently. In practice, this has been shown to be a better
  default for most workloads.

  Also, the maximum slab size has been reduced to 4MB on 32-bit systems.
  Due to the limited virtual address space it's critical that we be as
  frugal as possible. A limit of 4MB still lets us comfortably allocate
  a limited number of 1MB objects.

  Finally, the kmem:slab_small and kmem:slab_large SPLAT tests were
  extended to provide better test coverage of various object sizes and
  alignments. Caches are created with random parameters and their basic
  functionality is verified by allocating several slabs worth of
  objects.

  Signed-off-by: Brian Behlendorf <[email protected]>
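  The split described above comes down to a branch on the backing
  allocator (a sketch; the signature of spl_slab_size() and the slab
  header layout are assumptions of mine):

      /*
       * Sketch: kmem (buddy allocator) backed slabs round to a power
       * of two; vmem (vmalloc) backed slabs only need page alignment.
       */
      static uint32_t
      spl_slab_size(spl_kmem_cache_t *skc, uint32_t obj_size,
          uint32_t objs)
      {
              uint32_t size = sizeof (spl_kmem_slab_t) + objs * obj_size;

              if (skc->skc_flags & KMC_KMEM)
                      return (roundup_pow_of_two(size));

              return (PAGE_ALIGN(size));
      }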
* Reduce kmem cache deadlock threshold (Brian Behlendorf, 2015-01-16; 1 file, -1/+1)

  Reduce the threshold for detecting a kmem cache deadlock by 10x, from
  HZ to HZ/10. The reduced value is still large enough, by several
  orders of magnitude, to avoid being triggered incorrectly. By reducing
  it, we allow the system to resolve the issue more quickly.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Make slab reclaim more aggressive (Brian Behlendorf, 2015-01-16; 1 file, -29/+47)

  Many people have noticed that the kmem cache implementation is slow to
  release its memory. This patch makes the reclaim behavior more
  aggressive by immediately freeing a slab once it is empty. Unused
  objects which are cached in the magazines will still prevent a slab
  from being freed.

  Signed-off-by: Brian Behlendorf <[email protected]>
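  In essence the free path gains an immediate-release check (a sketch;
  sks_ref, sks_list, and spl_slab_free() reflect my reading of the SPL
  slab structures and may differ from the actual change):

      /*
       * Sketch: when an object free drops the slab's reference count
       * to zero, tear the slab down right away rather than waiting
       * for a periodic reap.
       */
      sks->sks_ref--;
      if (sks->sks_ref == 0) {
              list_del(&sks->sks_list);
              spl_slab_free(sks, &sks_free_list, &sko_free_list);
      }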
* Enforce architecture-specific barriers around clear_bit() (Richard Yao, 2015-01-16; 1 file, -3/+20)

  The comment above the Linux 3.16 kernel's clear_bit() states:

  /**
   * clear_bit - Clears a bit in memory
   * @nr: Bit to clear
   * @addr: Address to start counting from
   *
   * clear_bit() is atomic and may not be reordered. However, it does
   * not contain a memory barrier, so if it is used for locking purposes,
   * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
   * in order to ensure changes are visible on other processors.
   */

  This comment does not make sense in the context of x86 because x86
  maps these operations to barrier(), which is a compiler barrier.
  However, it does make sense to me when I consider architectures that
  reorder around atomic instructions. In such situations, a processor is
  allowed to execute the wake_up_bit() before clear_bit(), and we have a
  race. There are a few architectures that suffer from this issue. In
  such situations, the other processor would wake up, see the bit is
  still taken and go to sleep, while the one responsible for waking it
  up will assume that it did its job and continue.

  This patch implements a wrapper that maps
  smp_mb__{before,after}_atomic() to smp_mb__{before,after}_clear_bit()
  on older kernels and changes our code to leverage it in a manner
  consistent with the mainline kernel.

  Signed-off-by: Brian Behlendorf <[email protected]>
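  The resulting pattern looks like this (a sketch; KMC_BIT_GROWING is
  borrowed from the SPL cache code as an example bit):

      /* Compat shim: map the new barrier name onto the old one. */
      #ifndef smp_mb__after_atomic
      #define smp_mb__after_atomic()  smp_mb__after_clear_bit()
      #endif

      /*
       * Sketch: order the bit clear before the wakeup so a waiter on
       * a weakly-ordered CPU cannot observe them reversed.
       */
      clear_bit(KMC_BIT_GROWING, &skc->skc_flags);
      smp_mb__after_atomic();
      wake_up_bit(&skc->skc_flags, KMC_BIT_GROWING);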
* Add hooks for disabling direct reclaim (Richard Yao, 2015-01-16; 1 file, -1/+3)

  The port of XFS to Linux introduced a thread-specific PF_FSTRANS bit
  that is used to mark contexts which are processing transactions. When
  set, allocations in this context can dip into kernel memory reserves
  to avoid deadlocks during writeback. Linux 3.9 provided the additional
  PF_MEMALLOC_NOIO for disabling __GFP_IO in page allocations, which XFS
  began using in 3.15.

  This patch implements hooks for marking transactions via PF_FSTRANS.
  When an allocation is performed in the context of PF_FSTRANS, any
  KM_SLEEP allocation is transparently converted to a GFP_NOIO
  allocation. Additionally, when using a Linux 3.9 or newer kernel, it
  will set PF_MEMALLOC_NOIO to prevent direct reclaim from entering
  pageout() on any KM_PUSHPAGE or KM_NOSLEEP allocation. This
  effectively allows the spl_vmalloc() helper function to be used safely
  in a thread which is responsible for IO.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Refactor generic memory allocation interfaces (Brian Behlendorf, 2015-01-16; 1 file, -48/+41)

  This patch achieves the following goals:

  1. It replaces the preprocessor kmem flag to gfp flag mapping with
     proper translation logic. This eliminates the potential for
     surprises that were previously possible where kmem flags were
     mapped to gfp flags.

  2. It maps vmem_alloc() allocations to kmem_alloc() for allocations
     sized less than or equal to the newly-added spl_kmem_alloc_max
     parameter. This ensures that small allocations will not contend on
     a single global lock, large allocations can still be handled, and
     potentially limited virtual address space will not be squandered.
     This behavior is entirely different than under Illumos due to the
     different memory management strategies employed by the respective
     kernels. However, this functionally provides the semantics
     required.

  3. The --disable-debug-kmem, --enable-debug-kmem (default), and
     --enable-debug-kmem-tracking allocators have been unified into a
     single spl_kmem_alloc_impl() allocation function. This was done to
     simplify the code and make it more maintainable.

  4. Improve portability by exposing an implementation of the memory
     allocation functions that can be safely used in the same way they
     are used on Illumos. Specifically, callers may safely use KM_SLEEP
     in contexts which perform filesystem IO. This allows us to
     eliminate an entire class of Linux-specific changes which were
     previously required to avoid deadlocking the system.

  This change will be largely transparent to existing callers but there
  are a few caveats:

  1. Because the headers were refactored and extraneous includes
     removed, callers may find they need to explicitly add additional
     #includes. In particular, kmem_cache.h must now be explicitly
     included to access the SPL's kmem cache implementation. This
     behavior is different from Illumos but it was done to avoid always
     masking the Linux slab functions when kmem.h is included.

  2. Callers, like Lustre, which made assumptions about the definitions
     of KM_SLEEP, KM_NOSLEEP, and KM_PUSHPAGE will need to be updated.
     Other callers, such as ZFS, which did not, will not require
     changes.

  3. KM_PUSHPAGE is no longer overloaded to imply GFP_NOIO. It retains
     its original meaning of allowing allocations to access reserved
     memory. KM_PUSHPAGE callers can be converted back to KM_SLEEP.

  4. The KM_NODEBUG flag has been retired and the default warning
     threshold increased to 32k.

  5. The kmem_virt() function has been removed. For callers which need
     to distinguish between a physical and virtual address, use
     is_vmalloc_addr().

  Signed-off-by: Brian Behlendorf <[email protected]>
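  Points 1 and 2 above together amount to a small translation-and-
  routing layer (a sketch; kmem_flags_convert() and spl_fstrans_check()
  exist in the SPL, but the bodies shown are simplifications of my
  own):

      /*
       * Sketch: translate illumos kmem flags into Linux gfp flags at
       * runtime rather than via preprocessor aliasing.
       */
      static gfp_t
      kmem_flags_convert(int kmflags)
      {
              gfp_t lflags = __GFP_NOWARN;

              if (kmflags & KM_NOSLEEP) {
                      lflags |= GFP_ATOMIC | __GFP_NORETRY;
              } else {
                      lflags |= GFP_KERNEL;
                      if (spl_fstrans_check())      /* PF_FSTRANS set */
                              lflags &= ~(__GFP_IO | __GFP_FS);
              }

              if (kmflags & KM_PUSHPAGE)
                      lflags |= __GFP_HIGH; /* may dip into reserves */

              return (lflags);
      }

      /* Routing: small vmem requests are serviced by kmalloc(). */
      if (size <= spl_kmem_alloc_max)
              ptr = kmalloc(size, lflags);
      else
              ptr = spl_vmalloc(size, lflags | __GFP_HIGHMEM);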
* Fix kmem cstyle issues (Brian Behlendorf, 2015-01-16; 1 file, -76/+92)

  Address all cstyle issues in the kmem, vmem, and kmem_cache source and
  headers. This was done to make it easier to review subsequent changes
  which will rework the kmem/vmem implementation.

  Signed-off-by: Brian Behlendorf <[email protected]>
* Refactor existing code (Brian Behlendorf, 2015-01-16; 1 file, -0/+1648)

  This change introduces no functional changes to the memory management
  interfaces. It only restructures the existing code by separating the
  kmem, vmem, and kmem cache implementations into separate source and
  header files.

  Splitting this functionality into separate files required the addition
  of spl_vmem_{init,fini}() and spl_kmem_cache_{init,fini}() functions.
  Additionally, several minor changes to the #includes were required to
  accommodate the removal of extraneous headers from kmem.h. But again,
  while large, this patch introduces no functional changes.

  Signed-off-by: Brian Behlendorf <[email protected]>