path: root/include/linux/rwsem_compat.h
Commit message | Author | Age | Files | Lines
* Implement a proper rw_tryupgrade | Chunwei Chen | 2016-05-31 | 1 | -0/+17
The current rw_tryupgrade does rw_exit and then rw_tryenter(RW_WRITER), and then does rw_enter(RW_READER) if that fails. This violates the assumption that rw_tryupgrade should be atomic and could cause extra contention or even lock inversion.

This patch implements a proper rw_tryupgrade. For rwsem-spinlock, we take the spinlock to check rwsem->count and rwsem->wait_list. For a normal rwsem, we use cmpxchg on rwsem->count to change the value from single reader to single writer.

Signed-off-by: Chunwei Chen <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Closes zfsonlinux/zfs#4692
Closes #554
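For illustration only, a minimal sketch of the cmpxchg path described above might look like the following, assuming a kernel where rwsem->count is a plain long. The names SPL_RWSEM_SINGLE_READER_VALUE and SPL_RWSEM_SINGLE_WRITER_VALUE are placeholders for the kernel's count encodings of "exactly one reader" and "exactly one writer", not the exact definitions from this commit; the rwsem-spinlock variant instead performs the equivalent check under wait_lock.

#include <linux/rwsem.h>

/*
 * Sketch: attempt an atomic reader-to-writer upgrade on a standard
 * (non-spinlock) rwsem.  SPL_RWSEM_SINGLE_READER_VALUE and
 * SPL_RWSEM_SINGLE_WRITER_VALUE are illustrative placeholders.
 */
static int
spl_rwsem_tryupgrade_sketch(struct rw_semaphore *rwsem)
{
	long val;

	/*
	 * Atomically swap the single-reader count for the single-writer
	 * count.  If anything else (another reader, a writer, a waiter)
	 * has touched the count, the cmpxchg fails and the read lock is
	 * left unchanged.
	 */
	val = cmpxchg(&rwsem->count, SPL_RWSEM_SINGLE_READER_VALUE,
	    SPL_RWSEM_SINGLE_WRITER_VALUE);

	return (val == SPL_RWSEM_SINGLE_READER_VALUE);
}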
* Retire rwsem_is_locked() compat | Brian Behlendorf | 2015-06-10 | 1 | -24/+0
Stock Linux 2.6.32 and earlier kernels contained a broken version of rwsem_is_locked() which could return an incorrect value. Because of this, compatibility code was added to detect the broken implementation and replace it with our own when needed.

The fix for this issue was merged into the mainline Linux kernel as of 2.6.33, and the major enterprise distributions based on 2.6.32 have all backported the fix. Therefore there is no longer a need to carry this code and it can be removed.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #454
* Refresh links to web site | Ned Bass | 2013-03-04 | 1 | -1/+1
Update links to refer to the official ZFS on Linux website instead of @behlendorf's personal fork on github.

Signed-off-by: Brian Behlendorf <[email protected]>
* Optimize spl_rwsem_is_locked() | Brian Behlendorf | 2012-07-13 | 1 | -42/+25
The spl_rwsem_is_locked() compatibility function has been observed to be a hot spot. The root cause is that we must check the rwsem activity under the rwsem->wait_lock to avoid a race. When the lock is busy, significant contention can occur.

The upstream kernel fix for this race had the insight that the contention can be avoided by using spin_trylock_irqsave(). When the lock is contended, it is reasonable to simply report that the semaphore is locked.

This change updates the SPL's implementation to match the upstream kernel. Since the kernel code has been in use for years now, this is a low-risk change.

Signed-off-by: Brian Behlendorf <[email protected]>
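A rough sketch of the trylock approach described above is shown below, assuming the generic rwsem-spinlock layout of that era (wait_list and activity fields); it is an illustration, not the exact code from this commit.

#include <linux/list.h>
#include <linux/rwsem.h>
#include <linux/spinlock.h>

/*
 * Sketch: report whether an rwsem is held, without spinning on a
 * contended wait_lock.
 */
static int
spl_rwsem_is_locked_sketch(struct rw_semaphore *rwsem)
{
	unsigned long flags;
	int rc = 1;

	/*
	 * If the wait_lock is already held, someone is actively using
	 * the semaphore, so simply report it as locked rather than
	 * contend for the wait_lock.
	 */
	if (spin_trylock_irqsave(&rwsem->wait_lock, flags)) {
		if (list_empty(&rwsem->wait_list) && rwsem->activity == 0)
			rc = 0;

		spin_unlock_irqrestore(&rwsem->wait_lock, flags);
	}

	return (rc);
}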
* Linux 3.2 compat: rw_semaphore.wait_lock is raw | Darik Horn | 2012-01-11 | 1 | -8/+28
The wait_lock member of the rw_semaphore struct became a raw_spinlock_t in Linux 3.2 at torvalds/linux@ddb6c9b58a19edcfac93ac670b066c836ff729f1.

Wrap spin_lock_* function calls in a new spl_rwsem_* interface to ensure type safety if raw_spinlock_t becomes architecture specific, and to satisfy these compiler warnings:

warning: passing argument 1 of ‘spinlock_check’ from incompatible pointer type [enabled by default]
note: expected ‘struct spinlock_t *’ but argument is of type ‘struct raw_spinlock_t *’

Signed-off-by: Brian Behlendorf <[email protected]>
Closes: #76
Closes: zfsonlinux/zfs#463
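The wrapper could plausibly be a pair of macros along the following lines. The real code keys the choice off a configure-time check; the SPL_RWSEM_WAIT_LOCK_IS_RAW name used here is an assumption for this example only.

#include <linux/spinlock.h>

/*
 * Sketch of the spl_rwsem_* wrapper interface.  SPL_RWSEM_WAIT_LOCK_IS_RAW
 * is an illustrative placeholder for whatever define the build system
 * actually provides.
 */
#ifdef SPL_RWSEM_WAIT_LOCK_IS_RAW
/* Linux 3.2 and newer: wait_lock is a raw_spinlock_t. */
#define spl_rwsem_lock_irqsave(lk, fl)       raw_spin_lock_irqsave(lk, fl)
#define spl_rwsem_unlock_irqrestore(lk, fl)  raw_spin_unlock_irqrestore(lk, fl)
#else
/* Older kernels: wait_lock is an ordinary spinlock_t. */
#define spl_rwsem_lock_irqsave(lk, fl)       spin_lock_irqsave(lk, fl)
#define spl_rwsem_unlock_irqrestore(lk, fl)  spin_unlock_irqrestore(lk, fl)
#endif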
* Correctly handle rwsem_is_locked() behavior | Ned Bass | 2010-08-10 | 1 | -0/+63
A race condition in rwsem_is_locked() was fixed in Linux 2.6.33, and the fix was backported to RHEL5 as of kernel 2.6.18-190.el5. Details can be found here:

https://bugzilla.redhat.com/show_bug.cgi?id=526092

The race condition was fixed in the kernel by acquiring the semaphore's wait_lock inside rwsem_is_locked(). The SPL worked around the race condition by acquiring the wait_lock before calling that function, but with the fix in place it must not do that.

This commit implements an autoconf test to detect whether the fixed version of rwsem_is_locked() is present. The previous version of rwsem_is_locked() was an inline static function, while the new version is exported as a symbol which we can check for in module.symvers. Depending on the result we correctly implement the needed compatibility macros for proper spinlock handling.

Finally, we do the right thing with spin locks in RW_*_HELD() by using the new compatibility macros. We only acquire the semaphore's wait_lock when calling a rwsem_is_locked() that does not itself try to acquire the lock.

Some new overhead and a small harmless race are introduced by this change. This is because RW_READ_HELD() and RW_WRITE_HELD() now acquire and release the wait_lock twice: once for the call to rwsem_is_locked() and once for the call to rw_owner(). This can't be avoided when calling a rwsem_is_locked() that takes the wait_lock, as it will in more recent kernels. The other case, which only occurs in legacy kernels, could be optimized by taking the lock only once, as was done prior to this commit. However, I decided that the performance gain probably wasn't significant enough to justify the messy special cases required.

The function spl_rw_get_owner() was only used to enable the aforementioned optimization. Since it is no longer used, I removed it.

Signed-off-by: Brian Behlendorf <[email protected]>
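As a minimal sketch of the compatibility split described above: the HAVE_EXPORTED_RWSEM_IS_LOCKED define stands in for the autoconf result, and the helper name is an assumption for this example, so treat it as an illustration of the idea rather than the code this commit added. RW_READ_HELD() and RW_WRITE_HELD() can then call such a helper together with the rw_owner() check.

#include <linux/rwsem.h>
#include <linux/spinlock.h>

/*
 * Sketch of the rwsem_is_locked() compatibility split.
 * HAVE_EXPORTED_RWSEM_IS_LOCKED is an illustrative placeholder for the
 * autoconf-detected define.
 */
#ifdef HAVE_EXPORTED_RWSEM_IS_LOCKED
/* Fixed kernels: rwsem_is_locked() takes the wait_lock internally. */
#define spl_rwsem_is_locked(rwsem)  rwsem_is_locked(rwsem)
#else
/* Legacy kernels: take the wait_lock around the racy inline check. */
static inline int
spl_rwsem_is_locked(struct rw_semaphore *rwsem)
{
	unsigned long flags;
	int rc;

	spin_lock_irqsave(&rwsem->wait_lock, flags);
	rc = rwsem_is_locked(rwsem);
	spin_unlock_irqrestore(&rwsem->wait_lock, flags);

	return (rc);
}
#endif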