| author | Ned Bass <[email protected]> | 2010-08-10 11:01:46 -0700 |
|---|---|---|
| committer | Brian Behlendorf <[email protected]> | 2010-08-10 16:43:00 -0700 | |
| commit | 46aa7b3939bbbac86d2a4cfc556b33398ec12d08 (patch) | |
| tree | 6d1baeb8ed46202d5dd7ba64d604eb02cc3c0e23 /config | |
| parent | 5ec44a37c3857b178a958352d63c5367133526e1 (diff) | |
Correctly handle rwsem_is_locked() behavior
A race condition in rwsem_is_locked() was fixed in Linux 2.6.33 and the fix was
backported to RHEL5 as of kernel 2.6.18-190.el5. Details can be found here:
https://bugzilla.redhat.com/show_bug.cgi?id=526092
The race condition was fixed in the kernel by acquiring the semaphore's
wait_lock inside rwsem_is_locked(). The SPL worked around the race condition
by acquiring the wait_lock before calling that function, but with the fix in
place it must not do that.
This commit implements an autoconf test to detect whether the fixed version of
rwsem_is_locked() is present. The previous version of rwsem_is_locked() was a
static inline function, while the new version is exported as a symbol which we
can check for in Module.symvers. Depending on the result, we define the
compatibility macros needed for proper spin lock handling.
Finally, we do the right thing with spin locks in RW_*_HELD() by using the
new compatibility macros: we only acquire the semaphore's wait_lock when
calling a version of rwsem_is_locked() that does not itself try to acquire it.
Some new overhead and a small harmless race is introduced by this change.
This is because RW_READ_HELD() and RW_WRITE_HELD() now acquire and release
the wait_lock twice: once for the call to rwsem_is_locked() and once for
the call to rw_owner(). This can't be avoided when calling a version of
rwsem_is_locked() that takes the wait_lock itself, as it does in more recent
kernels.
The other case which only occurs in legacy kernels could be optimized by
taking the lock only once, as was done prior to this commit. However, I
decided that the performance gain probably wasn't significant enough to
justify the messy special cases required.
The function spl_rw_get_owner() existed only to enable the aforementioned
optimization. Since it is no longer used, it has been removed.
Signed-off-by: Brian Behlendorf <[email protected]>
Diffstat (limited to 'config')
-rw-r--r-- | config/spl-build.m4 | 17 |
1 file changed, 17 insertions(+), 0 deletions(-)
```diff
diff --git a/config/spl-build.m4 b/config/spl-build.m4
index facaf7404..0b9f8f430 100644
--- a/config/spl-build.m4
+++ b/config/spl-build.m4
@@ -77,6 +77,7 @@ AC_DEFUN([SPL_AC_CONFIG_KERNEL], [
 	SPL_AC_5ARGS_PROC_HANDLER
 	SPL_AC_KVASPRINTF
 	SPL_AC_3ARGS_FILE_FSYNC
+	SPL_AC_EXPORTED_RWSEM_IS_LOCKED
 ])
 
 AC_DEFUN([SPL_AC_MODULE_SYMVERS], [
@@ -1598,3 +1599,19 @@ AC_DEFUN([SPL_AC_3ARGS_FILE_FSYNC], [
 		AC_MSG_RESULT(no)
 	])
 ])
+
+dnl #
+dnl # 2.6.33 API change. Also backported in RHEL5 as of 2.6.18-190.el5.
+dnl # Earlier versions of rwsem_is_locked() were inline and had a race
+dnl # condition. The fixed version is exported as a symbol. The race
+dnl # condition is fixed by acquiring sem->wait_lock, so we must not
+dnl # call that version while holding sem->wait_lock.
+dnl #
+AC_DEFUN([SPL_AC_EXPORTED_RWSEM_IS_LOCKED], [
+	SPL_CHECK_SYMBOL_EXPORT(
+		[rwsem_is_locked],
+		[lib/rwsem-spinlock.c],
+		[AC_DEFINE(RWSEM_IS_LOCKED_TAKES_WAIT_LOCK, 1,
+		[rwsem_is_locked() acquires sem->wait_lock])],
+		[])
+])
```