author | Clemens Fruhwirth <[email protected]> | 2016-12-17 17:09:57 +0100
---|---|---
committer | Brian Behlendorf <[email protected]> | 2016-12-19 12:45:24 -0800
commit | 8e99d66b0555fe3d6f5b028e8f03883dbf1399bc (patch) |
tree | ccf568585adb8b548f51d9fbd052c68635add538 /module/spl |
parent | 6d064f7a07fe8366e113f45931e5f2921dcabda2 (diff) |
Add support for rw semaphore under PREEMPT_RT_FULL
The main complication introduced by the RT patch set is that it changes
the rw semaphore locks so that read locks on an rwsem can be taken
only by a single thread; all other threads are locked out. That single
thread may, however, take the read lock multiple times. The underlying
implementation becomes a mutex with an additional read_depth count.
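
For illustration, this is roughly the shape of the lock under PREEMPT_RT_FULL, modeled on the rwsem_rt.h shipped with the RT patch set. It is a sketch, not verbatim from the patch; the exact layout and field names can vary between RT patch versions:

```c
/*
 * Sketch of the RT-patch rw_semaphore (modeled on rwsem_rt.h,
 * lockdep fields omitted). A single rt_mutex backs the lock;
 * read_depth tracks how many times the owning thread has taken
 * it as a read lock.
 */
struct rw_semaphore {
	struct rt_mutex	lock;		/* the one underlying mutex */
	int		read_depth;	/* 0 = free or held as write lock,
					 * N = owner holds N nested read locks */
};
```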
The implementation is best understood by inspecting the RT patch
itself: rwsem_rt.h and rt.c give the clearest insight into how RT
rwsems work. My implementation of rwsem_tryupgrade is essentially an
inversion of rt_downgrade_write found in rt.c; the relationship is
sketched below. Please also see the comments in the code.
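
To make that inversion concrete, here is a paraphrase of rt_downgrade_write as it appears in the RT patch's rt.c (not verbatim; details differ between patch versions). Downgrading turns a held write lock (read_depth == 0) into a single-level read lock, and the tryupgrade in the hunk below does the reverse:

```c
/*
 * Paraphrased from the RT patch's rt.c: the rt_mutex stays held and
 * only the nesting counter changes. The write lock (read_depth == 0)
 * becomes a read lock held exactly once (read_depth == 1).
 * __rwsem_tryupgrade() below inverts this: a single-level read hold
 * (read_depth == 1) is turned back into a write lock (read_depth == 0).
 */
void rt_downgrade_write(struct rw_semaphore *rwsem)
{
	BUG_ON(rt_mutex_owner(&rwsem->lock) != current);
	rwsem->read_depth = 1;
}
```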
Unfortunately, I have to drop SPLAT rwlock test4 completely as this
test tries to take multiple locks from different threads, which RT
rwsems do not support. Otherwise SPLAT, zconfig.sh, zpios-sanity.sh
and zfs-tests.sh pass on my Debian-testing VM with the kernel
linux-image-4.8.0-1-rt-amd64.
Tested-by: kernelOfTruth <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Clemens Fruhwirth <[email protected]>
Closes zfsonlinux/zfs#5491
Closes #589
Closes #308
Diffstat (limited to 'module/spl')
-rw-r--r-- | module/spl/spl-rwlock.c | 32
1 file changed, 31 insertions(+), 1 deletion(-)
```diff
diff --git a/module/spl/spl-rwlock.c b/module/spl/spl-rwlock.c
index 77f46f2d6..9e96c4f27 100644
--- a/module/spl/spl-rwlock.c
+++ b/module/spl/spl-rwlock.c
@@ -32,7 +32,37 @@
 
 #define DEBUG_SUBSYSTEM S_RWLOCK
 
-#if defined(CONFIG_RWSEM_GENERIC_SPINLOCK)
+#if defined(CONFIG_PREEMPT_RT_FULL)
+
+#include <linux/rtmutex.h>
+
+static int
+__rwsem_tryupgrade(struct rw_semaphore *rwsem)
+{
+	ASSERT(rt_mutex_owner(&rwsem->lock) == current);
+
+	/*
+	 * Under the realtime patch series, rwsem is implemented as a
+	 * single mutex held by readers and writers alike. However,
+	 * this implementation would prevent a thread from taking a
+	 * read lock twice, as the mutex would already be locked on
+	 * the second attempt. Therefore the implementation allows a
+	 * single thread to take a rwsem as read lock multiple times
+	 * tracking that nesting as read_depth counter.
+	 */
+	if (rwsem->read_depth <= 1) {
+		/*
+		 * In case, the current thread has not taken the lock
+		 * more than once as read lock, we can allow an
+		 * upgrade to a write lock. rwsem_rt.h implements
+		 * write locks as read_depth == 0.
+		 */
+		rwsem->read_depth = 0;
+		return (1);
+	}
+	return (0);
+}
+#elif defined(CONFIG_RWSEM_GENERIC_SPINLOCK)
 static int
 __rwsem_tryupgrade(struct rw_semaphore *rwsem)
 {
```
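
For context on how this path gets exercised, here is a hypothetical caller using the existing SPL rwlock API (rw_enter, rw_tryupgrade, and rw_exit are the real SPL primitives; the lock and function names in this sketch are made up):

```c
#include <sys/rwlock.h>

/* Hypothetical lock; assume rw_init() was called at module init. */
static krwlock_t example_lock;

static void
example_upgrade_path(void)
{
	rw_enter(&example_lock, RW_READER);

	/* ... inspect state and decide it must be modified ... */

	if (!rw_tryupgrade(&example_lock)) {
		/*
		 * The upgrade can fail; fall back to dropping the
		 * read lock and re-taking the lock as a writer.
		 */
		rw_exit(&example_lock);
		rw_enter(&example_lock, RW_WRITER);
	}

	/* ... modify state under the write lock ... */

	rw_exit(&example_lock);
}
```

With this patch, on a PREEMPT_RT_FULL kernel the upgrade succeeds whenever the calling thread holds the rwsem with a single level of read nesting; a nested read hold (read_depth > 1) still makes rw_tryupgrade() return 0 and the caller takes the fallback path.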