author     Brian Behlendorf <[email protected]>   2015-09-23 15:59:04 -0700
committer  Brian Behlendorf <[email protected]>   2015-09-25 12:45:47 -0700
commit     ef5b2e1048eeeb7a81d932d38e52d897b33fca54 (patch)
tree       054c798af341be4b1c2028f46571d196877847a7 /module/zfs/arc.c
parent     04870568e6dae66d79ca144b0dcfa001324c562d (diff)
Avoid blocking in arc_reclaim_thread()
As described in the comment above arc_reclaim_thread(), it's critical
that the reclaim thread be careful about blocking. Just as it must
never wait on a hash lock, it must never wait on a task which can in
turn wait on the CV in arc_get_data_buf(). This will deadlock; see
issue #3822 for full backtraces showing the problem.
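The problematic pattern can be illustrated with a small userspace analogy
(a hedged sketch using POSIX threads, not the ZFS source; prune_task,
space_cv, and have_space are illustrative names): one thread dispatches
work and then waits for it to complete, while the work item blocks on a
condition that only the waiting thread would later signal. The program
intentionally hangs, mirroring the backtraces in issue #3822.

/* Illustrative deadlock only; not the arc.c code. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  space_cv = PTHREAD_COND_INITIALIZER;
static int have_space = 0;

/* Stands in for a prune task that ends up waiting in arc_get_data_buf(). */
static void *
prune_task(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!have_space)
		pthread_cond_wait(&space_cv, &lock); /* waits for the "reclaim thread" */
	pthread_mutex_unlock(&lock);
	return (NULL);
}

int
main(void)
{
	pthread_t task;

	pthread_create(&task, NULL, prune_task, NULL);

	/* Analog of dispatching the prune task and then blocking on it. */
	pthread_join(task, NULL);            /* deadlock: the task waits for us */

	/* Analog of reclaiming memory and waking waiters; never reached. */
	pthread_mutex_lock(&lock);
	have_space = 1;
	pthread_cond_broadcast(&space_cv);
	pthread_mutex_unlock(&lock);

	printf("never printed\n");
	return (0);
}

The fix below removes the wait rather than the dispatch, so the reclaim
path never blocks on the prune work.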
To resolve this issue, arc_kmem_reap_now() has been updated to use the
asynchronous arc prune function. This means that arc_prune_async()
may now be called while there are still outstanding arc_prune_task()s.
However, this isn't a problem because arc_prune_async() already
keeps a reference count preventing multiple outstanding tasks per
registered consumer. Functionally, this behavior is the same as
that of the counterpart illumos function dnlc_reduce_cache().
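For context, the reference-count guard can be sketched as follows
(assumptions only, not the arc.c implementation; prune_consumer,
prune_async_sketch, and run_prune_task are hypothetical names): each
registered consumer carries a counter, a new task is dispatched only
when none is already outstanding, and the task drops the reference
when it finishes.

/* Hedged sketch of a per-consumer dispatch guard; not the ZFS source. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct prune_consumer {
	atomic_int  pc_tasks;              /* outstanding prune tasks (0 or 1) */
	void      (*pc_func)(int64_t nr);  /* registered prune callback */
};

/* Body of the dispatched task; in the kernel this would run from a taskq. */
static void
run_prune_task(struct prune_consumer *pc, int64_t nr)
{
	pc->pc_func(nr);
	atomic_fetch_sub(&pc->pc_tasks, 1); /* release the reference taken at dispatch */
}

/* Safe to call repeatedly: a second call while a task is pending is a no-op. */
static void
prune_async_sketch(struct prune_consumer *pc, int64_t nr)
{
	if (atomic_fetch_add(&pc->pc_tasks, 1) > 0) {
		atomic_fetch_sub(&pc->pc_tasks, 1);
		return;                     /* a task is already outstanding */
	}
	run_prune_task(pc, nr);             /* stands in for a taskq dispatch */
}

static void
example_prune_cb(int64_t nr)
{
	printf("pruning %lld objects\n", (long long)nr);
}

int
main(void)
{
	struct prune_consumer pc = { .pc_tasks = 0, .pc_func = example_prune_cb };

	prune_async_sketch(&pc, 100);  /* dispatches */
	prune_async_sketch(&pc, 100);  /* skipped if the first task were still running */
	return (0);
}

Because the guard lives with each registered consumer, calling the async
function from the reclaim path never blocks and never queues duplicate
work, which is what makes dropping the synchronous arc_prune() wrapper
in the diff below safe.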
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Issue #3808
Issue #3834
Issue #3822
Diffstat (limited to 'module/zfs/arc.c')
-rw-r--r--  module/zfs/arc.c  15
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/module/zfs/arc.c b/module/zfs/arc.c
index 7cd4e76f2..b759e6483 100644
--- a/module/zfs/arc.c
+++ b/module/zfs/arc.c
@@ -2685,8 +2685,8 @@ arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
 }
 
 /*
- * Helper function for arc_prune() it is responsible for safely handling
- * the execution of a registered arc_prune_func_t.
+ * Helper function for arc_prune_async() it is responsible for safely
+ * handling the execution of a registered arc_prune_func_t.
  */
 static void
 arc_prune_task(void *ptr)
@@ -2711,7 +2711,7 @@ arc_prune_task(void *ptr)
  * honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. This
  * is analogous to dnlc_reduce_cache() but more generic.
  *
- * This operation is performed asyncronously so it may be safely called
+ * This operation is performed asynchronously so it may be safely called
  * in the context of the arc_reclaim_thread(). A reference is taken here
  * for each registered arc_prune_t and the arc_prune_task() is responsible
  * for releasing it once the registered arc_prune_func_t has completed.
@@ -2736,13 +2736,6 @@ arc_prune_async(int64_t adjust)
 	mutex_exit(&arc_prune_mtx);
 }
 
-static void
-arc_prune(int64_t adjust)
-{
-	arc_prune_async(adjust);
-	taskq_wait_outstanding(arc_prune_taskq, 0);
-}
-
 /*
  * Evict the specified number of bytes from the state specified,
  * restricting eviction to the spa and type given. This function
@@ -3376,7 +3369,7 @@ arc_kmem_reap_now(void)
 		 * We are exceeding our meta-data cache limit.
 		 * Prune some entries to release holds on meta-data.
 		 */
-		arc_prune(zfs_arc_meta_prune);
+		arc_prune_async(zfs_arc_meta_prune);
 	}
 
 	for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {