author    Etienne Dechamps <[email protected]>  2012-02-08 22:41:41 +0100
committer Brian Behlendorf <[email protected]>  2012-02-08 13:58:10 -0800
commit    dde9380a1bf9084d0c8a3e073cdd65bb81db1a23 (patch)
tree      bbef2d9ddfade142762d92379ef72c43e223fe61
parent    34037afe24e0bff97cf5262f8f1a76f5e0815dc1 (diff)
Use 32 as the default number of zvol threads.
Currently, the `zvol_threads` variable, which controls the number of worker threads processing items from the ZVOL queues, is set to the number of available CPUs.

This choice seems to be based on the assumption that ZVOL threads are CPU-bound. This is not necessarily true, especially for synchronous writes. Consider the situation described in the comments for `zil_commit()`, which is called inside `zvol_write()` for synchronous writes:

> itxs are committed in batches. In a heavily stressed zil there will be a
> commit writer thread who is writing out a bunch of itxs to the log for a
> set of committing threads (cthreads) in the same batch as the writer.
> Those cthreads are all waiting on the same cv for that batch.
>
> There will also be a different and growing batch of threads that are
> waiting to commit (qthreads). When the committing batch completes a
> transition occurs such that the cthreads exit and the qthreads become
> cthreads. One of the new cthreads becomes the writer thread for the
> batch. Any new threads arriving become new qthreads.

We can easily deduce that, in the case of ZVOLs, there can be at most `zvol_threads` cthreads and qthreads in total. The default value of `zvol_threads` is typically between 1 and 8, which is far too low in this case. It means there will be a lot of small commits to the ZIL, which is very inefficient compared to a few large commits, especially since we have to wait for the data to reach stable storage. Increasing the number of threads increases the amount of data waiting to be committed, and thus the size of the individual commits.

On my system, in the context of VM disk image storage (lots of small synchronous writes), increasing `zvol_threads` from 8 to 32 results in a 50% increase in sequential synchronous write performance.

We should choose a more sensible default for `zvol_threads`.
Unfortunately, the optimal value is difficult to determine automatically, since it depends on the synchronous write latency of the underlying storage devices. In any case, a hardcoded value of 32 would probably be better than the current situation, and having a lot of ZVOL threads doesn't seem to have any real downside anyway.

Signed-off-by: Brian Behlendorf <[email protected]>
Fixes #392
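For users whose workload differs from this default, `zvol_threads` is exposed as a ZFS module parameter, so it can still be overridden at module load time. A sketch, assuming a standard modprobe.d setup (the value 64 is purely illustrative):

```
# /etc/modprobe.d/zfs.conf
# Override the default ZVOL worker thread count (example value).
options zfs zvol_threads=64
```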
-rw-r--r--module/zfs/zvol.c5
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/module/zfs/zvol.c b/module/zfs/zvol.c
index 1636581d5..19888ea96 100644
--- a/module/zfs/zvol.c
+++ b/module/zfs/zvol.c
@@ -47,7 +47,7 @@
#include <linux/blkdev_compat.h>
unsigned int zvol_major = ZVOL_MAJOR;
-unsigned int zvol_threads = 0;
+unsigned int zvol_threads = 32;
static taskq_t *zvol_taskq;
static kmutex_t zvol_state_lock;
@@ -1343,9 +1343,6 @@ zvol_init(void)
{
int error;
- if (!zvol_threads)
- zvol_threads = num_online_cpus();
-
zvol_taskq = taskq_create(ZVOL_DRIVER, zvol_threads, maxclsyspri,
zvol_threads, INT_MAX, TASKQ_PREPOPULATE);
if (zvol_taskq == NULL) {