authorEtienne Dechamps <[email protected]>2012-08-01 10:29:59 +0200
committerBrian Behlendorf <[email protected]>2012-08-07 14:55:31 -0700
commitee5fd0bb80d68ef095f831784cbb17181b2ba898 (patch)
treee2437a510eaa09db0af6429e1aa94d2d44b35182 /module
parent9a512dca97fec1afa5068b53621ce1dd7dbef578 (diff)
Set zvol discard_granularity to the volblocksize.
Currently, zvols have a discard granularity of 0, which suggests to the upper layer that discard requests of arbitrarily small size and alignment can be handled efficiently. In practice, however, ZFS does not handle unaligned discard requests efficiently: it is unable to free part of a block. Instead, it writes zeros to the specified range, which is both useless and inefficient (see dnode_free_range).

With this patch, zvol block devices expose volblocksize as their discard granularity, so the upper layer is aware that it is not supposed to send discard requests smaller than volblocksize.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes #862
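The alignment problem the commit message describes can be sketched with some simple arithmetic. The helper below is a hypothetical illustration (not code from ZFS): given a discard request [offset, offset+len) and a volblocksize, only the whole blocks fully covered by the request can actually be freed; any unaligned head or tail would have to be zero-filled instead, which is the wasteful case dnode_free_range hits.

```c
#include <stdint.h>

/*
 * Hypothetical illustration: compute the sub-range of a discard request
 * that falls on whole volblocksize-aligned blocks and can therefore be
 * freed. The start is rounded up and the end rounded down to block
 * boundaries; anything outside [*free_start, *free_start + *free_len)
 * cannot be freed and would be zeroed instead.
 */
static void
discard_freeable_range(uint64_t offset, uint64_t len, uint64_t volblocksize,
    uint64_t *free_start, uint64_t *free_len)
{
	/* First block boundary at or after the start of the request. */
	uint64_t start = (offset + volblocksize - 1) / volblocksize *
	    volblocksize;
	/* Last block boundary at or before the end of the request. */
	uint64_t end = (offset + len) / volblocksize * volblocksize;

	*free_start = start;
	*free_len = (end > start) ? end - start : 0;
}
```

With volblocksize = 8192, a discard of [4096, 20480) can only free the single block [8192, 16384), and a 2000-byte discard inside one block frees nothing at all. Advertising volblocksize as the discard granularity tells the upper layer to avoid issuing such requests in the first place.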
Diffstat (limited to 'module')
-rw-r--r--  module/zfs/zvol.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/module/zfs/zvol.c b/module/zfs/zvol.c
index 32e4f3c1e..07bda6dba 100644
--- a/module/zfs/zvol.c
+++ b/module/zfs/zvol.c
@@ -1245,6 +1245,7 @@ __zvol_create_minor(const char *name)
#ifdef HAVE_BLK_QUEUE_DISCARD
blk_queue_max_discard_sectors(zv->zv_queue,
(zvol_max_discard_blocks * zv->zv_volblocksize) >> 9);
+ blk_queue_discard_granularity(zv->zv_queue, zv->zv_volblocksize);
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, zv->zv_queue);
#endif
#ifdef HAVE_BLK_QUEUE_NONROT