author     Etienne Dechamps <[email protected]>    2012-08-01 10:29:59 +0200
committer  Brian Behlendorf <[email protected]>    2012-08-07 14:55:31 -0700
commit     ee5fd0bb80d68ef095f831784cbb17181b2ba898 (patch)
tree       e2437a510eaa09db0af6429e1aa94d2d44b35182 /include/linux/blkdev_compat.h
parent     9a512dca97fec1afa5068b53621ce1dd7dbef578 (diff)
Set zvol discard_granularity to the volblocksize.
Currently, zvols have a discard granularity of 0, which suggests to the
upper layer that discard requests of arbitrarily small size and
alignment can be made efficiently.
In practice, however, ZFS does not handle unaligned discard requests
efficiently: it is unable to free part of a block, so it writes zeros
to the specified range instead, which is both useless and inefficient
(see dnode_free_range).
With this patch, zvol block devices expose volblocksize as their discard
granularity, so the upper layer is aware that it's not supposed to send
discard requests smaller than volblocksize.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #862
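
The zvol-side hunk that consumes this new wrapper is not shown here (the view below is limited to include/linux/blkdev_compat.h). A minimal sketch of the idea in C, assuming the zvol_state_t fields zv_queue and zv_volblocksize and a hypothetical helper name:

/*
 * Sketch only (not part of this diff): expose the volume block size as
 * the discard granularity when the zvol's request queue is set up, so
 * that upper layers avoid sending sub-volblocksize discard requests.
 * The helper name and field names are assumptions based on the commit
 * message, not the actual zvol.c hunk.
 */
static void
zvol_set_discard_granularity(zvol_state_t *zv)
{
	/* Compiles to a no-op on kernels without the 2.6.33 discard limits. */
	blk_queue_discard_granularity(zv->zv_queue, zv->zv_volblocksize);
}

On pre-2.6.33 kernels the compat macro below expands to ((void)0), so the call compiles away and behavior is unchanged.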
Diffstat (limited to 'include/linux/blkdev_compat.h')
-rw-r--r--   include/linux/blkdev_compat.h | 15 +++++++++++++++
1 file changed, 15 insertions(+), 0 deletions(-)
diff --git a/include/linux/blkdev_compat.h b/include/linux/blkdev_compat.h
index bd1b2bf54..a5294ceba 100644
--- a/include/linux/blkdev_compat.h
+++ b/include/linux/blkdev_compat.h
@@ -433,6 +433,21 @@ bio_set_flags_failfast(struct block_device *bdev, int *flags)
 #endif
 
 /*
+ * 2.6.33 API change
+ * Discard granularity and alignment restrictions may now be set. For
+ * older kernels which do not support this it is safe to skip it.
+ */
+#ifdef HAVE_DISCARD_GRANULARITY
+static inline void
+blk_queue_discard_granularity(struct request_queue *q, unsigned int dg)
+{
+	q->limits.discard_granularity = dg;
+}
+#else
+#define blk_queue_discard_granularity(x, dg) ((void)0)
+#endif /* HAVE_DISCARD_GRANULARITY */
+
+/*
  * Default Linux IO Scheduler,
  * Setting the scheduler to noop will allow the Linux IO scheduler to
  * still perform front and back merging, while leaving the request