author	Gregor Kopka <[email protected]>	2018-09-26 01:29:16 +0200
committer	Brian Behlendorf <[email protected]>	2018-09-25 16:29:16 -0700
commit	b954e36e512171d94637c709023e4d763b655d91 (patch)
tree	61530eb581005bf024216a2a6177f3672cbbc1fc /include/sys/zfs_znode.h
parent	a7165d7255b71fea5a4b2431ccf915ee4099d613 (diff)
Zpool iostat: remove latency/queue scaling
Bandwidth and IOPS are averages per second, while the *_wait values are averages per request (for latencies) or, for queue depths, an instantaneous measurement at the end of an interval (according to man zpool).

When calculating the first two it makes sense to compute x/interval_duration (x being the increase in total bytes or number of requests over the duration of the interval, interval_duration in seconds) to scale from amount/interval to amount/second. But applying the same math to the latter (*_wait latencies and queue depths) is wrong, as those values have no interval_duration component: they are either time/requests, which already yields average_time/request, or an absolute number.

This bug means the only correct continuous *_wait figures for both latencies and queue depths from 'zpool iostat -l/-q' were those with duration=1, where the wrong math cancels itself out (x/1 is a no-op).

This commit removes temporal scaling from the latency and queue depth figures.

Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Gregor Kopka <[email protected]>
Closes #7945
Closes #7694
Diffstat (limited to 'include/sys/zfs_znode.h')
0 files changed, 0 insertions, 0 deletions