author    loli10K <[email protected]>  2017-06-16 18:07:16 +0200
committer Tony Hutter <[email protected]>  2017-07-06 15:25:39 -0700
commit    94d353a0bf5ee34adb69fe018528b25577457e7a (patch)
tree      34bf3756a0e59ecab0f2a1975b29ce3626c72bab
parent    e9fc1bd5e6a339803f0868be68fc46f6e396e817 (diff)
Fix int overflow in zbookmark_is_before()
When the DSL scan code tries to resume a scrub from the saved zbookmark, it calls dsl_scan_check_resume()->zbookmark_is_before() to decide whether the current dnode still needs to be visited.

A subtle integer overflow in zbookmark_is_before(), exacerbated by bumping the indirect block size to 128K (d7958b4), can lead to the wrong assumption that the dnode does not need to be scanned. This results in scrubs completing "successfully" in a matter of mere minutes on pools with several TB of used space, because every time we try to resume the dnode traversal on a dataset, zbookmark_is_before() tells us the whole objset has already been scanned completely.

Fix this by forcing the right-shift operator to be evaluated before the multiplication, as is done in zbookmark_compare() (fcff0f3).

Signed-off-by: loli10K <[email protected]>
-rw-r--r--  module/zfs/zio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/module/zfs/zio.c b/module/zfs/zio.c
index e06b7da44..f45dfe642 100644
--- a/module/zfs/zio.c
+++ b/module/zfs/zio.c
@@ -3472,7 +3472,7 @@ zbookmark_is_before(const dnode_phys_t *dnp, const zbookmark_phys_t *zb1,
 	if (zb1->zb_object == DMU_META_DNODE_OBJECT) {
 		uint64_t nextobj = zb1nextL0 *
-		    (dnp->dn_datablkszsec << SPA_MINBLOCKSHIFT) >> DNODE_SHIFT;
+		    (dnp->dn_datablkszsec << (SPA_MINBLOCKSHIFT - DNODE_SHIFT));
 		return (nextobj <= zb2thisobj);
 	}