author	Alexander Motin <[email protected]>	2020-02-13 14:20:42 -0500
committer	GitHub <[email protected]>	2020-02-13 11:20:42 -0800
commit	465e4e795ee3cbdc5de862b26d81b2f1116733df (patch)
tree	0848a222d5ae27a1a623856540aca148ff3c031a /module/zfs/dnode.c
parent	610eec452d723bc53ce531095aff9577a2e0dc93 (diff)
Remove duplicate dbufs accounting
Since AVL already has an embedded element counter, use dn_dbufs_count
only for the dbufs not counted there (bonus buffers) and simply add the two.
This removes two atomics per dbuf life cycle.
According to the profiler, this reduces the time spent by dbuf_destroy()
inside the bottlenecked dbuf_evict_thread() from 13.36% to 9.20% of a core.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matt Ahrens <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored-By: iXsystems, Inc.
Closes #9949
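Note: the DN_DBUFS_COUNT() macro used in the hunks below is introduced in the dnode header, not in dnode.c, so its definition is not shown in this diff. Based on the commit message, it most likely sums the AVL tree's own node count with the remaining bonus-buffer counter, roughly as sketched here (an assumption; the authoritative definition lives in include/sys/dnode.h):

/*
 * Sketch of the total-dbuf-count helper implied by the commit message.
 * Regular dbufs are counted by the dn_dbufs AVL tree itself via
 * avl_numnodes(), while dn_dbufs_count now tracks only the bonus buffers
 * that never enter that tree, so the two are simply added together.
 */
#define	DN_DBUFS_COUNT(dn)	((dn)->dn_dbufs_count + \
	avl_numnodes(&(dn)->dn_dbufs))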
Diffstat (limited to 'module/zfs/dnode.c')
-rw-r--r--	module/zfs/dnode.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/module/zfs/dnode.c b/module/zfs/dnode.c
index 167ab8677..3116a59bb 100644
--- a/module/zfs/dnode.c
+++ b/module/zfs/dnode.c
@@ -1004,7 +1004,7 @@ dnode_move(void *buf, void *newbuf, size_t size, void *arg)
 	 */
 	refcount = zfs_refcount_count(&odn->dn_holds);
 	ASSERT(refcount >= 0);
-	dbufs = odn->dn_dbufs_count;
+	dbufs = DN_DBUFS_COUNT(odn);
 
 	/* We can't have more dbufs than dnode holds. */
 	ASSERT3U(dbufs, <=, refcount);
@@ -1031,7 +1031,7 @@ dnode_move(void *buf, void *newbuf, size_t size, void *arg)
 	list_link_replace(&odn->dn_link, &ndn->dn_link);
 	/* If the dnode was safe to move, the refcount cannot have changed. */
 	ASSERT(refcount == zfs_refcount_count(&ndn->dn_holds));
-	ASSERT(dbufs == ndn->dn_dbufs_count);
+	ASSERT(dbufs == DN_DBUFS_COUNT(ndn));
 	zrl_exit(&ndn->dn_handle->dnh_zrlock); /* handle has moved */
 
 	mutex_exit(&os->os_lock);
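The dnode.c hunks above only adapt dnode_move()'s sanity checks; the two atomics mentioned in the commit message are removed from the dbuf create/destroy path in dbuf.c. A minimal, simplified sketch of that counting pattern (the real logic is in dbuf_create()/dbuf_destroy(), and the exact conditions there differ):

/*
 * Illustrative fragments only, assuming the usual SPL atomics and the
 * existing dn_dbufs AVL tree.
 *
 * Before: every dbuf bumped the shared counter with one atomic on creation
 * and another on destruction, in addition to the AVL insert/remove.
 */
	avl_add(&dn->dn_dbufs, db);
	atomic_inc_32(&dn->dn_dbufs_count);	/* first removed atomic */
	/* ... */
	avl_remove(&dn->dn_dbufs, db);
	atomic_dec_32(&dn->dn_dbufs_count);	/* second removed atomic */

/*
 * After: regular dbufs are counted implicitly by avl_add()/avl_remove(),
 * since AVL already maintains a node count; only bonus dbufs, which never
 * enter the tree, still touch dn_dbufs_count.
 */
	if (db->db_blkid == DMU_BONUS_BLKID)
		atomic_inc_32(&dn->dn_dbufs_count);
	else
		avl_add(&dn->dn_dbufs, db);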