path: root/cmd/zdb
Commit message (Author, Age, Files changed, Lines changed)

* Fix coverity defects: 147480, 147584 (Tobin Harding, 2017-10-16, 1 file changed, -4/+4)

  CID 147480: Logically dead code (DEADCODE)
  Remove the non-null check and subsequent function call, and add an ASSERT
  to future-proof the code. The usage label is only jumped to before `zhp`
  is initialized.

  CID 147584: Out-of-bounds access (OVERRUN)
  Subtract the length of the current string from the buffer length for the
  `size` argument to `snprintf`. The starting address for the write is the
  start of the buffer plus the current string length, so we need to
  subtract that string length or risk a buffer overflow.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Tobin C. Harding <[email protected]>
  Closes #6745

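  For illustration, a minimal stand-alone sketch of the bounded-append
  pattern this OVERRUN fix describes (hypothetical buffer and strings, not
  the actual zdb code):

      #include <stdio.h>
      #include <string.h>

      int
      main(void)
      {
          char buf[32] = "pool/fs@";
          size_t len = strlen(buf);

          /*
           * Write at the end of the existing string and shrink the size
           * argument by the same amount, so the total can never exceed
           * sizeof (buf) and snprintf() cannot overrun the buffer.
           */
          (void) snprintf(buf + len, sizeof (buf) - len, "%s", "snap-name");
          (void) printf("%s\n", buf);
          return (0);
      }
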
* Improved dnode allocation and dmu_hold_impl() (Olaf Faaland, 2017-09-05, 1 file changed, -7/+29)

  Refactor dmu_object_alloc_dnsize() and dnode_hold_impl() to simplify the
  code, fix errors introduced by commit dbeb879 (PR #6117) interacting
  badly with large dnodes, and improve performance.

  * When allocating a new dnode in dmu_object_alloc_dnsize(), update the
    percpu object ID for the core's metadnode chunk immediately. This
    eliminates most lock contention when taking the hold and creating the
    dnode.
  * Correct detection of the chunk boundary to work properly with large
    dnodes.
  * Separate the dmu_hold_impl() code for the FREE case from the code for
    the ALLOCATED case to make it easier to read.
  * Fully populate the dnode handle array immediately after reading a
    block of the metadnode from disk. Subsequently the dnode handle array
    provides enough information to determine which dnode slots are in use
    and which are free.
  * Add several kstats to allow the behavior of the code to be examined.
  * Verify dnode packing in large_dnode_008_pos.ksh. Since the test
    consists purely of creates, it should leave very few holes in the
    metadnode.
  * Add test large_dnode_009_pos.ksh, which performs concurrent creates
    and deletes, to complement the existing test which does only creates.

  With the above fixes, there is very little contention in a test of about
  200,000 racing dnode allocations produced by tests 'large_dnode_008_pos'
  and 'large_dnode_009_pos':

    name                          type data
    dnode_hold_dbuf_hold          4    0
    dnode_hold_dbuf_read          4    0
    dnode_hold_alloc_hits         4    3804690
    dnode_hold_alloc_misses       4    216
    dnode_hold_alloc_interior     4    3
    dnode_hold_alloc_lock_retry   4    0
    dnode_hold_alloc_lock_misses  4    0
    dnode_hold_alloc_type_none    4    0
    dnode_hold_free_hits          4    203105
    dnode_hold_free_misses        4    4
    dnode_hold_free_lock_misses   4    0
    dnode_hold_free_lock_retry    4    0
    dnode_hold_free_overflow      4    0
    dnode_hold_free_refcount      4    57
    dnode_hold_free_txg           4    0
    dnode_allocate                4    203154
    dnode_reallocate              4    0
    dnode_buf_evict               4    23918
    dnode_alloc_next_chunk        4    4887
    dnode_alloc_race              4    0
    dnode_alloc_next_block        4    18

  The performance is slightly improved for concurrent creates with 16+
  threads, and unchanged for low thread counts.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Olaf Faaland <[email protected]>
  Closes #5396
  Closes #6522
  Closes #6414
  Closes #6564

* Native Encryption for ZFS on Linux (Tom Caputi, 2017-08-14, 2 files changed, -21/+59)

  This change incorporates three major pieces:

  The first change is a keystore that manages wrapping and encryption keys
  for encrypted datasets. These commands mostly involve manipulating the
  new DSL Crypto Key ZAP Objects that live in the MOS. Each encrypted
  dataset has its own DSL Crypto Key that is protected with a user's key.
  This level of indirection allows users to change their keys without
  re-encrypting their entire datasets. The change implements the new
  subcommands "zfs load-key", "zfs unload-key" and "zfs change-key" which
  allow the user to manage their encryption keys and settings. In
  addition, several new flags and properties have been added to allow
  dataset creation and to make mounting and unmounting more convenient.

  The second piece of this patch provides the ability to encrypt, decrypt,
  and authenticate protected datasets. Each object set maintains a Merkle
  tree of Message Authentication Codes that protect the lower layers,
  similarly to how checksums are maintained. This part impacts the zio
  layer, which handles the actual encryption and generation of MACs, as
  well as the ARC and DMU, which need to be able to handle encrypted
  buffers and protected data.

  The last addition is the ability to do raw, encrypted sends and
  receives. The idea here is to send raw encrypted and compressed data and
  receive it exactly as is on a backup system. This means that the dataset
  on the receiving system is protected using the same user key that is in
  use on the sending side. By doing so, datasets can be efficiently backed
  up to an untrusted system without fear of data being compromised.

  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Jorgen Lundman <[email protected]>
  Signed-off-by: Tom Caputi <[email protected]>
  Closes #494
  Closes #5769

* Add libtpool (thread pools) (Brian Behlendorf, 2017-08-09, 1 file changed, -3/+1)

  OpenZFS provides a library called tpool which implements thread pools
  for user space applications. Porting this library means the zpool
  utility no longer needs to borrow the kernel mutex and taskq interfaces
  from libzpool. This code was updated to use the tpool library which
  behaves in a very similar fashion.

  Porting libtpool was relatively straightforward and minimal
  modifications were needed. The core changes were:

  * Fully convert the library to use pthreads.
  * Updated signal handling.
  * lmalloc/lfree converted to calloc/free.
  * Implemented portable pthread_attr_clone() function.

  Finally, update the build system such that libzpool.so is no longer
  linked in to zfs(8), zpool(8), etc. All that is required is libzfs to
  which the zcommon sources were added (which is the way it always should
  have been). Removing the libzpool dependency resulted in several build
  issues which needed to be resolved.

  * Moved zfeature support to module/zcommon/zfeature_common.c
  * Moved ratelimiting to module/zfs/zfs_ratelimit.c
  * Moved get_system_hostid() to lib/libspl/gethostid.c
  * Removed use of cmn_err() in zcommon source
  * Removed dprintf_setup() call from zpool_main.c and zfs_main.c
  * Removed highbit() and lowbit()
  * Removed unnecessary library dependencies from Makefiles
  * Removed fletcher-4 kstat in user space
  * Added sha2 support explicitly to libzfs
  * Added highbit64() and lowbit64() to zpool_util.c

  Reviewed-by: Tony Hutter <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #6442

* Multi-modifier protection (MMP) (Olaf Faaland, 2017-07-13, 1 file changed, -91/+40)

  Add multihost=on|off pool property to control MMP. When enabled a new
  thread writes uberblocks to the last slot in each label, at a set
  frequency, to indicate to other hosts the pool is actively imported.
  These uberblocks are the last synced uberblock with an updated
  timestamp. Property defaults to off.

  During tryimport, find the "best" uberblock (newest txg and timestamp)
  repeatedly, checking for change in the found uberblock. Include the
  results of the activity test in the config returned by tryimport. These
  results are reported to user in "zpool import".

  Allow the user to control the period between MMP writes, and the
  duration of the activity test on import, via a new module parameter
  zfs_multihost_interval. The period is specified in milliseconds. The
  activity test duration is calculated from this value, and from the
  mmp_delay in the "best" uberblock found initially.

  Add a kstat interface to export statistics about Multiple Modifier
  Protection (MMP) updates. Include the last synced txg number, the
  timestamp, the delay since the last MMP update, the VDEV GUID, the VDEV
  label that received the last MMP update, and the VDEV path. Abbreviated
  output below.

    $ cat /proc/spl/kstat/zfs/mypool/multihost
    31 0 0x01 10 880 105092382393521 105144180101111
    txg    timestamp  mmp_delay  vdev_guid      vdev_label  vdev_path
    20468  261337     250274925  68396651780    3           /dev/sda
    20468  261339     252023374  6267402363293  1           /dev/sdc
    20468  261340     252000858  6698080955233  1           /dev/sdx
    20468  261341     251980635  783892869810   2           /dev/sdy
    20468  261342     253385953  8923255792467  3           /dev/sdd
    20468  261344     253336622  042125143176   0           /dev/sdab
    20468  261345     253310522  1200778101278  2           /dev/sde
    20468  261346     253286429  0950576198362  2           /dev/sdt
    20468  261347     253261545  96209817917    3           /dev/sds
    20468  261349     253238188  8555725937673  3           /dev/sdb

  Add a new tunable zfs_multihost_history to specify the number of MMP
  updates to store history for. By default it is set to zero meaning that
  no MMP statistics are stored.

  When using ztest to generate activity, for automated tests of the MMP
  function, some test functions interfere with the test. For example, the
  pool is exported to run zdb and then imported again. Add a new ztest
  function, "-M", to alter ztest behavior to prevent this.

  Add new tests to verify the new functionality. Tests provided by
  Giuseppe Di Natale.

  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed-by: Giuseppe Di Natale <[email protected]>
  Reviewed-by: Ned Bass <[email protected]>
  Reviewed-by: Andreas Dilger <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Olaf Faaland <[email protected]>
  Closes #745
  Closes #6279

* OpenZFS 8067 - zdb should be able to dump literal embedded block pointer (Matthew Ahrens, 2017-07-07, 1 file changed, -3/+51)

  Authored by: Matthew Ahrens <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Alex Reece <[email protected]>
  Reviewed by: Yuri Pankov <[email protected]>
  Approved by: Robert Mustacchi <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: Giuseppe Di Natale <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/8067
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/8173085
  Closes #6319

* Add zfs_nicebytes() to print human-readable sizes (LOLi, 2017-05-02, 1 file changed, -1/+1)

  Some 'zfs', 'zpool' and 'zdb' output strings can be confusing to the
  user when no units are specified. This adds a new zfs_nicenum_format,
  ZFS_NICENUM_BYTES, used to print bytes in their human-readable form.
  Additionally, update some test cases to use machine-parsable 'zfs get'.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: loli10K <[email protected]>
  Closes #2414
  Closes #3185
  Closes #3594
  Closes #6032

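  A rough stand-alone sketch of the kind of byte-to-human-readable
  conversion such a format implies (illustrative only; the real
  zfs_nicebytes() in the ZFS libraries differs in detail):

      #include <stdio.h>

      /* Convert a byte count into a short human-readable string. */
      static void
      nicebytes(unsigned long long bytes, char *buf, size_t buflen)
      {
          static const char *suffix[] = { "B", "K", "M", "G", "T", "P", "E" };
          double v = (double)bytes;
          int i = 0;

          while (v >= 1024.0 && i < 6) {
              v /= 1024.0;
              i++;
          }
          (void) snprintf(buf, buflen, "%.2f%s", v, suffix[i]);
      }

      int
      main(void)
      {
          char buf[16];

          nicebytes(1536ULL * 1024 * 1024, buf, sizeof (buf));
          (void) printf("%s\n", buf);    /* prints 1.50G */
          return (0);
      }
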
* OpenZFS 6392 - zdb: introduce -V for verbatim import (Richard Yao, 2017-04-18, 1 file changed, -9/+9)

  Authored by: Richard Yao <[email protected]>
  Approved by: Dan McDonald <[email protected]>
  Reviewed by: Yuri Pankov <[email protected]>
  Reviewed by: Brian Behlendorf <[email protected]>
  Reviewed by: Matt Ahrens <[email protected]>
  Reviewed by: Pavel Zakharov <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: Giuseppe Di Natale <[email protected]>

  Porting Notes: This was already implemented in ZFS on Linux. This patch
  resolves the deltas present in our version.

  OpenZFS-issue: https://www.illumos.org/issues/6392
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/9bb97de
  Closes #6020

* OpenZFS 7900 - zdb shouldn't print the path of a znode at verbosity < 5 (Alan Somers, 2017-04-14, 1 file changed, -12/+8)

  Authored by: Alan Somers <[email protected]>
  Approved by: Dan McDonald <[email protected]>
  Reviewed by: Paul Dagnelie <[email protected]>
  Reviewed by: Matt Ahrens <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: Giuseppe Di Natale <[email protected]>

  There are two reasons:
  1) Finding a znode's path is slower than printing any other znode
     information at verbosity < 5.
  2) On a corrupted pool like the one mentioned below, zdb will crash when
     it tries to determine the znode's path. But with this patch, zdb can
     still extract useful information from such pools.

  OpenZFS-issue: https://www.illumos.org/issues/7900
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/2b0dee1
  Closes #6016

* OpenZFS 6410 - teach zdb to perform object lookups by path (Brian Behlendorf, 2017-04-13, 1 file changed, -89/+236)

  Authored by: Yuri Pankov <[email protected]>
  Reviewed by: Pavel Zakharov <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: Will Andrews <[email protected]>
  Approved by: Dan McDonald <[email protected]>
  Ported-by: Brian Behlendorf <[email protected]>

  Porting Notes:
  - Replaced zdb.8 with the upstream mdoc zdb.1m version. Updated to
    include Linux-specific features: -V verbatim imports and improved
    label printing (-u and -l).
  - Minor changes to `zdb -h` output to honor the 80-character limit.

  OpenZFS-issue: https://www.illumos.org/issues/6410
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ed61ec1
  Closes #6006

* OpenZFS 6865 - want zfs-tests cases for zpool labelclear command (Yuri Pankov, 2017-04-11, 1 file changed, -0/+24)

  Authored by: Yuri Pankov <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: John Kennedy <[email protected]>
  Approved by: Robert Mustacchi <[email protected]>
  Reviewed-by: loli10K <[email protected]>
  Ported-by: Brian Behlendorf <[email protected]>

  Porting Notes:
  - Updated 'zpool labelclear' and 'zdb -l' such that they attempt to find
    a vdev given solely its short name. This behavior is consistent with
    the upstream OpenZFS code and the test cases depend on it. The actual
    implementation differs slightly due to device naming conventions on
    Linux.
  - auto_online_001_pos, auto_replace_001_pos and add-o_ashift test cases
    updated to expect failure when no label exists.
  - read_efi_label() and zpool_label_disk_check() are read-only operations
    and should use O_RDONLY at open time to enforce this.
  - zpool_label_disk() and zpool_relabel_disk() write the partition
    information using O_DIRECT and fsync() and page cache invalidation to
    ensure a consistent view of the device.
  - dump_label() in zdb should invalidate the page cache in order to get
    the authoritative label from disk.

  OpenZFS-issue: https://www.illumos.org/issues/6865
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/c95076c
  Closes #5981

* Remove dependency on linear ABD (Gvozden Neskovic, 2017-03-29, 1 file changed, -2/+7)

  Wherever possible it's best to avoid depending on a linear ABD. Update
  the code accordingly in the following areas.

  - vdev_raidz
  - zio, zio_checksum
  - zfs_fm
  - change abd_alloc_for_io() to use abd_alloc()

  Reviewed-by: David Quigley <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Gvozden Neskovic <[email protected]>
  Closes #5668

* Dump unique configurations and Uberblocks in zdb -lu (Olaf Faaland, 2017-03-06, 1 file changed, -58/+263)

  For zdb -l, detect when the configuration nvlist in some label l (l>0)
  is the same as a configuration already dumped. If so, do not dump it.

  Make a similar check when dumping Uberblocks for zdb -lu. Check whether
  a label already dumped contains an identical Uberblock. If so, do not
  dump the Uberblock.

  When dumping a configuration or Uberblock, state which labels it is
  found in (0-3), for example:

    labels = 1 2 3

  Detecting redundant uberblocks or configurations is accomplished by
  calculating checksums of the uberblocks and the packed nvlists
  containing the configuration.

  If there is nothing unique to be dumped for a label (i.e. the
  configuration and uberblocks have checksums matching those already
  dumped) print nothing for that label.

  With additional l's or u's, increase verbosity as follows:

  -l     Dump each unique configuration only once. Indicate which labels
         it appears in.
  -ll    In addition, dump label space usage stats.
  -lll   Dump every configuration, unique or not.

  -u     Dump each unique, valid, uberblock only once. Indicate which
         labels it appears in.
  -uu    In addition, state which slots are invalid.
  -uuu   Dump every uberblock, unique or not.
  -uuuu  Dump the uberblock blockpointer (used to be -uuu)

  Make exit values conform to the manual page. Failing to unpack a
  configuration nvlist is considered an error, as well as failing to open
  or read from the device.

  Add three tests, zdb_00{3,4,5}_pos, to verify the above functionality.

  An example of the output:

    ------------------------------------
    LABEL 0
    ------------------------------------
        version: 5000
        name: 'pool'
        state: 1
        txg: 880
        < ... redacted ... >
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data
        labels = 0
    Uberblock[0]
        magic = 0000000000bab10c
        version = 5000
        txg = 0
        guid_sum = 3038694082047428541
        timestamp = 1487715500 UTC = Tue Feb 21 14:18:20 2017
        labels = 0 1 2 3
    Uberblock[4]
        magic = 0000000000bab10c
        version = 5000
        txg = 772
        guid_sum = 9045970794941528051
        timestamp = 1487727291 UTC = Tue Feb 21 17:34:51 2017
        labels = 0
    < ... redacted ... >
    ------------------------------------
    LABEL 1
    ------------------------------------
        version: 5000
        name: 'pool'
        state: 1
        txg: 14
        < ... redacted ... >
            com.delphix:embedded_data
        labels = 1 2 3
    Uberblock[4]
        magic = 0000000000bab10c
        version = 5000
        txg = 4
        guid_sum = 7793930272573252584
        timestamp = 1487727521 UTC = Tue Feb 21 17:38:41 2017
        labels = 1 2 3
    < ... redacted ... >

  Reviewed-by: Tim Chase <[email protected]>
  Reviewed-by: Don Brady <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Olaf Faaland <[email protected]>
  Closes #5738

* Fix coverity defects: CID 155928 (Don Brady, 2017-02-07, 1 file changed, -8/+11)

  CID 155928: Integer handling issues (DIVIDE_BY_ZERO)

  In the current vdev label, the leaf count is always non-zero but it
  doesn't hurt to check the count for future proofing.

  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Giuseppe Di Natale <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Don Brady <[email protected]>
  Closes #5749

* OpenZFS 6866 - zdb -l non-zero status if no label (Don Brady, 2017-02-03, 1 file changed, -30/+32)

  Authored by: Yuri Pankov <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: John Kennedy <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Ported-by: Don Brady <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/6866
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/3e4fae5
  Closes #5730

  Porting Notes:
  - Omitted the illumos-specific `/dev/dsk` and `/dev/rdsk` path
    conversions since they don't apply on Linux.

* Add nvlist payload stats for zdb -ll dump (Don Brady, 2017-02-02, 1 file changed, -1/+133)

  When dumping the ZFS vdev label with 'zdb -ll', also dump some nvlist
  stats to help analyze the current footprint.

  Reviewed-by: Tony Hutter <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Signed-off-by: Don Brady <[email protected]>
  Closes #5724

* Remove lint ifdef checks in zdb and dbuf (Giuseppe Di Natale, 2017-02-01, 1 file changed, -7/+0)

  This is effectively dead code for the Linux implementation which can be
  removed to improve readability. We want the linter to check the real
  production/debug build as much as possible.

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Signed-off-by: Giuseppe Di Natale <[email protected]>
  Closes #5722

* OpenZFS 7545 - zdb should disable reference tracking (Giuseppe Di Natale, 2017-01-31, 1 file changed, -0/+7)

  Authored by: Matthew Ahrens <[email protected]>
  Reviewed by: Dan Kimmel <[email protected]>
  Reviewed by: Steve Gonczi <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Approved by: Robert Mustacchi <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: Giuseppe Di Natale <[email protected]>

  Porting Notes: Moved reference_tracking_enable and reference_history
  outside of ZFS_DEBUG.

  OpenZFS-issue: https://www.illumos.org/issues/7545
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/4dd77f9
  Closes #5701

* OpenZFS 7280 - Allow changing global libzpool variables in zdb and ztest through command line (George Melikov, 2017-01-31, 1 file changed, -5/+13)

  Authored by: Pavel Zakharov <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: Dan Kimmel <[email protected]>
  Approved by: Robert Mustacchi <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: George Melikov <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/7280
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/0e60744
  Closes #5676

* OpenZFS 7277 - zdb should be able to print zfs_dbgmsg's (George Melikov, 2017-01-28, 1 file changed, -4/+22)

  Porting notes:
  - 'zfs_dbgmsg_print()' reintroduced to userspace.

  Authored by: Pavel Zakharov <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: Igor Kozhukhov <[email protected]>
  Approved by: Dan McDonald <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Giuseppe Di Natale <[email protected]>
  Ported-by: George Melikov <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/7277
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/29bdd2f
  Closes #5684

* OpenZFS 6880 - zdb incorrectly reports feature count mismatch when feature is disabled (George Melikov, 2017-01-24, 1 file changed, -1/+2)

  Authored by: Matthew Ahrens <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Prakash Surya <[email protected]>
  Approved by: Robert Mustacchi <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: George Melikov <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/6880
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/c5d1600
  Closes #5641

* Fix unused variable warning (Brian Behlendorf, 2017-01-19, 1 file changed, -2/+2)

  The local mg variable is unused in non-debug builds. Wrap the variable
  in ASSERTV() so that it's only present in the debug build. Introduced
  by OpenZFS 7303.

  Reviewed-by: Don Brady <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #5616

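  For context, a sketch of how an ASSERTV()-style helper typically works
  (illustrative only; the real macro is defined in the SPL/libspl debug
  headers):

      #include <assert.h>

      /* Emit a declaration only when assertions are compiled in. */
      #ifdef DEBUG
      #define ASSERTV(x)  x
      #define ASSERT(x)   assert(x)
      #else
      #define ASSERTV(x)
      #define ASSERT(x)   ((void)0)
      #endif

      int
      main(void)
      {
          ASSERTV(int mg = 42);   /* absent, so never "unused", without -DDEBUG */
          ASSERT(mg == 42);
          return (0);
      }
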
* OpenZFS 7303 - dynamic metaslab selection (Don Brady, 2017-01-12, 1 file changed, -4/+19)

  This change introduces a new weighting algorithm to improve metaslab
  selection. The new weighting algorithm relies on the SPACEMAP_HISTOGRAM
  feature. As a result, the metaslab weight now encodes the type of
  weighting algorithm used (size-based vs segment-based).

  Porting Notes: The metaslab allocation tracing code is conditionally
  removed on Linux (dependent on the mdb debugger).

  Authored by: George Wilson <[email protected]>
  Reviewed by: Alex Reece <[email protected]>
  Reviewed by: Chris Siden <[email protected]>
  Reviewed by: Dan Kimmel <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: Paul Dagnelie <[email protected]>
  Reviewed by: Pavel Zakharov [email protected]
  Reviewed by: Prakash Surya <[email protected]>
  Reviewed by: Don Brady <[email protected]>
  Reviewed-by: Brian Behlendorf <[email protected]>
  Ported-by: Don Brady <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/7303
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/d5190931bd
  Closes #5404

* Fix spelling (ka7, 2017-01-03, 1 file changed, -2/+2)

  Reviewed-by: Brian Behlendorf <[email protected]>
  Reviewed-by: Giuseppe Di Natale <[email protected]>
  Reviewed-by: George Melikov <[email protected]>
  Reviewed-by: Haakan T Johansson <[email protected]>
  Closes #5547
  Closes #5543

* Use cstyle -cpP in `make cstyle` check (Brian Behlendorf, 2016-12-12, 1 file changed, -1/+1)

  Enable picky cstyle checks and resolve the new warnings. The vast
  majority of the changes needed were to handle minor issues with
  whitespace formatting. This patch contains no functional changes.

  Non-whitespace changes are as follows:
  * 8 times ; to { } in for/while loop
  * fix missing ; in cmd/zed/agents/zfs_diagnosis.c
  * comment (confim -> confirm)
  * change endline , to ; in cmd/zpool/zpool_main.c
  * a number of /* BEGIN CSTYLED */ /* END CSTYLED */ blocks
  * /* CSTYLED */ markers
  * change == 0 to !
  * ulong to unsigned long in module/zfs/dsl_scan.c
  * rearrangement of module_param lines in module/zfs/metaslab.c
  * add { } block around statement after for_each_online_node

  Reviewed-by: Giuseppe Di Natale <[email protected]>
  Reviewed-by: Håkan Johansson <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #5465

* DLPX-44812 integrate EP-220 large memory scalability (David Quigley, 2016-11-29, 2 files changed, -45/+62)

* Fix coverity defects: CID 147511, 147513 (cao, 2016-10-24, 1 file changed, -1/+1)

  CID 147511: Type: Dereference before null check
  CID 147513: Type: Dereference before null check

  Reviewed-by: Brian Behlendorf <[email protected]>
  Signed-off-by: cao.xuewen <[email protected]>
  Closes #5306

* Add support for user/group dnode accounting & quota (Jinshan Xiong, 2016-10-07, 1 file changed, -1/+3)

  This patch tracks dnode usage for each user/group in the
  DMU_USER/GROUPUSED_OBJECT ZAPs. ZAP entries dedicated to dnode
  accounting have the key prefixed with "obj-" followed by the UID/GID in
  string format (as done for the block accounting).

  A new SPA feature has been added for dnode accounting as well as a new
  ZPL version. The SPA feature must be enabled in the pool before
  upgrading the zfs filesystem. During the zfs version upgrade, a
  "quotacheck" will be executed by marking all dnode as dirty.

  ZoL-bug-id: https://github.com/zfsonlinux/zfs/issues/3500
  Signed-off-by: Jinshan Xiong <[email protected]>
  Signed-off-by: Johann Lombardi <[email protected]>

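  A minimal sketch of the key layout this describes (hypothetical helper;
  the hex formatting of the id is assumed to follow the existing
  block-accounting convention, and the real ZAP update path in the DMU is
  more involved):

      #include <stdio.h>
      #include <inttypes.h>

      /* Build the per-id ZAP key used for dnode accounting. */
      static void
      dnode_accounting_key(uint64_t id, char *buf, size_t buflen)
      {
          (void) snprintf(buf, buflen, "obj-%" PRIx64, id);
      }

      int
      main(void)
      {
          char key[32];

          dnode_accounting_key(1000, key, sizeof (key));
          (void) printf("%s\n", key);    /* prints obj-3e8 */
          return (0);
      }
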
* OpenZFS 6950 - ARC should cache compressed data (George Wilson, 2016-09-13, 1 file changed, -1/+1)

  Authored by: George Wilson <[email protected]>
  Reviewed by: Prakash Surya <[email protected]>
  Reviewed by: Dan Kimmel <[email protected]>
  Reviewed by: Matt Ahrens <[email protected]>
  Reviewed by: Paul Dagnelie <[email protected]>
  Reviewed by: Tom Caputi <[email protected]>
  Reviewed by: Brian Behlendorf <[email protected]>
  Ported by: David Quigley <[email protected]>

  This review covers the reading and writing of compressed arc headers,
  sharing data between the arc_hdr_t and the arc_buf_t, and the
  implementation of a new dbuf cache to keep frequently accessed data
  uncompressed.

  I've added a new member to the l1 arc hdr called b_pdata. The b_pdata
  always hangs off the arc_buf_hdr_t (if an L1 hdr is in use) and points
  to the physical block for that DVA. The physical block may or may not
  be compressed. If compressed arc is enabled and the block on-disk is
  compressed, then the b_pdata will match the block on-disk and remain
  compressed in memory. If the block on disk is not compressed, then
  neither will the b_pdata. Lastly, if compressed arc is disabled, then
  b_pdata will always be an uncompressed version of the on-disk block.

  Typically the arc will cache only the arc_buf_hdr_t and will
  aggressively evict any arc_buf_t's that are no longer referenced. This
  means that the arc will primarily have compressed blocks as the
  arc_buf_t's are considered overhead and are always uncompressed. When a
  consumer reads a block we first look to see if the arc_buf_hdr_t is
  cached. If the hdr is cached then we allocate a new arc_buf_t and
  decompress the b_pdata contents into the arc_buf_t's b_data. If the hdr
  already has an arc_buf_t, then we will allocate an additional arc_buf_t
  and bcopy the uncompressed contents from the first arc_buf_t to the new
  one.

  Writing to the compressed arc requires that we first discard the
  b_pdata since the physical block is about to be rewritten. The new data
  contents will be passed in via an arc_buf_t (uncompressed) and during
  the I/O pipeline stages we will copy the physical block contents to a
  newly allocated b_pdata.

  When an l2arc is in use it will also take advantage of the b_pdata. Now
  the l2arc will always write the contents of b_pdata to the l2arc. This
  means that when compressed arc is enabled the l2arc blocks are
  identical to those stored in the main data pool. This provides a
  significant advantage since we can leverage the bp's checksum when
  reading from the l2arc to determine if the contents are valid. If the
  compressed arc is disabled, then we must first transform the read block
  to look like the physical block in the main data pool before comparing
  the checksum and determining it is valid.

  OpenZFS-issue: https://www.illumos.org/issues/6950
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/7fc10f0
  Issue #5078

* zdb: fencepost error at zdb_cb.zcb_embedded_histogram[][] (Gvozden Neskovic, 2016-08-16, 1 file changed, -1/+1)

  Erroneous access detected by gcc UndefinedBehaviorSanitizer:
  `zdb.c:2424:7: runtime error: index 112 out of bounds for type
  'uint64_t [112]'`

  Fix: increase histogram size by 1 to accommodate all possible sizes.

  Signed-off-by: Gvozden Neskovic <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #4934
  Issue #4883

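  The class of bug, in stand-alone form (hypothetical sizes, not the
  actual zdb histogram): when the largest value ever recorded is N, the
  array needs N + 1 slots so that index N stays in bounds.

      #include <stdio.h>
      #include <stdint.h>

      #define MAX_VALUE 112   /* hypothetical largest index recorded */

      int
      main(void)
      {
          /* MAX_VALUE + 1 slots so histogram[MAX_VALUE] is a valid access. */
          uint64_t histogram[MAX_VALUE + 1] = { 0 };

          histogram[MAX_VALUE]++;
          (void) printf("%llu\n", (unsigned long long)histogram[MAX_VALUE]);
          return (0);
      }
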
* Fix infinite loop when zdb -R with d flag (Chunwei Chen, 2016-08-11, 1 file changed, -2/+2)

  Also print decompression progress to stderr so it doesn't pollute the
  raw output when the r flag is used.

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #4956

* Fix gcc -Warray-bounds check for dump_object() in zdb (Brian Behlendorf, 2016-08-02, 1 file changed, -5/+14)

  As of gcc 6.1.1 20160621 (Red Hat 6.1.1-3) an array-bounds warning is
  detected in zdb's dump_object() function. The analysis is correct but
  difficult to interpret because this is implemented as a macro. Rework
  ZDB_OT_NAME into a function and remove the case detected by gcc which
  is a side effect of the DMU_OT_IS_VALID() macro.

    zdb.c: In function ‘dump_object’:
    zdb.c:1931:288: error: array subscript is outside array bounds
    [-Werror=array-bounds]

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Gvozden Neskovic <[email protected]>
  Closes #4907

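  The shape of that rework in stand-alone form, replacing a
  bounds-sensitive lookup macro with a small function that checks the
  index first (the table and type names here are invented for
  illustration, not the real DMU object-type table):

      #include <stdio.h>

      static const char *type_names[] = { "none", "object directory", "plain file" };
      #define NUM_TYPES (sizeof (type_names) / sizeof (type_names[0]))

      /*
       * A function makes the bounds check explicit and easy for both the
       * compiler and the reader to verify, unlike a nested macro.
       */
      static const char *
      type_name(unsigned int type)
      {
          if (type < NUM_TYPES)
              return (type_names[type]);
          return ("UNKNOWN");
      }

      int
      main(void)
      {
          (void) printf("%s\n", type_name(2));    /* plain file */
          (void) printf("%s\n", type_name(99));   /* UNKNOWN */
          return (0);
      }
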
* Fix zdb crash with 4K-only devices (Brian Behlendorf, 2016-07-27, 1 file changed, -3/+13)

  Here's the problem - on 4K native devices in userland on Linux using
  O_DIRECT, buffers must be 4K aligned or I/O will fail with EINVAL,
  causing zdb (and others) to coredump. Since userland probably doesn't
  need optimized buffer caches, we just force 4K alignment on everything.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Signed-off-by: Gvozden Neskovic <[email protected]>
  Closes #4479

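  A minimal sketch of the alignment requirement being described
  (illustrative only, not the actual zdb/libzpool change): O_DIRECT I/O
  against a 4K-sector device wants 4K-aligned buffers, which
  posix_memalign() provides.

      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          void *buf;

          /* 4K-align the buffer so O_DIRECT I/O won't fail with EINVAL. */
          if (posix_memalign(&buf, 4096, 128 * 1024) != 0) {
              (void) fprintf(stderr, "posix_memalign failed\n");
              return (1);
          }
          (void) printf("buffer at %p is 4K-aligned\n", buf);
          free(buf);
          return (0);
      }
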
* OpenZFS 6314 - buffer overflow in dsl_dataset_name (Igor Kozhukhov, 2016-06-28, 1 file changed, -3/+2)

  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Prakash Surya <[email protected]>
  Reviewed by: Igor Kozhukhov <[email protected]>
  Approved by: Dan McDonald <[email protected]>
  Ported-by: Brian Behlendorf <[email protected]>

  OpenZFS-issue: https://www.illumos.org/issues/6314
  OpenZFS-commit: https://github.com/openzfs/openzfs/commit/d6160ee

* Implement large_dnode pool feature (Ned Bass, 2016-06-24, 2 files changed, -10/+13)

| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Justification ------------- This feature adds support for variable length dnodes. Our motivation is to eliminate the overhead associated with using spill blocks. Spill blocks are used to store system attribute data (i.e. file metadata) that does not fit in the dnode's bonus buffer. By allowing a larger bonus buffer area the use of a spill block can be avoided. Spill blocks potentially incur an additional read I/O for every dnode in a dnode block. As a worst case example, reading 32 dnodes from a 16k dnode block and all of the spill blocks could issue 33 separate reads. Now suppose those dnodes have size 1024 and therefore don't need spill blocks. Then the worst case number of blocks read is reduced to from 33 to two--one per dnode block. In practice spill blocks may tend to be co-located on disk with the dnode blocks so the reduction in I/O would not be this drastic. In a badly fragmented pool, however, the improvement could be significant. ZFS-on-Linux systems that make heavy use of extended attributes would benefit from this feature. In particular, ZFS-on-Linux supports the xattr=sa dataset property which allows file extended attribute data to be stored in the dnode bonus buffer as an alternative to the traditional directory-based format. Workloads such as SELinux and the Lustre distributed filesystem often store enough xattr data to force spill bocks when xattr=sa is in effect. Large dnodes may therefore provide a performance benefit to such systems. Other use cases that may benefit from this feature include files with large ACLs and symbolic links with long target names. Furthermore, this feature may be desirable on other platforms in case future applications or features are developed that could make use of a larger bonus buffer area. Implementation -------------- The size of a dnode may be a multiple of 512 bytes up to the size of a dnode block (currently 16384 bytes). A dn_extra_slots field was added to the current on-disk dnode_phys_t structure to describe the size of the physical dnode on disk. The 8 bits for this field were taken from the zero filled dn_pad2 field. The field represents how many "extra" dnode_phys_t slots a dnode consumes in its dnode block. This convention results in a value of 0 for 512 byte dnodes which preserves on-disk format compatibility with older software. Similarly, the in-memory dnode_t structure has a new dn_num_slots field to represent the total number of dnode_phys_t slots consumed on disk. Thus dn->dn_num_slots is 1 greater than the corresponding dnp->dn_extra_slots. This difference in convention was adopted because, unlike on-disk structures, backward compatibility is not a concern for in-memory objects, so we used a more natural way to represent size for a dnode_t. The default size for newly created dnodes is determined by the value of a new "dnodesize" dataset property. By default the property is set to "legacy" which is compatible with older software. Setting the property to "auto" will allow the filesystem to choose the most suitable dnode size. 
Currently this just sets the default dnode size to 1k, but future code improvements could dynamically choose a size based on observed workload patterns. Dnodes of varying sizes can coexist within the same dataset and even within the same dnode block. For example, to enable automatically-sized dnodes, run # zfs set dnodesize=auto tank/fish The user can also specify literal values for the dnodesize property. These are currently limited to powers of two from 1k to 16k. The power-of-2 limitation is only for simplicity of the user interface. Internally the implementation can handle any multiple of 512 up to 16k, and consumers of the DMU API can specify any legal dnode value. The size of a new dnode is determined at object allocation time and stored as a new field in the znode in-memory structure. New DMU interfaces are added to allow the consumer to specify the dnode size that a newly allocated object should use. Existing interfaces are unchanged to avoid having to update every call site and to preserve compatibility with external consumers such as Lustre. The new interfaces names are given below. The versions of these functions that don't take a dnodesize parameter now just call the _dnsize() versions with a dnodesize of 0, which means use the legacy dnode size. New DMU interfaces: dmu_object_alloc_dnsize() dmu_object_claim_dnsize() dmu_object_reclaim_dnsize() New ZAP interfaces: zap_create_dnsize() zap_create_norm_dnsize() zap_create_flags_dnsize() zap_create_claim_norm_dnsize() zap_create_link_dnsize() The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The spa_maxdnodesize() function should be used to determine the maximum bonus length for a pool. These are a few noteworthy changes to key functions: * The prototype for dnode_hold_impl() now takes a "slots" parameter. When the DNODE_MUST_BE_FREE flag is set, this parameter is used to ensure the hole at the specified object offset is large enough to hold the dnode being created. The slots parameter is also used to ensure a dnode does not span multiple dnode blocks. In both of these cases, if a failure occurs, ENOSPC is returned. Keep in mind, these failure cases are only possible when using DNODE_MUST_BE_FREE. If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0. dnode_hold_impl() will check if the requested dnode is already consumed as an extra dnode slot by an large dnode, in which case it returns ENOENT. * The function dmu_object_alloc() advances to the next dnode block if dnode_hold_impl() returns an error for a requested object. This is because the beginning of the next dnode block is the only location it can safely assume to either be a hole or a valid starting point for a dnode. * dnode_next_offset_level() and other functions that iterate through dnode blocks may no longer use a simple array indexing scheme. These now use the current dnode's dn_num_slots field to advance to the next dnode in the block. This is to ensure we properly skip the current dnode's bonus area and don't interpret it as a valid dnode. zdb --- The zdb command was updated to display a dnode's size under the "dnsize" column when the object is dumped. For ZIL create log records, zdb will now display the slot count for the object. ztest ----- Ztest chooses a random dnodesize for every newly created object. The random distribution is more heavily weighted toward small dnodes to better simulate real-world datasets. Unused bonus buffer space is filled with non-zero values computed from the object number, dataset id, offset, and generation number. 
This helps ensure that the dnode traversal code properly skips the interior regions of large dnodes, and that these interior regions are not overwritten by data belonging to other dnodes. A new test visits each object in a dataset. It verifies that the actual dnode size matches what was stored in the ztest block tag when it was created. It also verifies that the unused bonus buffer space is filled with the expected data patterns. ZFS Test Suite -------------- Added six new large dnode-specific tests, and integrated the dnodesize property into existing tests for zfs allow and send/recv. Send/Receive ------------ ZFS send streams for datasets containing large dnodes cannot be received on pools that don't support the large_dnode feature. A send stream with large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be unrecognized by an incompatible receiving pool so that the zfs receive will fail gracefully. While not implemented here, it may be possible to generate a backward-compatible send stream from a dataset containing large dnodes. The implementation may be tricky, however, because the send object record for a large dnode would need to be resized to a 512 byte dnode, possibly kicking in a spill block in the process. This means we would need to construct a new SA layout and possibly register it in the SA layout object. The SA layout is normally just sent as an ordinary object record. But if we are constructing new layouts while generating the send stream we'd have to build the SA layout object dynamically and send it at the end of the stream. For sending and receiving between pools that do support large dnodes, the drr_object send record type is extended with a new field to store the dnode slot count. This field was repurposed from unused padding in the structure. ZIL Replay ---------- The dnode slot count is stored in the uppermost 8 bits of the lr_foid field. The bits were unused as the object id is currently capped at 48 bits. Resizing Dnodes --------------- It should be possible to resize a dnode when it is dirtied if the current dnodesize dataset property differs from the dnode's size, but this functionality is not currently implemented. Clearly a dnode can only grow if there are sufficient contiguous unused slots in the dnode block, but it should always be possible to shrink a dnode. Growing dnodes may be useful to reduce fragmentation in a pool with many spill blocks in use. Shrinking dnodes may be useful to allow sending a dataset to a pool that doesn't support the large_dnode feature. Feature Reference Counting -------------------------- The reference count for the large_dnode pool feature tracks the number of datasets that have ever contained a dnode of size larger than 512 bytes. The first time a large dnode is created in a dataset the dataset is converted to an extensible dataset. This is a one-way operation and the only way to decrement the feature count is to destroy the dataset, even if the dataset no longer contains any large dnodes. The complexity of reference counting on a per-dnode basis was too high, so we chose to track it on a per-dataset basis similarly to the large_block feature. Signed-off-by: Ned Bass <[email protected]> Signed-off-by: Brian Behlendorf <[email protected]> Closes #3542
* Consistently use parsable instead of parseable (Christer Ekholm, 2016-05-23, 1 file changed, -1/+1)

  This is a purely cosmetic change, to consistently prefer one of two
  (both acceptable) choices for the word parsable in documentation and
  code. I don't really care which to use, but according to Wiktionary
  https://en.wiktionary.org/wiki/parsable#English parsable is preferred.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #4682

* Cleanup linking (Richard Yao, 2016-03-18, 1 file changed, -2/+0)

  I noticed during code review of zfsonlinux/zfs#4385 that the author of
  a commit had peppered the various Makefile.am files with
  `$(TIRPC_LIBS)` when putting it into `lib/libspl/Makefile.am` should
  have sufficed. Upon further examination, it seems that he had copied
  what we do with `$(ZLIB)`. We also have a bit of that with `-ldl` too.
  Unfortunately, what we do is wrong, so let's fix it to set a good
  example for future contributors.

  In addition, we have multiple `-lz` and `-luuid` passed to the compiler
  because each `AC_CHECK_LIB` adds it to `$LIBS`. That is somewhat
  annoying to see, so we switch to `AC_SEARCH_LIBS` to avoid it. This is
  consistent with the recommendation to use `AC_SEARCH_LIBS` over
  `AC_CHECK_LIB` by autotools upstream:

    https://www.gnu.org/software/autoconf/manual/autoconf-2.66/html_node/Libraries.html

  In an ideal world, this would translate into improvements in ELF's
  `DT_NEEDED` entries, but that is not the case because of a couple of
  bugs in libtool.

  The first bug causes libtool to overlink by using static link
  dependencies for dynamic linking:

    https://wiki.mageia.org/en/Overlinking_issues_in_packaging#libtool_issues

  The workaround for this should be to pass `-Wl,--as-needed` in
  `LDFLAGS`. That leads us to the second bug, where libtool passes
  `LDFLAGS` after the libraries are specified and `ld` will only honor
  `--as-needed` on libraries specified before it:

    https://sigquit.wordpress.com/2011/02/16/why-asneeded-doesnt-work-as-expected-for-your-libraries-on-your-autotools-project/

  There are a few possible workarounds for the second bug. One is to
  either patch the compiler spec file to specify `-Wl,--as-needed` or
  pass `-Wl,--as-needed` via `CC` like `CC='gcc -Wl,--as-needed'` so that
  it is specified early. Another is to patch ltmain.sh like Gentoo does:

    https://gitweb.gentoo.org/repo/gentoo.git/tree/eclass/ELT-patches/as-needed

  Without one of those workarounds, this cleanup provides no benefit in
  terms of `DT_NEEDED` entry generation. It should still be an
  improvement because it nicely simplifies the code while encouraging
  good habits when patching autotools scripts.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #4426

* Correctly parse -R flag arguments (Tim Chase, 2016-02-05, 1 file changed, -1/+3)

  Currently, only the 'b' flag takes an argument which is an offset into
  the block at which a blkptr should be decoded. The index into the flag
  string needed to be updated after parsing an argument.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #4304

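  A generic sketch of that parsing pattern (not zdb's exact -R flag
  syntax; the flag string here is hypothetical): when one flag character
  consumes a numeric argument, the scan index has to be advanced past the
  digits, otherwise they are re-parsed as flags.

      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          const char *flags = "db123r";   /* pretend 'b' takes a number */
          size_t i;

          for (i = 0; flags[i] != '\0'; i++) {
              switch (flags[i]) {
              case 'b': {
                  char *end;
                  unsigned long off = strtoul(&flags[i + 1], &end, 10);

                  (void) printf("b argument = %lu\n", off);
                  /* Skip past the digits just consumed. */
                  i = (size_t)(end - flags) - 1;
                  break;
              }
              default:
                  (void) printf("flag '%c'\n", flags[i]);
                  break;
              }
          }
          return (0);
      }
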
* Illumos 4891 - want zdb option to dump all metadata (Matthew Ahrens, 2016-01-11, 1 file changed, -6/+11)

  4891 want zdb option to dump all metadata
  Reviewed by: Sonu Pillai <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Christopher Siden <[email protected]>
  Reviewed by: Dan McDonald <[email protected]>
  Reviewed by: Richard Lowe <[email protected]>
  Approved by: Garrett D'Amore <[email protected]>

  We'd like a way for zdb to dump metadata in a machine-readable format,
  so that we can bring that back from a customer site for in-house
  diagnosis. Think of it as a crash dump for zpools, which can be used
  for post-mortem analysis of a malfunctioning pool.

  References:
    https://www.illumos.org/issues/4891
    https://github.com/illumos/illumos-gate/commit/df15e41

  Porting notes:
  - [cmd/zdb/zdb.c]
    - a5778ea zdb: Introduce -V for verbatim import
    - In main() getopt 'opt' variable removed and the code was brought
      back in line with illumos.
  - [lib/libzpool/kernel.c]
    - 1e33ac1 Fix Solaris thread dependency by using pthreads
    - f0e324f Update utsname support
    - 4d58b69 Fix vn_open/vn_rdwr error handling
    - In vn_open() allocate 'dumppath' on heap instead of stack
    - Properly handle 'dump_fd == -1' error path
    - Free 'realpath' after added vn_dumpdir_code block

  Ported-by: kernelOfTruth [email protected]
  Signed-off-by: Brian Behlendorf <[email protected]>

* Illumos 5960, 5925 (Paul Dagnelie, 2016-01-08, 1 file changed, -1/+4)

  5960 zfs recv should prefetch indirect blocks
  5925 zfs receive -o origin=
  Reviewed by: Prakash Surya <[email protected]>
  Reviewed by: Matthew Ahrens <[email protected]>

  References:
    https://www.illumos.org/issues/5960
    https://www.illumos.org/issues/5925
    https://github.com/illumos/illumos-gate/commit/a2cdcdd

  Porting notes:
  - [lib/libzfs/libzfs_sendrecv.c]
    - b8864a2 Fix gcc cast warnings
    - 325f023 Add linux kernel device support
    - 5c3f61e Increase Linux pipe buffer size on 'zfs receive'
  - [module/zfs/zfs_vnops.c]
    - 3558fd7 Prototype/structure update for Linux
    - c12e3a5 Restructure zfs_readdir() to fix regressions
  - [module/zfs/zvol.c]
    - Function @zvol_map_block() isn't needed in ZoL
    - 9965059 Prefetch start and end of volumes
  - [module/zfs/dmu.c]
    - Fixed ISO C90 - mixed declarations and code
    - Function dmu_prefetch() 'int i' is initialized before the following
      code block (c90 vs. c99)
  - [module/zfs/dbuf.c]
    - fc5bb51 Fix stack dbuf_hold_impl()
    - 9b67f60 Illumos 4757, 4913
    - 34229a2 Reduce stack usage for recursive traverse_visitbp()
  - [module/zfs/dmu_send.c]
    - Fixed ISO C90 - mixed declarations and code
    - b58986e Use large stacks when available
    - 241b541 Illumos 5959 - clean up per-dataset feature count code
    - 77aef6f Use vmem_alloc() for nvlists
    - 00b4602 Add linux kernel memory support

  Ported-by: kernelOfTruth [email protected]
  Signed-off-by: Brian Behlendorf <[email protected]>

* Add missing -V option to zdb (Brian Behlendorf, 2016-01-08, 1 file changed, -2/+2)

  Add missing getopt specifier for `zdb -V` verbatim option and set flag
  with correct bitwise operator.

  Signed-off-by: Brian Behlendorf <[email protected]>

* Illumos 3604 - zdb should print bpobjs more verbosely (fix zdb hang) (Matthew Ahrens, 2016-01-05, 1 file changed, -0/+1)

  3604 zdb should print bpobjs more verbosely (fix zdb hang)

  References:
    https://github.com/illumos/illumos-gate/commit/7706186
    https://www.illumos.org/issues/3604
    https://lists.freebsd.org/pipermail/svn-src-vendor/2015-August/002411.html
    https://lists.freebsd.org/pipermail/svn-src-head/2015-August/075195.html

  Porting notes: In ZoL "5810 zdb should print details of bpobj" was
  merged prior to this change so it must be applied to the new location.

  Ported-by: kernelOfTruth [email protected]
  Signed-off-by: Brian Behlendorf <[email protected]>

* Unconditionally build zdb and ztest with -DDEBUG (Richard Yao, 2015-12-14, 1 file changed, -0/+2)

  Illumos unconditionally builds zdb and ztest with -DDEBUG. This helps
  catch bugs and eliminates the need for commits like
  202619623022722f30c2ee49931a4fa6896421c7, which changed ASSERTs to
  VERIFYs. The following files in the illumos tree show this:

    usr/src/cmd/zdb/Makefile.com
    usr/src/cmd/ztest/Makefile.com

  Given the usefulness of having early failure in these tools, we should
  do it too.

  Signed-off-by: Richard Yao <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Issue #4095

* Illumos 5959 - clean up per-dataset feature count code (Matthew Ahrens, 2015-12-04, 1 file changed, -14/+32)

  5959 clean up per-dataset feature count code
  Reviewed by: Toomas Soome <[email protected]>
  Reviewed by: George Wilson <[email protected]>
  Reviewed by: Alex Reece <[email protected]>
  Approved by: Richard Lowe <[email protected]>

  References:
    https://www.illumos.org/issues/5959
    https://github.com/illumos/illumos-gate/commit/ca0cc39

  Porting notes: illumos code doesn't check for feature_get_refcount()
  returning ENOTSUP (which means feature is disabled) in zdb. zfsonlinux
  added a check in https://github.com/zfsonlinux/zfs/commit/784652c due
  to #3468. The check was reintroduced here.

  Ported-by: Witaut Bajaryn <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #3965

* Fix zdb_dump_block on little endian systems (Chunwei Chen, 2015-12-02, 1 file changed, -0/+4)

  When dumping a block on a little endian system the data must be byte
  swapped to display correctly. Example incorrect output:

    $ echo 0123456789abcdef > aaa
    $ zdb -eR pp 3:1ee00:200
    3:1ee00:200
              0 1 2 3 4 5 6 7   8 9 a b c d e f  0123456789abcdef
    000000:  3736353433323130  6665646362613938  0123456789abcdef
    000010:  000000000000000a  0000000000000000  ................

  Signed-off-by: Chunwei Chen <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #4020

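  The underlying issue in stand-alone form (a sketch using the gcc/clang
  byte-swap builtin; the actual fix lives inside zdb_dump_block()): the
  64-bit words read into memory must be byte-swapped on a little-endian
  host before their hex form lines up with the on-disk byte order.

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      int
      main(void)
      {
          const char data[8] = { '0', '1', '2', '3', '4', '5', '6', '7' };
          uint64_t word;

          (void) memcpy(&word, data, sizeof (word));
          /* On a little-endian host this prints 3736353433323130. */
          (void) printf("raw:     %016llx\n", (unsigned long long)word);
          /* Swapped, the bytes appear in their on-disk order. */
          (void) printf("swapped: %016llx\n",
              (unsigned long long)__builtin_bswap64(word));
          return (0);
      }
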
* zdb: segfault in dump_bpobj_subobjs() (Tim Chase, 2015-10-13, 1 file changed, -2/+2)

  Avoid buffer overrun on all-zero bpobj subobjects by using signed array
  index. Also fix the type cast on the printf() argument.

  Signed-off-by: Tim Chase <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #3905

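  The general hazard this guards against, in stand-alone form
  (hypothetical array, not the actual bpobj code): with an unsigned
  index, an element count of zero makes `count - 1` wrap to a huge value,
  so a descending loop runs far off the end of the buffer.

      #include <stdio.h>
      #include <stdint.h>

      int
      main(void)
      {
          uint64_t vals[4] = { 0 };
          uint64_t count = 0;     /* an "all-zero" (empty) subobj list */
          int64_t i;

          /*
           * With a uint64_t index, "i >= 0" is always true and the first
           * iteration would already read vals[count - 1], i.e. an index
           * of 2^64 - 1. A signed index lets the empty case fall through.
           */
          for (i = (int64_t)count - 1; i >= 0; i--)
              (void) printf("%llu\n", (unsigned long long)vals[i]);

          (void) printf("done\n");
          return (0);
      }
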
* Support parallel build trees (VPATH builds) (Turbo Fredriksson, 2015-07-17, 1 file changed, -2/+2)

  Build products from an out of tree build should be written relative to
  the build directory. Sources should be referred to by their locations
  in the source directory.

  This is accomplished by adding the 'src' and 'obj' variables for the
  module Makefile.am, using relative paths to reference source files, and
  by setting VPATH when source files are not co-located with the
  Makefile. This enables the following:

    $ mkdir build
    $ cd build
    $ ../configure \
          --with-spl=$HOME/src/git/spl/ \
          --with-spl-obj=$HOME/src/git/spl/build
    $ make -s

  This change also has the advantage of resolving the following warning
  which is generated by modern versions of automake.

    Makefile.am:00: warning: source file 'xxx' is in a subdirectory,
    Makefile.am:00: but option 'subdir-objects' is disabled

  Signed-off-by: Turbo Fredriksson <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #1082

* Fix for recent zdb -h | -i crashes (seg fault) (Don Brady, 2015-06-26, 2 files changed, -7/+20)

  Allocating SPA_MAXBLOCKSIZE on the stack is a bad idea (even with the
  old 128K size). Use malloc instead when allocating temporary block
  buffer memory.

  Signed-off-by: Don Brady <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #3522

* zdb -d has false positive warning when feature@large_blocks=disabled (Don Brady, 2015-06-26, 1 file changed, -11/+16)

  Skip large blocks feature refcount checking if feature is disabled.

  Signed-off-by: Don Brady <[email protected]>
  Signed-off-by: Brian Behlendorf <[email protected]>
  Closes #3468

* Illumos 5369 - arc flags should be an enum (George Wilson, 2015-06-11, 1 file changed, -1/+1)

  5369 arc flags should be an enum
  5370 consistent arc_buf_hdr_t naming scheme
  Reviewed by: Matthew Ahrens <[email protected]>
  Reviewed by: Alex Reece <[email protected]>
  Reviewed by: Sebastien Roy <[email protected]>
  Reviewed by: Richard Elling <[email protected]>
  Approved by: Richard Lowe <[email protected]>

  Porting notes: ZoL has moved some ARC definitions into arc_impl.h.

  Signed-off-by: Brian Behlendorf <[email protected]>
  Ported by: Tim Chase <[email protected]>