Enable commitcheck.sh to test if a commit message is
in the expected format for a coverity defect fix.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Giuseppe Di Natale <[email protected]>
Closes #6777
Use mfu_size and mru_size pulled from the arcstats
kstat file to calculate the mfu and mru percentages
for arc size breakdown.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Richard Elling <[email protected]>
Reviewed-by: AndCycle <[email protected]>
Signed-off-by: Giuseppe Di Natale <[email protected]>
Closes #5526
Closes #6770
Fix new flake8 errors related to bare excepts and ambiguous
variable names due to a STYLE builder update.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Giuseppe Di Natale <[email protected]>
Closes #6776
The ZED is expected to automatically kick in a hot spare device
when there's one available in the pool and a sufficient number of
read errors have been encountered. Use zinject to simulate the
failure condition and verify the hot spare is used.
auto_spare_001_pos.ksh: read IO errors, the vdev is FAULTED
auto_spare_002_pos.ksh: read CHECKSUM errors, the vdev is DEGRADED
Reviewed by: Richard Elling <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: David Quigley <[email protected]>
Closes #6280
Currently the 480GB models of this disk do not use ashift=12 by
default. SSDSC2BW48 is also optimized for 4k blocks.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: adisbladis <[email protected]>
Closes #6774
Provide details about the commit message format for submitted Coverity
defect fixes.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Giuseppe Di Natale <[email protected]>
Closes #6771
History commands and events were being suppressed for the
'zpool create' command since the history object did not
yet exist. Create the object earlier so this history
doesn't get lost.
Split the pool_destroy event into pool_destroy and
pool_export so they may be distinguished.
Updated events_001_pos and events_002_pos test cases. They
now check for the expected history events and were reworked
to be more reliable.
Reviewed-by: Nathaniel Clark <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6712
Closes #6486
Support integration with new QAT products: Intel(R) C62x Chipset,
or Atom(R) C3000 Processor Product Family SoC:
1. Detect new file name in auto-conf.
2. Change MAX_INSTANCES to 48.
3. Change "num_inst" to U16 to clean a build warning.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Weigang Li <[email protected]>
Closes #6767
Add get functions to match existing ones.
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: John Ramsden <[email protected]>
Closes #6308
The only place vn_rename and vn_remove are used is when writing
out an updated pool configuration file. By truncating the file
instead of renaming and removing it we can avoid having to implement
these interfaces entirely. Functionally an empty cache file is
treated the same as a missing cache file. This is particularly
advantageous because the Linux kernel has never provided a way
to reliably implement vn_rename and vn_remove.
The cachefile_004_pos.ksh test case was updated to understand
that an empty cache file is the same as a missing one.
The zfs-import-* systemd service files were updated to use
ConditionFileNotEmpty in place of ConditionPathExists. This
means that after exporting all pools and rebooting, new pools
will not be scanned for on the next boot. This small change
should not impact normal usage since pools are not exported
as part of a normal shutdown.
Documentation was updated accordingly.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Arkadiusz Bubała <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes zfsonlinux/spl#648
Closes #6753
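
A minimal userspace sketch of the truncate-in-place approach described above; the helper name and path are illustrative and this is not the in-kernel spa_config code:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Illustrative only: rewrite the cache file in place with O_TRUNC instead
 * of writing a temporary file and renaming it over the old one, so no
 * vn_rename()/vn_remove() equivalents are needed.  An empty (zero-length)
 * file is then treated the same as a missing one.
 */
static int
write_cachefile(const char *path, const void *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd == -1)
		return (-1);

	ssize_t rc = write(fd, buf, len);
	(void) fsync(fd);
	(void) close(fd);

	return (rc == (ssize_t)len ? 0 : -1);
}

int
main(void)
{
	const char *cfg = "example pool config\n";

	return (write_cachefile("/tmp/zpool.cache.example", cfg,
	    strlen(cfg)) == 0 ? 0 : 1);
}
```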
This small patch fixes an issue where dmu_free_long_object_raw()
calls dnode_hold() after freeing the dnode a line above.
Reviewed-by: Jorgen Lundman <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Closes #6766
Update the codecov.yml included in the repository to behave as
originally intended. This can be refined as needed.
* Always post coverage results to the GitHub PR after two builds
have been uploaded. This is the normal case since there will
be a build uploaded for both kernel and user coverage results.
* Adjust red -> yellow -> green coloring in the web interface.
Due to the number of unlikely error conditions which are hard
to force consider 90% coverage an excellent level of coverage.
* Allow a 1% variance in coverage between test runs. This is
approximately 10x larger than the typical variance observed
which leaves us a reasonable margin to prevent false positives.
* Always post a new smaller comment to PRs which does not include
a file list. Old coverage reports are removed.
Reviewed by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6765
CID 161388: Resource Leak (RESOURCE_LEAK)
Jump to errout so that the file descriptor gets closed before returning
from the function.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tobin C. Harding <[email protected]>
Closes #6755
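
The shape of the fix, shown on a made-up function rather than the actual libzfs code: every failure path jumps to errout so the descriptor is always closed.

```c
#include <fcntl.h>
#include <unistd.h>

/* Illustrative function only; the real CID 161388 fix is in libzfs. */
static int
read_header_example(const char *path, void *buf, size_t len)
{
	int error = -1;
	int fd = open(path, O_RDONLY);

	if (fd == -1)
		return (-1);

	if (lseek(fd, 0, SEEK_SET) == -1)
		goto errout;	/* returning here directly would leak fd */

	if (read(fd, buf, len) != (ssize_t)len)
		goto errout;

	error = 0;
errout:
	(void) close(fd);
	return (error);
}

int
main(void)
{
	char buf[16];

	return (read_header_example("/etc/hostname", buf, 1) == 0 ? 0 : 1);
}
```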
CID 147480: Logically dead code (DEADCODE)
Remove the non-null check and the subsequent function call. Add an ASSERT
to future-proof the code.
The usage label is only jumped to before `zhp` is initialized.
CID 147584: Out-of-bounds access (OVERRUN)
Subtract the length of the current string from the buffer length for the
`size` argument to `snprintf`.
The starting address for the write is the start of the buffer plus the
current string length. We need to subtract this string length from the
buffer size, or else we risk a buffer overflow.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tobin C. Harding <[email protected]>
Closes #6745
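
A self-contained illustration of the OVERRUN fix (the buffer and strings are made up): when appending at buf + len, the size passed to snprintf must be the remaining space, not the full buffer size.

```c
#include <stdio.h>
#include <string.h>

int
main(void)
{
	char buf[32];
	size_t len;

	(void) snprintf(buf, sizeof (buf), "state: ");
	len = strlen(buf);

	/*
	 * Append at buf + len.  The size argument must be the remaining
	 * space (sizeof (buf) - len); passing sizeof (buf) here is the
	 * OVERRUN defect, since snprintf could then write past the end
	 * of the buffer.
	 */
	(void) snprintf(buf + len, sizeof (buf) - len, "%s", "DEGRADED");

	(void) puts(buf);
	return (0);
}
```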
* config/deb.am: Enable building DKMS packages for Debian
* rpm/generic/zfs-dkms.spec.in: Adjust spec to be Debian-compatible
* Condition kernel-devel Req to RPM distros
* Adjust the DKMS Req to have a minimum of a version only
* Ensure that --rpm_safe_upgrade isn't used on non-RPM distros
* config/deb.am: Drop CONFIG_KERNEL and CONFIG_USER guards
* Makefile.am: Add pkg-dkms target
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Neal Gompa <[email protected]>
Closes #6044
Closes #6731
Currently the function documentation states that two strings are
allocated; this is outdated. Only one char ** parameter is passed
into the function now, so clearly only a pointer to a single string
is returned and needs to be freed.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tobin C. Harding <[email protected]>
Closes #6754
The default 128M vdev size used by zloop.sh isn't always large
enough and can result in ENOSPC failures which suspend the pool.
Increase the default size to 512M and provide a -s option which
can be used to specify an alternate size.
This does increase the free space requirements for running zloop.sh.
However, since the vdevs are sparse, 4x the space is not actually required.
Reviewed-by: Don Brady <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6758
This PR includes fixes for bugs and documentation issues found
after the encryption patch was merged and general code improvements
for long-term maintainability.
Reviewed-by: Jorgen Lundman <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Issue #6526
Closes #6639
Closes #6703
Closes #6706
Closes #6714
Closes #6595
This 2 line patch fixes a possible integer overflow reported by grsec.
Signed-off-by: Tom Caputi <[email protected]>
This patch resolves an issue where raw sends would fail to send
encryption parameters if the wrapping key was unloaded and reloaded
before the data was sent and the dataset was not an encryption root.
The code attempted to lookup the values from the wrapping key which
was not being initialized upon reload. This change forces the code to
lookup the correct value from the encryption root's DSL Crypto Key.
Unfortunately, this issue led to the on-disk DSL Crypto Key for some
non-encryption root datasets being left with zeroed out encryption
parameters. However, this should not present a problem since these
values are never looked at and are overwritten upon changing keys.
This patch also fixes an issue where raw, resumable sends were not
being cleaned up appropriately if an invalid DSL Crypto Key was
received.
Signed-off-by: Tom Caputi <[email protected]>
This patch resolves an issue where spa_keystore_change_key_sync_impl()
incorrectly recursed into clone DSL Directories while recursively
rewrapping encryption keys. Clones share keys with their origins, so
this logic was incorrect.
Signed-off-by: Tom Caputi <[email protected]>
Several issues were uncovered by running stress tests with zfs
encryption and raw sends in particular. The issues and their
associated fixes are as follows:
* arc_read_done() has the ability to chain several requests for
the same block of data via the arc_callback_t struct. In these
cases, the ARC would only use the first request's dsobj from
the bookmark to decrypt the data. This is problematic because
the first request might be a prefetch zio which is able to
handle the key not being loaded, while the second might use a
different key that it is sure will work. The fix here is to
pass the dsobj with each individual arc_callback_t so that each
request can attempt to decrypt the data separately.
* DRR_FREE and DRR_FREEOBJECTS records in a send file were not
having their transactions properly tagged as raw during raw
sends, which caused a panic when the dbuf code attempted to
decrypt these blocks.
* traverse_prefetch_metadata() did not properly set
ZIO_FLAG_SPECULATIVE when issuing prefetch IOs.
* Added a few asserts and code cleanups to ensure these issues
are more detectable in the future.
Signed-off-by: Tom Caputi <[email protected]>
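
A simplified structural sketch of the first fix above; the struct and field names below are invented for illustration and are not the actual arc_callback_t definition.

```c
#include <stddef.h>
#include <stdint.h>

/* Invented names: each chained read request carries its own dataset obj. */
typedef struct arc_callback_example {
	void				*acb_private;
	uint64_t			acb_dsobj;	/* per-callback dsobj */
	struct arc_callback_example	*acb_next;	/* chained requests */
} arc_callback_example_t;

/* Walk the chain; every request can attempt decryption with its own key. */
static void
read_done_example(arc_callback_example_t *head)
{
	for (arc_callback_example_t *acb = head; acb != NULL;
	    acb = acb->acb_next) {
		/* attempt_decrypt(acb->acb_dsobj, ...) would go here */
		(void) acb->acb_dsobj;
	}
}

int
main(void)
{
	arc_callback_example_t second = { NULL, 54, NULL };
	arc_callback_example_t first = { NULL, 21, &second };

	read_done_example(&first);
	return (0);
}
```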
* PBKDF2 implementation changed to OpenSSL implementation.
* HKDF implementation moved to its own file and tests
added to ensure correctness.
* Removed libzfs's now unnecessary dependency on libzpool
and libicp.
* Ztest can now create and test encrypted datasets. This is
currently disabled until issue #6526 is resolved, but
otherwise functions as advertised.
* Several small bug fixes discovered after enabling ztest
to run on encrypted datasets.
* Fixed coverity defects added by the encryption patch.
* Updated man pages for encrypted send / receive behavior.
* Fixed a bug where encrypted datasets could receive
DRR_WRITE_EMBEDDED records.
* Minor code cleanups / consolidation.
Signed-off-by: Tom Caputi <[email protected]>
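
A hedged userspace example of the PBKDF2-via-OpenSSL change in the first bullet above, using PKCS5_PBKDF2_HMAC_SHA1; the salt, iteration count, and key length are illustrative, not the libzfs defaults. Build with -lcrypto.

```c
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	const char *passphrase = "correct horse battery staple";
	unsigned char salt[8] = { 0 };	/* illustrative salt */
	unsigned char key[32];		/* illustrative 256-bit wrapping key */

	/* The iteration count here is illustrative, not the libzfs default. */
	if (PKCS5_PBKDF2_HMAC_SHA1(passphrase, (int)strlen(passphrase),
	    salt, sizeof (salt), 100000, sizeof (key), key) != 1) {
		fprintf(stderr, "PBKDF2 failed\n");
		return (1);
	}

	for (size_t i = 0; i < sizeof (key); i++)
		printf("%02x", key[i]);
	printf("\n");
	return (0);
}
```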
This patch resolves a minor issue where an ASSERT in
metaslab_passivate() that only applies to non weight-based
metaslabs was erroneously applied to all metaslabs.
Signed-off-by: Tom Caputi <[email protected]>
The parameter dsl_dataset_t *os in the function prototype should be
renamed to dsl_dataset_t *ds.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Damian Wojsław <[email protected]>
Closes #6756
Closes #6273
The current code base almost compiles on SPARC, but a few fixes are
required for the code to compile (and work efficiently). Code in this
PR comes from the OpenZFS project and was initially dropped when porting
the crypto framework.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Pengcheng Xu <[email protected]>
Closes #6733
Closes #6738
Closes #6750
CPPASCOMPILE and LTCPPASCOMPILE all include DEFAULT_INCLUDES,
hence it's unnecessary to add the includes again.
Signed-off-by: Pengcheng Xu <[email protected]>
It's important to respect the user's CFLAGS as mismatched -mcpu
will directly result in the assembler not able to produce correct
code. Fixes #6733.
Signed-off-by: Pengcheng Xu <[email protected]>
Normally a SPARC processor runs in big endian mode. Save the extra labor
needed for little endian machines when the target is a big endian one
(sparc).
Signed-off-by: Pengcheng Xu <[email protected]>
Passing arguments explicitly into SHA1Transform() increases the number of
registers available to the compiler, hence leaving more local and out
registers available. The missing sha1_consts[] symbol, which prevented
compiling on SPARC, is added back, speeding up the use of these constants.
This should fix #6738.
Signed-off-by: Pengcheng Xu <[email protected]>
Automatic dependency resolution is unreliable on many systems.
Follow suit with existing code, and explicitly include icp
in module dependencies.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Antonio Russo <[email protected]>
Closes #6751
* Correct ZFS snapshot listing
* Disable "lvm is not available" message on quiet boot
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alar Aun <[email protected]>
Closes #6700
Closes #6747
The chattr cleanup step may fail to delete the user if there is still
an active process running as that user. Retry the userdel when this
occurs to eliminate spurious false positives.
ERROR: userdel quser1 exited 8
userdel: user quser1 is currently used by process 26814
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6749
CID 147474: Logically dead code (DEADCODE)
Remove ternary operator and return `error` directly.
Currently the return value is derived from a ternary operator whose
conditional is always true. The ternary operator is therefore
redundant, i.e. dead code.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tobin C. Harding <[email protected]>
Closes #6723
When sending an incremental stream based on a snapshot, the receiving
side must have the same base snapshot. Thus we do not need to send
FREEOBJECTS records for any objects past the maximum one which exists
locally.
This allows us to send incremental streams (again) to older ZFS
implementations (e.g. ZoL < 0.7) which actually try to free all objects
in a FREEOBJECTS record, instead of bailing out early.
Reviewed by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Fabian Grünbichler <[email protected]>
Closes #5699
Closes #6507
Closes #6616
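
A hedged sketch of the rule above, not the actual dmu_send code: FREEOBJECTS ranges can be clamped to the sender's highest-numbered object, since anything beyond it cannot exist on a receiver that shares the same base snapshot.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative helper; names and placement are assumptions. */
static uint64_t
clamp_freeobjects(uint64_t firstobj, uint64_t numobjs, uint64_t maxobj)
{
	if (firstobj > maxobj)
		return (0);			/* nothing worth sending */
	if (firstobj + numobjs > maxobj + 1)
		numobjs = maxobj + 1 - firstobj;
	return (numobjs);
}

int
main(void)
{
	/* Range 100..149 clamped to 100..120 when object 120 is the last. */
	printf("%llu objects to free\n",
	    (unsigned long long)clamp_freeobjects(100, 50, 120));
	return (0);
}
```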
All objects after the last written or freed object are not supposed to
exist after receiving the stream. Free them accordingly, as if a
freeobjects record for them had been included in the stream.
Reviewed by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Fabian Grünbichler <[email protected]>
Closes #5699
Closes #6507
Closes #6616
Because resuming from a token requires a "guid" -> "snapshot" mapping,
we have to walk the whole dataset hierarchy to find the right snapshot
to send; when both source and destination exist, for an incremental
resumable stream, libzfs gets confused and picks up the wrong snapshot
to send from: this results in attempting to send
"destination@snap1 -> source@snap2"
instead of
"source@snap1 -> source@snap2"
which fails with an "Invalid cross-device link" error (EXDEV).
Fix this by adjusting the logic behind dataset traversal in
zfs_iter_children() to pick the right snapshot to send from.
Additionally update dry-run 'zfs send -t' to print its output to
stderr: this is consistent with other dry-run commands.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #6618
Closes #6619
Closes #6623
With the addition of the ABD changes consumption of the virtual
address space has been greatly reduced. This exposed an issue on
CONFIG_HIGHMEM systems where free memory was being calculated
incorrectly. Functionally this didn't cause any major problems
prior to ABD because a lack of available virtual address space
was used as an indicator of low memory.
This patch makes the following changes to address the issue and
in the process realigns the code further with OpenZFS. There
are no substantive changes in behavior for 64-bit systems.
* Added CONFIG_HIGHMEM case to the arc_all_memory() and
arc_free_memory() functions to only consider low memory pages
on CONFIG_HIGHMEM systems.
* The arc_free_memory() function was updated to return bytes
instead of pages to be consistent with the other helper
functions. In user space we make up some reasonable values
since currently only testing is performed in this context.
* Adds three new values to the arcstats kstat to provide visibility
in to the ARC's assessment of the memory situation:
memory_all_bytes, memory_free_bytes, and memory_available_bytes.
* Added kmem_reap() call to arc_available_memory() for 32-bit
builds to realign code with OpenZFS.
* Reduced the size of the test file in async_destroy_001_pos.ksh to
speed up the test case. Multiple txgs are still required.
* Move vdevs used by zpool_clear_001_pos and zpool_upgrade_002_pos
to TEST_BASE_DIR location to speed up test cases.
Reviewed-by: David Quigley <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5352
Closes #6734
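
A rough userspace sketch of the CONFIG_HIGHMEM point above; the counters mirror the kernel's totalram_pages/totalhigh_pages but are stubbed with made-up values so the example builds, and CONFIG_HIGHMEM_EXAMPLE stands in for CONFIG_HIGHMEM.

```c
#include <stdint.h>
#include <stdio.h>

#define	EXAMPLE_PAGE_SIZE	4096ULL
#define	CONFIG_HIGHMEM_EXAMPLE	1

static uint64_t totalram_pages = 262144;	/* 1 GiB of pages, made up */
static uint64_t totalhigh_pages = 65536;	/* 256 MiB highmem, made up */

static uint64_t
arc_all_memory_example(void)
{
#ifdef CONFIG_HIGHMEM_EXAMPLE
	/* Only low memory is usable by the ARC on highmem systems. */
	return ((totalram_pages - totalhigh_pages) * EXAMPLE_PAGE_SIZE);
#else
	return (totalram_pages * EXAMPLE_PAGE_SIZE);
#endif
}

int
main(void)
{
	printf("%llu bytes usable\n",
	    (unsigned long long)arc_all_memory_example());
	return (0);
}
```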
On Void Linux (x86_64 musl) libgcc_s.so is located in "/usr/lib"
so it is not found by dracut and it produces an error.
Add a simple additional path check for "/usr/lib/libgcc_s.so*"
and install it in the initramfs.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: privb0x23 <[email protected]>
Closes #6715
When decrementing the struct_size and scatter_chunk_waste kstats
the value needs to be cast to an int on 32-bit systems.
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6721
Make two instances of the same change. Change bitwise AND (&) to logical
AND (&&).
Currently the code uses a bitwise AND between two boolean values.
In the first instance, the first operand is a flag that has been bitwise
ANDed with a bit mask to get a boolean value indicating whether a file
has group write permissions set. The second operand is a struct member
that is intended as a boolean flag, not a bit mask.
In the second instance the operands are the same except that world write
permissions are checked instead of group write (S_IWOTH vs S_IWGRP).
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Chris Dunlop <[email protected]>
Signed-off-by: Tobin C. Harding <[email protected]>
Closes #6684
Closes #6722
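
A self-contained demonstration of the difference (the flag name is made up; S_IWGRP is the real mode bit): the bitwise form can evaluate to zero even when both conditions hold.

```c
#include <stdio.h>
#include <sys/stat.h>

int
main(void)
{
	mode_t mode = 0664;	/* group-writable file */
	int check_perms = 1;	/* boolean flag (made-up name) */

	/*
	 * (mode & S_IWGRP) is 0020 here; 0020 & 1 == 0, so the bitwise
	 * test is false even though both conditions are true.
	 */
	if ((mode & S_IWGRP) & check_perms)
		puts("bitwise: group write + flag");	/* not printed */

	/* Logical AND treats each operand as a boolean. */
	if ((mode & S_IWGRP) && check_perms)
		puts("logical: group write + flag");	/* printed */

	return (0);
}
```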
Currently the `if` statement includes an assignment (from a function
return value) and an equality check. The parentheses are in the wrong
place, so the code clobbers the function return value.
`if (foo != 0)`
can be more succinctly expressed as
`if (foo)`
Remove the equality check and add parentheses to correct the statement.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Chris Dunlop <[email protected]>
Signed-off-by: Tobin C. Harding <[email protected]>
Closes #6685
Closes #6719
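
A short standalone demonstration of the precedence bug being described, using made-up names:

```c
#include <stdio.h>

static int
do_work(void)
{
	return (5);	/* stand-in for a nonzero error code */
}

int
main(void)
{
	int err;

	/* Buggy: != binds tighter, so err is clobbered with (5 != 0) == 1. */
	if ((err = do_work() != 0))
		printf("buggy: err = %d\n", err);	/* prints 1 */

	/* Fixed: assign first, then test (or simply test the assignment). */
	if ((err = do_work()) != 0)
		printf("fixed: err = %d\n", err);	/* prints 5 */

	return (0);
}
```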
The vdev_copy_uberblocks() function should use abd_alloc_linear() to
allocate ub_abd, because abd_to_buf(ub_abd) is used later.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Isaac Huang <[email protected]>
Closes #6718
Closes #6713
The avl_update_* functions are never used by ZFS and are therefore
being removed. They're barely even used in Illumos. Additionally,
simplify avl_add() by using a VERIFY which produces exactly the same
behavior under Linux.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #6716
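
Roughly the simplified form described above (a sketch that assumes the in-tree AVL and VERIFY headers, not a standalone program): a duplicate insert now trips the VERIFY instead of silently corrupting the tree.

```c
#include <sys/avl.h>	/* in-tree libavl interfaces */
#include <sys/debug.h>	/* VERIFY() */

/* Sketch of the simplified avl_add(): the lookup must fail, then insert. */
void
avl_add_example(avl_tree_t *tree, void *new_node)
{
	avl_index_t where;

	VERIFY(avl_find(tree, new_node, &where) == NULL);
	avl_insert(tree, new_node, where);
}
```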
When receiving a FREEOBJECTS record, receive_freeobjects()
incorrectly skips a freed object in some cases. Specifically, this
happens when the first object in the range to be freed doesn't exist,
but the second object does. This leaves an object allocated on disk
on the receiving side which is unallocated on the sending side, which
may cause receiving subsequent incremental streams to fail.
The bug was caused by an incorrect increment of the object index
variable when the current object being freed doesn't exist. The
increment is incorrect because advancing the object index is already
handled by the call to dmu_object_next() in the increment clause of
the for loop.
Add test case that exposes this bug.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ned Bass <[email protected]>
Closes #6694
Closes #6695
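
A self-contained analogue of the loop bug (this is not the actual receive_freeobjects() code; exists[] and next_obj() stand in for on-disk objects and dmu_object_next()):

```c
#include <stdio.h>

static int exists[8] = { 0, 1, 1, 0, 1, 0, 1, 0 };

/* Stand-in for dmu_object_next(): advance to the next allocated object. */
static int
next_obj(int obj)
{
	do {
		obj++;
	} while (obj < 8 && !exists[obj]);
	return (obj);
}

int
main(void)
{
	/*
	 * Buggy variant: when the current object doesn't exist, the body
	 * advanced obj *and* the increment clause ran, skipping object 1.
	 */
	for (int obj = 0; obj < 8; obj = next_obj(obj)) {
		if (!exists[obj]) {
			obj++;		/* the incorrect extra increment */
			continue;
		}
		printf("buggy frees %d\n", obj);	/* 2, 4, 6 */
	}

	/* Fixed variant: let the increment clause do all the advancing. */
	for (int obj = 0; obj < 8; obj = next_obj(obj)) {
		if (!exists[obj])
			continue;
		printf("fixed frees %d\n", obj);	/* 1, 2, 4, 6 */
	}
	return (0);
}
```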
It's often useful to have access to txg history for debugging
purposes. This patch changes the default from 0 to 100 TXGs
worth of history preserved.
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed by: Richard Elling <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Alek Pinchuk <[email protected]>
Closes #6691
Commit d3c2ae1 introduced a dbuf cache with a default size of the
minimum of 100M or 1/32 of the maximum ARC size. (These figures may be
adjusted using dbuf_cache_max_bytes and dbuf_cache_max_shift.) The dbuf
cache is counted as metadata for the purposes of ARC size calculations.
On a 1GB box the ARC maximum size defaults to c_max 493M, which gives a
dbuf cache default minimum size of 15.4M, while the ARC metadata minimum
defaults to 16M. I.e. the dbuf cache is a significant proportion of the
minimum metadata size. With other overheads involved this actually means
the ARC metadata doesn't get down to the minimum.
This patch dynamically scales the dbuf cache to the target ARC size
instead of statically scaling it to the maximum ARC size. (The scale is
still set by dbuf_cache_max_shift and the maximum size is still fixed by
dbuf_cache_max_bytes.) Using the target ARC size rather than the current
ARC size is done to help the ARC reach the target rather than simply
focusing on the current size.
Reviewed-by: Chunwei Chen <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Chris Dunlop <[email protected]>
Issue #6506
Closes #6561
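
A hedged arithmetic sketch of the scaling change; the variable names follow the commit message (the exact expression in dbuf.c may differ), and the numbers reproduce the 1GB-box example above (493M >> 5 is roughly 15M).

```c
#include <stdint.h>
#include <stdio.h>

#define	MIN(a, b)	((a) < (b) ? (a) : (b))

static uint64_t dbuf_cache_max_bytes = 100 * 1024 * 1024;
static int dbuf_cache_max_shift = 5;		/* 1/32 */

/* Scale against the current ARC target, not the fixed ARC maximum. */
static uint64_t
dbuf_cache_target_bytes_example(uint64_t arc_target)
{
	return (MIN(dbuf_cache_max_bytes,
	    arc_target >> dbuf_cache_max_shift));
}

int
main(void)
{
	uint64_t arc_c_max = 493ULL * 1024 * 1024;	/* 1GB box example */
	uint64_t arc_c = 128ULL * 1024 * 1024;		/* shrunken target */

	printf("static:  %llu MiB\n", (unsigned long long)
	    dbuf_cache_target_bytes_example(arc_c_max) >> 20);
	printf("dynamic: %llu MiB\n", (unsigned long long)
	    dbuf_cache_target_bytes_example(arc_c) >> 20);
	return (0);
}
```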
This change adds four new tests to the ZTS:
* zfs_diff_changes: verify the type of changes displayed (-, +, R and M)
* zfs_diff_cliargs: verify command line options and arguments
* zfs_diff_timestamp: verify 'zfs diff -t'
* zfs_diff_types: verify type of objects (files, dirs, pipes...)
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Wren Kennedy <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #6686
On systems with SCSI rather than SAS disk topology, this change enables
the vdev_id script to match against the block device path, and therefore
create a vdev alias in /dev/disk/by-vdev.
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Simon Guest <[email protected]>
Closes #6592
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Giuseppe Di Natale <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Prakash Surya <[email protected]>
Closes #6675