We are not allowed to dirty a filesystem when we are done receiving a
snapshot. In that case the SPA_FEATURE_LARGE_BLOCKS feature flag will
not be set on that filesystem, since the filesystem is not on
dp_dirty_datasets, and a subsequent encrypted raw send will fail.
Fix this by checking in dsl_dataset_snapshot_sync_impl() whether the
feature needs to be activated, and activating it if appropriate.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #13699
Closes #13782
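
A rough reproduction sketch of the scenario above; the pool, dataset
and file names are hypothetical:

    # encrypted source filesystem using large blocks
    zfs create -o encryption=on -o keyformat=passphrase \
        -o recordsize=1M tank/src
    dd if=/dev/urandom of=/tank/src/blob bs=1M count=8
    zfs snapshot tank/src@s1

    # raw receive it elsewhere; the received filesystem is not dirtied
    zfs send -w tank/src@s1 | zfs receive tank/dst

    # before the fix, snapshotting and raw sending the received
    # filesystem could fail because large_blocks was never activated
    zfs snapshot tank/dst@s2
    zfs send -w tank/dst@s2 > /dev/null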
Found with -Wunused-but-set-variable on Clang trunk
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ahelenia Ziemiańska <[email protected]>
Closes #13304
In #13709, as in #11294 before it, it turns out that 63a26454 still had
the same failure mode as when it was first landed as d1d47691, and
fails to unlock certain datasets that formerly worked.
Rather than reverting it again, let's add handling to just throw out
the accounting metadata that failed to unlock when that happens, as
well as a test with a pre-broken pool image to ensure that we never get
bitten by this again.
Fixes: #13709
Signed-off-by: Rich Ercolani <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Special vdevs should not be replaced by a hot spare.
Log vdevs already behave this way; extend the same
functionality to special vdevs.
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Richard Yao <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ameer Hamza <[email protected]>
Closes #14129
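
For illustration, a hedged sketch of a pool layout this change
concerns; the device names are hypothetical:

    # pool with a special allocation vdev and a hot spare
    zpool create tank mirror sda sdb \
        special mirror sdc sdd \
        spare sde
    # with this change a failed special child is not replaced by the
    # hot spare, matching the existing behavior for log vdevs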
Commit 68ddc06b611854560fefa377437eb3c9480e084b introduced support
for receiving unencrypted datasets as children of encrypted ones but
unfortunately got the logic upside down. This resulted in failing to
deny receives of incremental sends into encrypted datasets without
their keys loaded. If receiving a filesystem, the receive was done
into a newly created unencrypted child dataset of the target. In
case of volumes the receive made the target volume undeletable since
a dataset was created below it, which we obviously can't handle.
Incremental streams with embedded blocks are affected as well.
We fix the broken logic to properly deny receives in such cases.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Attila Fülöp <[email protected]>
Closes #13598
Closes #14055
Closes #14119
Currently, additional/extra copies are created for metadata on top of
the redundancy already provided by the pool (mirror/raidz/draid).
Because of this, twice as much space is used per inode, which
decreases the total number of inodes that can be created in the
filesystem. By setting redundant_metadata to none, no additional
copies of metadata are created, which reduces the space consumed by
the additional metadata copies and increases the total number of
inodes that can be created in the filesystem. Additionally, this can
improve file create performance due to the reduced amount of metadata
which needs to be written.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Dipak Ghosh <[email protected]>
Signed-off-by: Akash B <[email protected]>
Closes #13680
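
A minimal usage sketch of the property value discussed above; the
pool and dataset names are hypothetical:

    # keep only the pool-level redundancy for this filesystem's metadata
    zfs set redundant_metadata=none tank/fs

    # or set it at creation time
    zfs create -o redundant_metadata=none tank/manyfiles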
For encrypted raw receive, objset creation is delayed until a call to
dmu_recv_stream(). ZFS_PROP_SHARESMB property requires objset to be
populated when calling zpl_earlier_version(). To correctly handle the
ZFS_PROP_SHARESMB property for encrypted raw receive, this change
delays setting the property.
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ameer Hamza <[email protected]>
Closes #13878
When iterating through children physical ashifts for a vdev, prefer
ones above the maximum logical ashift that we can actually use,
but within the administrator-defined maximum.
When selecting the top-level vdev ashift, do not set it to the defined
maximum in case the physical ashift is even higher; just ignore it.
Using the maximum does not prevent misaligned writes, but it reduces
space efficiency. Since ZFS tries to write data sequentially and
aggregates the writes, in many cases large misaligned writes may not
be as bad as the space penalty otherwise.
Allow internal physical ashifts for vdevs higher than SHIFT_MAX.
Maybe one day the allocator or aggregation could benefit from that.
Reduce the zfs_vdev_max_auto_ashift default from 16 (64KB) to 14 (16KB),
so that ZFS may still use bigger ashifts up to SHIFT_MAX (64KB),
but only if it really has to or is explicitly told to, not as an
"optimization".
There are some read-intensive NVMe SSDs that report a Preferred Write
Alignment of 64KB, and attempting to build a RAIDZ2 out of them leads
to a space inefficiency that can't be justified. Instead these changes
make ZFS fall back to a logical ashift of 12 (4KB) by default and
only warn the user that it may be suboptimal for performance.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored by: iXsystems, Inc.
Closes #13798
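
As a hedged illustration on Linux, the tunable mentioned above can be
inspected and overridden through the module parameter; the device
names below are hypothetical:

    # show the cap on automatically selected ashift (now 14 by default)
    cat /sys/module/zfs/parameters/zfs_vdev_max_auto_ashift

    # allow larger automatic ashifts again, if that is really wanted
    echo 16 > /sys/module/zfs/parameters/zfs_vdev_max_auto_ashift

    # or pin a specific ashift per pool at creation time
    zpool create -o ashift=12 tank nvme0n1 nvme1n1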
Add physical device size/capacity only for physical devices in
'zpool list -v' instead of displaying "-" in the SIZE column.
This would make it easier to see the individual device capacity and
to determine which spares are large enough to replace which devices.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Dipak Ghosh <[email protected]>
Signed-off-by: Akash B <[email protected]>
Closes #12561
Closes #13106
`zpool_expand_001_pos` was often failing due to not seeing autoexpand
commands in the `zpool history`. During testing, I found this to be
unreliable (sometimes the "online" wouldn't appear in `zpool history`)
and unnecessary, as we could simply check that the pool increased in
size.
This commit revamps the test to check for the expanded pool size
and corresponding new free space.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tony Hutter <[email protected]>
Closes #13743
Use large numbers for datasets with numeric names to avoid name and
id collisions.
Signed-off-by: Paul Zuchowski <[email protected]>
zdb -d <pool>/<objset ID> does not work when other command line
arguments are included, e.g.
zdb -U <cachefile> -d <pool>/<objset ID>
This change fixes the command line parsing to handle this situation.
Also fix an issue where zdb -r <dataset> <file> does not handle the
root <dataset> of the pool. Introduce a -N option to force
<objset ID> to be interpreted as a numeric objset ID.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Rich Ercolani <[email protected]>
Reviewed-by: Tony Nguyen <[email protected]>
Signed-off-by: Paul Zuchowski <[email protected]>
Closes #12845
Closes #12944
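
A hedged example of the invocations this change is about; the
cachefile path, pool name and objset ID are hypothetical:

    # -d combined with other options such as -U now parses correctly
    zdb -U /tmp/alt.cache -d tank/54

    # force "54" to be treated as a numeric objset ID, not a dataset name
    zdb -N tank/54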
Not all Linux distribution kernels enable io_uring support by
default. Update the run time check to verify that the booted
kernel was built with CONFIG_IO_URING=y.
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Tony Nguyen <[email protected]>
Co-authored-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13648
Closes #13685
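
A hedged sketch of how such a check can be performed on a typical
distribution kernel; config file locations vary:

    # distro kernels usually ship their config in /boot or /proc
    grep CONFIG_IO_URING= /boot/config-$(uname -r) 2>/dev/null ||
        zgrep CONFIG_IO_URING= /proc/config.gz
    # expect CONFIG_IO_URING=y before running the io_uring test cases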
When scrubbing a raidz/draid pool, which contains a replacing or
sparing mirror with multiple online children, only one child will
be read. This is not normally a serious concern because the DTL
records are used to determine where a good copy of the data is.
As long as the data can be read from one child the mirror vdev
will use it to repair gaps in any of its children. Furthermore,
even if the data which was read is corrupt the raidz code will
detect this and issue its own repair I/O to correct the damage
in the mirror vdev.
However, in the scenario where the DTL is wrong due to silent
data corruption (say due to overwriting one child) and the scrub
happens to read from a child with good data, then the other damaged
mirror child will not be detected nor repaired.
While this is possible for both raidz and draid vdevs, it's most
pronounced when using draid. This is because by default the zed
will sequentially rebuild a draid pool to a distributed spare,
and the distributed spare half of the mirror is always preferred
since it delivers better performance. This means the damaged
half of the mirror will go undetected even after scrubbing.
For system administrators this behavior is non-intuitive and in
a worst case scenario could result in the only good copy of the
data being unknowingly detached from the mirror.
This change resolves the issue by reading all replacing/sparing
mirror children when scrubbing. When the BP isn't available for
verification, then compare the data buffers from each child. They
must all be identical, if not there's silent damage and an error
is returned to prompt the top-level vdev to issue a repair I/O to
rewrite the data on all of the mirror children. Since we can't
tell which child was wrong a checksum error is logged against the
replacing or sparing mirror vdev.
Reviewed-by: Mark Maybee <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13555
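
A rough sketch of the silent-corruption scenario described above; the
pool layout and device names are hypothetical, and the dd step
deliberately damages one child of the inner mirror:

    zpool create tank raidz2 sda sdb sdc sdd spare sde
    zpool replace tank sdd sde     # forms a replacing/sparing mirror
    # silently damage the original child behind ZFS's back
    dd if=/dev/zero of=/dev/sdd bs=1M count=64 seek=16 conv=notrunc
    # the scrub now reads both children, notices the mismatch, and logs
    # a checksum error against the replacing/sparing mirror vdev
    zpool scrub tank
    zpool status -v tank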
It turns out, no, in fact, ZERO_RANGE and PUNCH_HOLE do have differing
semantics in some ways - in particular, PUNCH_HOLE requires KEEP_SIZE,
and ZERO_RANGE does not.
Also added a zero-range test to catch this, corrected a flaw that made
the punch-hole test succeed vacuously, and fixed a typo in file_write.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Rich Ercolani <[email protected]>
Closes #13329
Closes #13338
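
For context, a hedged illustration of the difference using util-linux
fallocate(1) on a hypothetical file:

    # FALLOC_FL_PUNCH_HOLE must be combined with FALLOC_FL_KEEP_SIZE;
    # fallocate(1) implies --keep-size when punching holes
    fallocate --punch-hole --offset 0 --length 1MiB testfile

    # FALLOC_FL_ZERO_RANGE has no such requirement and may grow the file
    fallocate --zero-range --offset 0 --length 1MiB testfile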
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Rich Ercolani <[email protected]>
Signed-off-by: Ahelenia Ziemiańska <[email protected]>
Upstream-commit: 344bbc82e7054f61d5e7b3610b119820285fd2cb
Closes #12829
When unlinking multiple files from a pool at 100% capacity, it was
possible for ENOSPC to be returned after the first unlink. e.g.
rm -f /mnt/fs/test1.0.0 /mnt/fs/test1.1.0 /mnt/fs/test1.2.0
rm: cannot remove '/mnt/fs/test1.1.0': No space left on device
rm: cannot remove '/mnt/fs/test1.2.0': No space left on device
After waiting for the pending deferred frees from the first unlink to
be processed the remaining files can then be unlinked. This is caused
by the quota limit in dsl_dir_tempreserve_impl() being temporarily
decreased to the allocatable pool capacity less any deferred free
space.
This is resolved using the existing mechanism of returning ERESTART
when over quota as long as we know enough space will shortly be
available after processing the pending deferred frees.
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13172
In the CI environment it's possible for events to be slightly
delayed resulting in 4, instead of 5, events appearing in the
log file. This isn't a problem and should be considered a
success to avoid false positive test results.
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #12625
Related to commit 90b77a036. Retry the `zpool export` if the pool
is "busy", indicating there is a process accessing the mount point.
This can happen after an import; allowing the export to be retried
avoids spurious test failures.
Reviewed-by: Tony Hutter <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13169
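
A minimal sketch of the retry pattern in the shell style of the test
suite; the pool name and retry count are hypothetical:

    for i in 1 2 3 4 5; do
        zpool export testpool && break
        sleep 1   # a process (e.g. blkid) may still hold the mount point
    done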
As explained by the disclaimer in the test case,
"This test can fail since nothing guarantees that old
MOS blocks aren't overwritten."
This behavior is expected and correct, but results in a
flaky test case which is problematic for the CI. The best
we can do to resolve this is to retry the sub-test which
failed when the MOS blocks have clearly been overwritten.
When testing, failures were rare enough that a single retry should
normally be sufficient; however, we allow up to five for good measure.
Reviewed by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13119
As previously noted in #12272 the receive-o-x_props_override.ksh test
reliably fails on FreeBSD. Since we don't expect this test to pass
move the exception from the "maybe" to "known" section. This way we
don't retry the FAILED test when it is not expected to pass.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13167
On FreeBSD pools are not allowed to be created using vdevs which are
backed by ZFS volumes. This configuration is not recommended for any
supported platform; nevertheless, the largest_pool_001_pos.ksh test
case makes use of it as a convenience. This causes the test case to
fail reliably on FreeBSD. The layout is still tolerated on Linux
so only perform this test on Linux.
Reviewed-by: Igor Kozhukhov <[email protected]>
Reviewed by: George Melikov <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13166
Raw sending from pool1/encrypted with ashift=9 to pool2/encrypted with
ashift=12 results in failure when mounting pool2/encrypted
(Input/Output error). Notably, the opposite, raw sending from a
greater ashift to a lower one, does not fail.
This happens because zio_compress_write() falsely checks only
ZIO_FLAG_RAW_COMPRESS and not ZIO_FLAG_RAW_ENCRYPT which is also set in
encrypted raw send streams. In this case it rounds up the psize and if
not equal to the zio->io_size it modifies the block by zeroing out
the extra bytes. Because this happens in a SA attr. registration object
(type=46), the decryption fails upon mounting the filesystem, and zpool
status falsely reports an error.
Fix this by checking both ZIO_FLAG_RAW_COMPRESS and ZIO_FLAG_RAW_ENCRYPT
before deciding whether to zero-pad a block.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #13067
Closes #13074
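
A rough reproduction sketch of the failing direction; the file vdevs,
pool and dataset names are hypothetical:

    truncate -s 1G /var/tmp/disk1 /var/tmp/disk2
    zpool create -o ashift=9  p1 /var/tmp/disk1
    zpool create -o ashift=12 p2 /var/tmp/disk2
    zfs create -o encryption=on -o keyformat=passphrase p1/enc
    # populate p1/enc, then:
    zfs snapshot p1/enc@s1
    zfs send -w p1/enc@s1 | zfs receive p2/enc
    zfs load-key p2/enc
    zfs mount p2/enc    # previously failed with an input/output error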
Related to commit 90b77a036. Retry the `zpool export` if the pool is
"busy", indicating there is a process accessing the mount point. This
can happen after an import; allowing the export to be retried avoids
spurious test failures.
Reviewed by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13092
The dRAID section of the zpool_expand_001_pos test would reliably fail
because the calculated expansion size assumed the dRAID top-level vdev
was created with a distributed spare. Create the vdev as expected to
resolve the test failure.
This test case flaw was accidentally caused by changing the default
number of dRAID distributed spares from one to zero while dRAID was
being developed.
Additionally, remove zpool_expand_005_pos from the list of possible
faulty tests. It appears to be passing consistently in my testing.
Reviewed by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13091
Changing volmode may need to remove minors, which could be open, so
call udev_wait() before we "zfs set volmode=<value>". This ensures
no udev process has the zvol open (i.e. blkid) and the kernel
zvol_remove_minor_impl() function won't skip removing the in use
device.
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13075
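
On Linux the udev_wait helper amounts, roughly, to settling the udev
event queue; a hedged sketch of the ordering the test now relies on,
with a hypothetical volume name:

    udevadm settle                  # let blkid & friends release the zvol
    zfs set volmode=none tank/vol   # minor removal no longer races udev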
dmu_recv_begin_check() unconditionally sets the DS_HOLD_FLAG_DECRYPT
flag before calling dsl_dataset_hold_flags(). If the key on the
receiving side isn't loaded or the send stream contains embedded
blocks, the receive check fails for a stream which is perfectly
valid and could be received without any problem. This seems like
a remnant of the initial design, where unencrypted datasets below
encrypted ones weren't allowed.
Add a condition to set `DS_HOLD_FLAG_DECRYPT` only for encrypted
datasets, modify an existing test to detect this regression and add
a test for raw replication streams.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Amanakis <[email protected]>
Co-authored-by: George Amanakis <[email protected]>
Signed-off-by: Attila Fülöp <[email protected]>
Closes #13033
Closes #13076
Linux 5.16 moved XSTATE_XSAVE and XSTATE_XRESTORE out of our reach,
so add our own XSAVE{,OPT,S} code and use it for Linux 5.16.
Please note that this differs from previous behavior in that it
won't handle exceptions created by XSAVE and XRSTOR. This is sensible
for three reasons.
- Exceptions during XSAVE and XRSTOR can only occur if the feature
is not supported or enabled, or the memory operand isn't aligned
on a 64-byte boundary. If this happens, something else went
terribly wrong, and it may be better to stop execution.
- Previously we just printed a warning and didn't handle the fault,
which is arguable for the above reason.
- All other *SAVE instructions also don't handle exceptions, so this
at least aligns behavior.
Finally add a test to catch such a regression in the future.
Reviewed-by: Tony Hutter <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Attila Fülöp <[email protected]>
Closes #13042
Closes #13059
Basenames that remain, in cmd/zed/zed.d/statechange-led.sh:
dev=$(basename "$(echo "$therest" | awk '{print $(NF-1)}')")
vdev=$(basename "$ZEVENT_VDEV_PATH")
I don't wanna interfere with #11988
scripts/zfs-tests.sh:
SINGLETESTFILE=$(basename "$SINGLETEST")
tests/zfs-tests/tests/functional/cli_user/zfs_list/zfs_list.kshlib:
ACTUAL=$(basename $dataset)
ACTUAL=$(basename $dataset)
tests/zfs-tests/tests/functional/cli_user/zpool_iostat/
zpool_iostat_-c_homedir.ksh:
typeset USER_SCRIPT=$(basename "$USER_SCRIPT_FULL")
tests/zfs-tests/tests/functional/cli_user/zpool_iostat/
zpool_iostat_-c_searchpath.ksh:
typeset CMD_1=$(basename "$SCRIPT_1")
typeset CMD_2=$(basename "$SCRIPT_2")
tests/zfs-tests/tests/functional/cli_user/zpool_status/
zpool_status_-c_homedir.ksh:
typeset USER_SCRIPT=$(basename "$USER_SCRIPT_FULL")
tests/zfs-tests/tests/functional/cli_user/zpool_status/
zpool_status_-c_searchpath.ksh
typeset CMD_1=$(basename "$SCRIPT_1")
typeset CMD_2=$(basename "$SCRIPT_2")
tests/zfs-tests/tests/functional/migration/migration.cfg:
export BNAME=`basename $TESTFILE`
tests/zfs-tests/tests/perf/perf.shlib:
typeset logbase="$(get_perf_output_dir)/$(basename \
tests/zfs-tests/tests/perf/perf.shlib:
typeset logbase="$(get_perf_output_dir)/$(basename \
These are potentially Of Directories, where basename is actually
useful
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Ahelenia Ziemiańska <[email protected]>
Closes #12652
The on-disk cost of creating a snapshot or bookmark is sufficiently low
that it is difficult to make it reliably fail even when the pool is
"full". In order to avoid false positives remove these two checks from
the test case.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #13060
POSIX requires the set-uid and set-gid bits to be removed when an
unprivileged user writes to a file, and ZFS does that during normal
operation.
The problem arises when the write is stored in the ZIL and replayed.
During replay we have no access to the original credentials of the
process doing the write, so zfs_write() will be performed with the
root credentials. When root is doing the write, the set-uid and
set-gid bits are not removed from the file.
To correct that, log a separate TX_SETATTR entry that removes those
bits on the first write to such a file.
Idea from: Christian Schwarz
Add a test for ZIL replay of setuid/setgid clearing.
Improve various edge cases when clearing setid bits:
- The setid bits can be readded during a single write, so make sure to
check for them on every chunk write.
- Log the TX_SETATTR record at most once per transaction group (in
case the setid bits keep coming back).
- Move zfs_log_setattr() outside of zp->z_acl_lock.
Reviewed-by: Dan McDonald <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Co-authored-by: Christian Schwarz <[email protected]>
Signed-off-by: Pawel Jakub Dawidek <[email protected]>
Closes #13027
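
A small demonstration of the POSIX behavior being preserved; the paths
are hypothetical and the append is run as an unprivileged user with
write access:

    # as root, inside a ZFS filesystem
    touch /tank/fs/f
    chmod 4666 /tank/fs/f
    ls -l /tank/fs/f    # -rwSrw-rw-, set-uid bit present

    # as an unprivileged user
    echo data >> /tank/fs/f
    ls -l /tank/fs/f    # set-uid bit cleared, also after ZIL replay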
This commit adds enumerated names to disambiguate between the
different vdevs. Previously only 'zpool status' showed enumerated
vdev names; now 'zpool list -v' and 'zpool iostat -v' also show
the enumerated vdev names.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Dipak Ghosh <[email protected]>
Signed-off-by: Akash B <[email protected]>
Closes #12510
Closes #13031
In files created/modified before 4254acb there may be corruption of
xattrs which is not reported during scrub or normal send/receive. It
manifests only as an error when raw sending/receiving. This happens
because currently only the raw receive path checks for discrepancies
between the dnode bonus length and the spill pointer flag.
If we encounter a dnode whose bonus length is greater than the
predicted one, we should report an error. To that end, modify
dnode_sync() to assert on this at the end, dump_dnode() to error out,
dsl_scan_recurse() to report errors during a scrub, and zstream to
report a warning when dumping. Also add a test to verify spill blocks
are sent correctly in a raw send.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #12720
Closes #13014
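
A hedged sketch of inspecting a raw stream with zstream, as mentioned
above; the dataset and snapshot names are hypothetical:

    zfs send -w tank/fs@snap | zstream dump | less
    # dnodes whose bonus length disagrees with the spill pointer flag
    # are now flagged with a warning in the dump output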
This is a follow up commit for e03a41a60 which aimed to resolve
this same test failure. The core "problem" here is that it takes
very little space to perform a clone/snapshot/bookmark, which
means if we want these commands to reliably fail the pool must
truly have exhausted all free space.
This commit increases the number of fill iterations to try and
consume every block which we can. This still can't guarantee
the clone/snapshot/bookmark will fail, but it significantly
improves the odds. The exception was kept since it's still
not a sure thing.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Igor Kozhukhov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #12903
Under Linux when rolling back a mounted filesystem negative dentries
may not be dropped from the cache. This can result in an ENOENT
being incorrectly returned on first access. Issuing a `df` before
the unmount results in the negative dentries being invalidated and
side steps the issue.
This is solely a workaround for the test case on Linux and not
correct behavior. The core issue of invalidating negative dentries
needs to be handled with a kernel side change. This is being
tracked as issue #6143.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #12898
Issue #6143
The following test cases may still occasionally fail and are being
added to the "maybe" list for Linux until they can be updated to be
entirely reliable.
cli_root/zfs_rename/zfs_rename_002_pos.ksh
cli_root/zpool_reopen/zpool_reopen_003_pos.ksh
refreserv/refreserv_raidz
These 6 tests consistently fail only on Fedora 31+, the failures
are related to the kernel rescanning the partition table on loopback
devices which is no longer reliable unless partprobe is used. In
order to enable the Fedora bot by default they are also being added
to the list until the tests can be updated. Any significant regression
in functionality covered by these tests will still be detected by the
FreeBSD builders.
alloc_class/alloc_class_009_pos
alloc_class/alloc_class_010_pos
cli_root/zpool_expand/zpool_expand_001_pos
cli_root/zpool_expand/zpool_expand_005_pos
rsend/rsend_007_pos
rsend/rsend_010_pos
rsend/rsend_011_pos
snapshot/rollback_003_pos
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #10489
It keeps failing, on changes which aren't related at all.
So until someone runs down why, I'd like it to stop being the
sole reason for CI failures.
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Rich Ercolani <[email protected]>
Closes #12733
Add the following test failures to the exception list for FreeBSD
to ensure we notice new unexpected failures.
pool_checkpoint/checkpoint_big_rewind
pool_checkpoint/checkpoint_indirect
And the following for Linux.
zvol/zvol_misc/zvol_misc_snapdev
Reviewed-by: George Melikov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #12621
Issue #12622
Issue #12623
Closes #12624
The zvol_misc tests, in particular zvol_misc_volmode, make use of a
common udev_wait function to wait for zvol devices in /dev to quiesce
on Linux. On other platforms this function currently only sleeps for
one second before returning. This is insufficient, and
zvol_misc_volmode has been flaky on FreeBSD as a result.
Replace udev_wait with block_device_wait, passing through the optional
device parameter where possible. Rearrange a few checks to strengthen
the verifications we are making and avoid unnecessarily sleeping. We
must keep udev_wait in a couple places to pass in Github CI workflows.
Remove zvol_misc_volmode from the maybe failing tests on FreeBSD in
zts-report.py.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Ryan Moeller <[email protected]>
Closes #12583
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Ka Ho Ng <[email protected]>
Sponsored-by: The FreeBSD Foundation
Closes #12458
The refreserv_raidz test was failing on Linux because the sync being
issued doesn't guarantee a pool sync. Switch to using the sync_pool
function and remove the ZTS exception for Linux.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #12897
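
For reference, a hedged sketch of the difference; the pool name is
hypothetical. A plain sync(1) does not guarantee the pool's txg has
been synced, while the ZTS sync_pool helper effectively relies on
`zpool sync`:

    sync                  # may return before the txg is on disk
    zpool sync testpool   # forces the pool's dirty data to be synced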
The build on musl needs linux/fs.h for SEEK_DATA and friends,
and sys/sysmacros.h for P2ROUNDUP. Add the needed headers.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Georgy Yakovlev <[email protected]>
Closes #12891
With some minor tweaks several of rsend tests can be sped up
considerably without significantly reducing test coverage.
* send-c_verify_ratio: ~120s -> ~60s
* send_realloc_*_files: ~330s -> ~65s
For the send_realloc* tests this also has the advantage of removing
(most of) the linux/freebsd conditional logic. Note that for this
test more passes, and thus more incremental send/recvs, are preferable
to a larger number of files.
Total run time of the rsend test group was reduced from roughly 20 to
11 minutes in an environment similar to what's used by the CI.
Reviewed-by: Tony Nguyen <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #12876
The rsend_007_pos test reliably fails on Linux in the cleanup
function. This is caused by an unmount error when attempting to
recursively destroy the newly received datasets. Invoking `df`
prior to the `zfs destroy` interestingly avoids the unmount error.
Why this should matter is unclear and should be investigated.
However, this minor tweak may allow us to remove the ZTS rsend
exceptions. The subsequent rsend_010_pos and rsend_011_pos
failures were a result of this initial failure. The other
"maybe" failures I was unable to reproduce and have not been
recently observed in the master branch.
Reviewed-by: Tony Nguyen <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5665
Closes #6086
Closes #6087
Closes #6446
Closes #12876
Add support for http and https to the keylocation property to
allow encryption keys to be fetched from the specified URL.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Ahelenia Ziemiańska <[email protected]>
Issue #9543
Closes #9947
Closes #11956
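
A usage sketch of the new capability; the URL and dataset names are
hypothetical:

    zfs create -o encryption=on -o keyformat=raw \
        -o keylocation=https://keys.example.com/tank-fs.key tank/fs

    # the key is fetched from the URL again when it is loaded later
    zfs load-key tank/fs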
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Ahelenia Ziemiańska <[email protected]>
Issue: #11956
Closes #11976
The alloc_class_* tests may fail on Linux with an EBUSY error if
`zfs destroy` is run before the `dd` process has had a chance to
terminate. Wait on the pid after the `kill -9` to make sure it has
exited.
When testing I didn't observe any failures for the alloc_class
tests. Remove them from the exceptions list, the CI was used to
verify the tests pass on all platforms.
Reviewed-by: John Kennedy <[email protected]>
Reviewed-by: Rich Ercolani <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #12873
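
A minimal sketch of the pattern in the shell style of the tests; the
file path and dataset are hypothetical:

    dd if=/dev/zero of=/tank/fs/bigfile bs=1M &
    typeset pid=$!
    # ... later, during cleanup:
    kill -9 $pid
    wait $pid 2>/dev/null   # ensure dd has exited before `zfs destroy`
    zfs destroy -r tank/fs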
Unfortunately, #11445 means while we fail gracefully now, we still
fail, unless people want to implement a complex workaround just to
support /dev/null.
So let's just use the cheap workaround in a test for now.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: John Kennedy <[email protected]>
Signed-off-by: Rich Ercolani <[email protected]>
Closes #12872
The zpool_reopen_[1-5] tests are failing on Fedora 35 with:
zpool_reopen_001_pos.ksh[64]: log_must[67]: log_pos[270]:
wait_for_resilver_end[98]: wait_for_action: line 71: func: is read only
Renaming 'func' -> 'funct' fixes the issue.
Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tony Hutter <[email protected]>
Closes #12871
Raw receiving a snapshot back to the originating dataset is currently
impossible because of user accounting being present in the originating
dataset.
One solution would be resetting user accounting when raw receiving on
the receiving dataset. However, to recalculate it we would have to dirty
all dnodes, which may not be preferable on big datasets.
Instead, we rely on the os_phys flag
OBJSET_FLAG_USERACCOUNTING_COMPLETE to indicate that user accounting is
incomplete when raw receiving. Thus, on the next mount of the receiving
dataset the local mac protecting user accounting is zeroed out.
The flag is then cleared when user accounting of the raw received
snapshot is calculated.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: George Amanakis <[email protected]>
Closes #12981
Closes #10523
Closes #11221
Closes #11294
Closes #12594
Issue #11300
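
A rough sketch of the round-trip this change makes possible; the
dataset names are hypothetical:

    zfs snapshot tank/enc@s1
    zfs send -w tank/enc@s1 | zfs receive backup/enc   # raw backup
    zfs snapshot backup/enc@s2
    # raw receive back into the originating dataset; user accounting is
    # marked incomplete and recalculated on the next mount instead of
    # the receive failing
    zfs send -w -i @s1 backup/enc@s2 | zfs receive -F tank/enc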