| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The type of "adjustmnt" was erroneously changed to unsigned when the compressed
ARC code was ported in d3c2ae1c0806b183a315e3d43cc8018cfdca79b5.
As a result of it being unsigned, the balanced metadata eviction logic
would evict all of the non-metadata.
Reviewed-by: Chris Severance <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed by: David Quigley <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Closes #5128
Closes #5129
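The failure mode is the classic unsigned wrap-around: an adjustment that should go negative instead becomes a huge positive value, so the eviction logic believes it has an enormous amount of metadata to shed. A minimal standalone sketch of the pitfall (variable names are illustrative, not the actual ARC code):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int64_t meta_used = 100, meta_limit = 300;

        /* Signed arithmetic: a below-limit state yields a negative
         * adjustment, i.e. "nothing needs to be evicted". */
        int64_t adj_signed = meta_used - meta_limit;

        /* The same subtraction done unsigned wraps to a huge positive
         * value, so code that evicts "adjustment" bytes tries to evict
         * essentially everything. */
        uint64_t adj_unsigned = (uint64_t)meta_used - (uint64_t)meta_limit;

        printf("signed adjustment:   %lld\n", (long long)adj_signed);
        printf("unsigned adjustment: %llu\n", (unsigned long long)adj_unsigned);
        return (0);
    }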
|
|
|
|
|
|
|
|
|
|
| |
The FALLOC_FL_PUNCH_HOLE flag was introduced in the 2.6.38
kernel. To prevent breaking the build on older systems, wrap its use
in a conditional. When FALLOC_FL_PUNCH_HOLE isn't available,
return a non-zero status and an error message.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: legend-hua <[email protected]>
Closes #5101
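The guard is an ordinary preprocessor conditional around the fallocate() call; a minimal sketch assuming a Linux/glibc build (the helper name and error handling are illustrative, not the exact code from this commit):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>      /* fallocate(); FALLOC_FL_* on Linux >= 2.6.38 */
    #include <stdio.h>

    /* Punch a hole in fd if the headers support it, else fail cleanly. */
    int
    punch_hole(int fd, off_t offset, off_t length)
    {
    #if defined(FALLOC_FL_PUNCH_HOLE)
        return (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
            offset, length));
    #else
        fprintf(stderr, "error: FALLOC_FL_PUNCH_HOLE is not supported\n");
        errno = EOPNOTSUPP;
        return (-1);
    #endif
    }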
|
|
|
|
|
|
|
| |
CID 147659, 150952 and 147645
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: luozhengzheng <[email protected]>
Closes #5103
|
|
|
|
|
|
|
|
|
| |
Enable ignore_hole_birth by default until all known hole birth bugs
have been resolved and relevant test cases added.
Reviewed-by: Boris Protopopov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #4809
Closes #5099
|
|
|
|
|
|
|
|
|
| |
This test case frequently triggers issue #4034. Disable it
until the root cause of this issue has been addressed.
Reviewed-by: Olaf Faaland <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #4034
Closes #5120
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Simplify time handling in zfs_setattr by mimicking the logic in
setattr_copy from the Linux kernel. To achieve this when the ZFS
log is being replayed, it is necessary to unconditionally set the
ctime in zfs_replay_setattr.
Also use the timespec_trunc function when assigning values to the
generic inode struct. This is currently a no-op since ZFS sets
s_time_gran to 1; however, the rules about precision might change
in the future.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Chunwei Chen <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Closes #4916
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
ZFS doesn't provide a custom update_time method, meaning it delegates
this job to the generic VFS layer. The only time it needs to
set the various *time values is when the inode is being marshalled
to/from disk. Do this by moving the relevant code from
zfs_inode_update_impl to zfs_node_alloc and zfs_rezget. As a result
of this change it is no longer necessary to have multiple versions
of the zfs_inode_update function, so remove them and leave only
one.
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Chunwei Chen <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Issue #227
Closes #4916
|
|
|
|
|
|
|
|
| |
Authored by: Dan Kimmel <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Ported by: David Quigley <[email protected]>
Issue #5078
|
|
|
|
|
|
|
|
| |
Authored by: Dan Kimmel <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Ported by: David Quigley <[email protected]>
Issue #5078
|
|
|
|
|
|
|
|
| |
Reviewed by: Dan Kimmel <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Reviewed by: David Quigley <[email protected]>
Signed-off-by: Tom Caputi <[email protected]>
Issue #5078
|
|
|
|
|
|
|
|
| |
Authored by: Dan Kimmel <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Ported by: David Quigley <[email protected]>
Issue #5078
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Authored by: George Wilson <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: Dan Kimmel <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Paul Dagnelie <[email protected]>
Reviewed by: Tom Caputi <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Ported by: David Quigley <[email protected]>
This review covers the reading and writing of compressed arc headers, sharing
data between the arc_hdr_t and the arc_buf_t, and the implementation of a new
dbuf cache to keep frequently accessed data uncompressed.
I've added a new member to the L1 arc hdr called b_pdata. The b_pdata always hangs
off the arc_buf_hdr_t (if an L1 hdr is in use) and points to the physical block
for that DVA. The physical block may or may not be compressed. If compressed
arc is enabled and the block on-disk is compressed, then the b_pdata will match
the block on-disk and remain compressed in memory. If the block on disk is not
compressed, then neither is the b_pdata. Lastly, if compressed arc is
disabled, then b_pdata will always be an uncompressed version of the on-disk
block.
Typically the arc will cache only the arc_buf_hdr_t and will aggressively evict
any arc_buf_t's that are no longer referenced. This means that the arc will
primarily have compressed blocks as the arc_buf_t's are considered overhead and
are always uncompressed. When a consumer reads a block we first look to see if
the arc_buf_hdr_t is cached. If the hdr is cached then we allocate a new
arc_buf_t and decompress the b_pdata contents into the arc_buf_t's b_data. If
the hdr already has an arc_buf_t, then we will allocate an additional arc_buf_t
and bcopy the uncompressed contents from the first arc_buf_t to the new one.
Writing to the compressed arc requires that we first discard the b_pdata since
the physical block is about to be rewritten. The new data contents will be
passed in via an arc_buf_t (uncompressed) and during the I/O pipeline stages we
will copy the physical block contents to a newly allocated b_pdata.
When an l2arc is in use it will also take advantage of the b_pdata. Now the
l2arc will always write the contents of b_pdata to the l2arc. This means that
when compressed arc is enabled the l2arc blocks are identical to those
stored in the main data pool. This provides a significant advantage since we
can leverage the bp's checksum when reading from the l2arc to determine if the
contents are valid. If the compressed arc is disabled, then we must first
transform the read block to look like the physical block in the main data pool
before comparing the checksum and determining whether it is valid.
OpenZFS-issue: https://www.illumos.org/issues/6950
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/7fc10f0
Issue #5078
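A heavily simplified sketch of the relationship described above (field and type names are abbreviated for illustration only; the real arc_buf_hdr_t and arc_buf_t carry far more state):

    /*
     * Illustrative sketch only -- not the actual ZFS definitions.
     * The header owns b_pdata, a copy of the physical block (compressed
     * when compressed arc is enabled and the on-disk block is compressed).
     * Each consumer gets an arc_buf_t whose b_data is an uncompressed copy.
     */
    typedef struct sketch_arc_buf {
        struct sketch_arc_buf  *b_next;   /* other uncompressed copies */
        void                   *b_data;   /* uncompressed data for a consumer */
    } sketch_arc_buf_t;

    typedef struct sketch_arc_hdr {
        void                   *b_pdata;  /* mirrors the on-disk (physical) block */
        sketch_arc_buf_t       *b_buf;    /* uncompressed consumer buffers */
    } sketch_arc_hdr_t;

    /*
     * Read path in outline: if only the header is cached, allocate an
     * arc_buf_t and decompress b_pdata into its b_data; if a buf already
     * exists, allocate another and bcopy() the uncompressed contents.
     * Write path: discard b_pdata, accept new data via an uncompressed
     * arc_buf_t, and fill a freshly allocated b_pdata in the I/O pipeline.
     */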
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Authored by: Paul Dagnelie <[email protected]>
Reviewed by: John Wren Kennedy <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: Yuri Pankov <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Approved by: Dan McDonald <[email protected]>
Ported-by: candychencan <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/7262
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/b868f5d
Closes #5080
|
|
|
|
|
|
| |
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: luozhengzheng <[email protected]>
Closes #5056
|
|
|
|
|
|
|
| |
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: loli10K <[email protected]>
Closes #4503
Closes #5072
|
|
|
|
|
|
|
|
|
|
|
|
| |
Several assignments to arc_c had no effect because it is ultimately
initialized to arc_c_max.
This aligns ZoL better with the upstream code which removed these
assignments some time ago.
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Closes #5081
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In case sav->sav_config was NULL, the body of the function
would skip the iteration of the L2 cache devices and just
clean up the old devices. However, this wasn't very obvious
since the NULL check was performed after the loop body and after
the old devices were cleaned up. Refactor the code so that it's now
obvious when the iteration of the l2cache devices is skipped.
This fixes the following cppcheck warning:
[module/zfs/spa.c:1552]: (error) Possible null pointer dereference: newvdevs
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Nikolay Borisov <[email protected]>
Closes #5087
|
|
|
|
|
|
|
|
|
|
|
| |
Since they're allocated with spa_strdup(), they should be freed with
spa_strfree() so that a buffer of the proper length is freed.
Reviewed-by: Richard Yao <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
Closes #5082
Closes #5086
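The underlying rule is that an allocator which derives the buffer size from the string length must be paired with the matching free routine. A minimal user-space analogue of the spa_strdup()/spa_strfree() pairing (names and the malloc()/free() backing are illustrative):

    #include <stdlib.h>
    #include <string.h>

    /* Analogue of spa_strdup(): the allocation size is derived from the
     * string length (length + 1 bytes). */
    char *
    sketch_strdup(const char *s)
    {
        size_t len = strlen(s) + 1;
        char *dup = malloc(len);

        if (dup != NULL)
            memcpy(dup, s, len);
        return (dup);
    }

    /* Analogue of spa_strfree(): the matching free recomputes the same
     * length so the full buffer is released; in the kernel analogue the
     * free routine must be told that size.  Freeing with an unrelated,
     * differently sized routine is the bug class fixed above. */
    void
    sketch_strfree(char *s)
    {
        free(s);
    }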
|
|
|
|
|
|
| |
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: liuhuang <[email protected]>
Closes #5085
|
|
|
|
|
|
|
|
| |
When errors are detected, 'make lint' should return a non-zero
error code. The value 2 was chosen to indicate these are warnings
and not fatal.
Signed-off-by: Brian Behlendorf <[email protected]>
|
|
|
|
|
|
|
| |
Signed-off-by: Moritz Maxeiner <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Closes #4749
Closes #5058
|
|
|
|
|
|
|
| |
Signed-off-by: Moritz Maxeiner <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Closes #4749
Closes #5058
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Author: John Wren Kennedy <[email protected]>
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: Dan Kimmel <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Paul Dagnelie <[email protected]>
Reviewed by: Don Brady <[email protected]>
Reviewed by: Richard Elling <[email protected]>
Reviewed by: Brian Behlendorf <[email protected]>
Reviewed-by: David Quigley <[email protected]>
Approved by: Richard Lowe <[email protected]>
Ported-by: Don Brady <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/6950
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/dcbf3bd6
Delphix-commit: https://github.com/delphix/delphix-os/commit/978ed49
Closes #4929
ZFS Test Suite Performance Regression Tests
This was pulled into OpenZFS via the compressed arc feature and was
separated out in zfsonlinux as a separate pull request from PR-4768.
It originally came in as QA-4903 in Delphix-OS from John Kennedy.
Expected Usage:
$ DISKS="sdb sdc sdd" zfs-tests.sh -r perf-regression.run
Porting Notes:
1. Added assertions in the setup script to make sure required tools
(fio, mpstat, ...) are present.
2. For the config.json generation in perf.shlib used arcstats and
other binaries instead of dtrace to query the values.
3. For the perf data collection:
- use "zpool iostat -lpvyL" instead of the io.d dtrace script
(currently not collecting zfs_read/write latency stats)
- mpstat and iostat take different arguments
- prefetch_io.sh is a placeholder that uses arcstats instead of
dtrace
4. Build machines require the fio, mdadm and sysstat packages (YMMV).
Future Work:
- Need a way to measure zfs_read and zfs_write latencies per pool.
- Need tools that take two sets of output and display/graph the
differences
- Bring over additional regression tests from Delphix
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When using real devices, specify DISKS="sdb sdc sdd" as opposed to
/dev/sdb in zfs-tests.sh - otherwise directory names and disk names
register as "/dev//dev/sdb" for some tests and cause errors. The same
goes for multipath devices: DISKS="mpatha mpathad mpathb"
Expected Usage:
$ DISKS="sdb sdc sdd" zfs-tests.sh
SLICE_PREFIX is now set to "p" for a loop device (e.g. loop0p2), to
"" for a real device (e.g. sdb2), or to either for multipath devices
(e.g. mpatha1 or mpath1p1), instead of only "p" by default. Note that
kpartx partitioning is not currently supported in this patch
(i.e. "partx") and may need to be disabled on Debian distributions.
Functions for determining the test directory (/dev or /dev/mapper)
as well as the slice prefix were added; these values are determined
and exported mostly in the cfg file of each test group directory.
Currently zpools cannot be created on whole mpath devices that have
been partitioned. In order to fix this, tests have either been revised
to use a partition instead, or, if there is a size constraint and the
pool needs to be created on the whole disk, the partitions are deleted
when the device is a multipath device. This functionality is added to
default_cleanup() or to individual cleanup scripts if a non-default
cleanup method is used.
The maximum number of partitions is currently set to 8 to account for
all of the tests thus far.
Patch changes are generally wrapped in "if is_linux" constructs.
Signed-off-by: Sydney Vanda <[email protected]>
Reviewed-by: John Salinas <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: David Quigley <[email protected]>
Closes #4447
Closes #4964
Closes #5074
|
|
|
|
|
|
| |
First release candidate.
Signed-off-by: Brian Behlendorf <[email protected]>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This first phase brings over the ZFS SLM module, zfs_mod.c, to handle
auto operations in response to disk events. Disk event monitoring is
provided by libudev and generates the expected payload schema for
zfs_mod. This work leverages the recently added devid and phys_path
strings in the vdev label.
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Don Brady <[email protected]>
Signed-off-by: Tony Hutter <[email protected]>
Closes #4673
|
|
|
|
|
|
| |
Signed-off-by: luozhengzheng <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5055
|
|
|
|
|
|
|
|
|
| |
These allocations can never fail. Leaving the error handling
code here gives the impression they can, so it has been removed.
Signed-off-by: luozhengzheng <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5048
|
|
|
|
|
|
|
|
|
|
|
| |
Always free the mnpt memory on failure in the zfs_unmount() function.
In the zfs_unshare_proto() function, mountpoint is const and
should not be assigned to.
Signed-off-by: cao.xuewen <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5054
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
perf: 2.75x faster ddt_entry_compare()
The first 256 bits of ddt_key_t are a block checksum, which is expected
to be close to random data. Hence, on average, a comparison only needs to
look at the first few bytes of the keys. To reduce the number of conditional
jump instructions, the result is computed as: sign(memcmp(k1, k2)).
The sign of an integer 'a' can be obtained as `(0 < a) - (a < 0)` := {-1, 0, 1},
which is computed efficiently. Synthetic performance evaluation of the
original and new algorithms over 1G random keys on a 2.6GHz Intel(R) Xeon(R)
CPU E5-2660 v3:
old 6.85789 s
new 2.49089 s
perf: 2.8x faster vdev_queue_offset_compare() and vdev_queue_timestamp_compare()
Compute the result directly instead of using conditionals
perf: zfs_range_compare()
Speedup between 1.1x - 2.5x, depending on compiler version and
optimization level.
perf: spa_error_entry_compare()
`bcmp()` is not suitable for comparator use. Use `memcmp()` instead.
perf: 2.8x faster metaslab_compare() and metaslab_rangesize_compare()
perf: 2.8x faster zil_bp_compare()
perf: 2.8x faster mze_compare()
perf: faster dbuf_compare()
perf: faster compares in spa_misc
perf: 2.8x faster layout_hash_compare()
perf: 2.8x faster space_reftree_compare()
perf: libzfs: faster avl tree comparators
perf: guid_compare()
perf: dsl_deadlist_compare()
perf: perm_set_compare()
perf: 2x faster range_tree_seg_compare()
perf: faster unique_compare()
perf: faster vdev_cache _compare()
perf: faster vdev_uberblock_compare()
perf: faster fuid _compare()
perf: faster zfs_znode_hold_compare()
Signed-off-by: Gvozden Neskovic <[email protected]>
Signed-off-by: Richard Elling <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5033
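The sign trick translates directly into a branch-light AVL-style comparator; a minimal sketch assuming fixed 32-byte keys (the names below are illustrative, not the ZFS comparators themselves):

    #include <string.h>

    #define SKETCH_KEY_SIZE 32      /* 256-bit checksum, as in ddt_key_t */

    /* Clamp an int to {-1, 0, 1} without conditional branches. */
    static inline int
    cmp_sign(int a)
    {
        return ((0 < a) - (a < 0));
    }

    /*
     * AVL-style comparator: memcmp() usually decides within the first few
     * bytes of near-random checksums, and the sign computation avoids the
     * conditional jumps of an explicit if/else ladder.
     */
    static int
    sketch_key_compare(const void *x1, const void *x2)
    {
        return (cmp_sign(memcmp(x1, x2, SKETCH_KEY_SIZE)));
    }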
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The argument processing in zhack makes the assumption that getopt()
will not permute argv. This isn't true for the GNU implementation of
getopt() unless the optstring is prefixed with a '+', in which case
it is equivalent to setting the POSIXLY_CORRECT environment variable.
In addition, update usage() and the optstrings to reflect the existing
supported options.
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: liaoyuxiangqin <[email protected]>
Closes #5047
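With GNU getopt(), a leading '+' in the optstring (like setting POSIXLY_CORRECT) stops argument permutation at the first non-option, which is what subcommand-style parsers such as zhack rely on. A minimal sketch, not the actual zhack code:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        int c;

        /*
         * The leading '+' tells GNU getopt() to stop at the first
         * non-option argument instead of permuting argv, so a
         * subcommand and its arguments keep their positions.
         */
        while ((c = getopt(argc, argv, "+hv")) != -1) {
            switch (c) {
            case 'h':
                printf("usage: sketch [-hv] <subcommand> [args]\n");
                return (0);
            case 'v':
                printf("verbose\n");
                break;
            default:
                return (1);
            }
        }

        if (optind < argc)
            printf("subcommand: %s\n", argv[optind]);
        return (0);
    }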
|
|
|
|
|
|
|
|
|
|
|
| |
Older versions of blkid may not promptly detect ZFS labels when
they're located on partitions. To ensure this test passes
reliably, always perform a scan of the default search paths (-s).
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: liaoyuxiangqin <[email protected]>
Closes #4987
Closes #5047
|
|
|
|
|
|
|
|
|
|
|
|
| |
`zpool get guid,freeing,leaked` shows SOURCE as `default`; it should
be `-` since those props are not editable.
Change the code to not overwrite `src` for `ZPOOL_PROP_VERSION`, so it
stays `ZPROP_SRC_NONE`. Make `src` const to avoid future mistakes.
Signed-off-by: Hajo Möller <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #4170
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Issues:
Under Linux, executing 'destroy $fs' in zfs_destroy_004.ksh is an
error. The key issue here is that the illumos kernel treats this case
differently than the Linux kernel. On illumos you can unmount and
destroy a filesystem which is busy, and all consumers of it get EIO.
On Linux the expected behavior is to prevent the unmount and destroy.
Cause analysis:
After creating the $fs file system, mounting it at $mntp, and changing
into $mntp, Linux does not allow $fs to be destroyed from within the
mount point, regardless of which parameters are passed to destroy.
Solution:
Expect the failure with log_mustnot $ZFS destroy $fs, then
cd back to $olddir and destroy $fs.
Signed-off-by: caoxuewen [email protected]
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5012
|
|
|
|
|
|
|
|
|
|
| |
As the scripts zfs_create_003_pos.ksh and zfs_create_006_pos.ksh
run successfully on Linux, add them to the linux.run file to
increase test coverage.
Signed-off-by: ChaoyuZhang <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5002
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add helpers which automatically retry the provided command when
the error message matches the provided keyword. This provides an
easy way to handle the asynchronous nature of some ZFS commands.
For example, the `zfs destroy` command may need to be retried in
the case where the block device is unexpectedly busy. This can be
accomplished as follows:
log_must_busy $ZFS destroy ...
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #5002
|
|
|
|
|
|
|
|
|
|
| |
Update zfs_mount_005_pos.ksh and zfs_mount_010_neg.ksh to reflect
the expected Linux behavior. The is_linux wrapper is used so the
test case may be used on Linux and non-Linux platforms.
Signed-off-by: liuhuang <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5000
|
|
|
|
|
|
|
|
|
|
|
| |
zfsctl_snapdir_inactive is defined in zfs-0.6.3. In zfs-0.6.5.7
the declaration remains even though the implementation was
removed in commit 278bee93. Also remove fastreboot_disable_highpil,
which is likewise unused.
Signed-off-by: caoxuewen [email protected]
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5042
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
From a user's perspective, I would expect that ZFS is always able
to remove files and directories even when the quota is exceeded.
Authored by: Simon Klinkert <[email protected]>
Reviewed by: Dan McDonald <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Ported-by: kernelOfTruth [email protected]
Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/6940
OpenZFS-issue: https://www.illumos.org/issues/6334
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/9918916
Closes #5044
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For quite some time I was thinking about the possibility of prefetching
ZFS indirection tables while doing sequential reads or writes.
Recent changes in the predictive prefetcher made that much easier to
do. My tests on a zvol with a 16KB block size on a 5x striped and 2x
mirrored pool of 10 disks show almost double the throughput on sequential
read, and almost triple on sequential rewrite. While for reads a similar
effect can be obtained by increasing the maximum prefetch distance
(though at a higher memory cost), for rewrite there is no other
solution so far.
Authored by: Alexander Motin <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed by: Paul Dagnelie <[email protected]>
Approved by: Robert Mustacchi <[email protected]>
Ported-by: kernelOfTruth [email protected]
Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/6322
OpenZFS-commit: https://github.com/illumos/illumos-gate/commit/cb92f413
Closes #5040
Porting notes:
- Change from upstream in module/zfs/dbuf.c in 'int dbuf_read' due
to commit 5f6d0b6 'Handle block pointers with a corrupt logical size'
- Difference from upstream in module/zfs/dmu_zfetch.c,
uint32_t zfetch_max_idistance -> unsigned int zfetch_max_idistance
- Variables have been initialized at the beginning of the function
(void dmu_zfetch) to resemble the order of occurrence and account
for C99, C11 mode errors.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In dbuf_dirty(), we need to grab the dn_struct_rwlock before looking at
the db_blkptr, to prevent it from being changed by syncing context.
Reviewed by: Prakash Surya <[email protected]>
Reviewed by: George Wilson <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/7086
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/98fa317
Closes #5039
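The general discipline is the usual one for reader/writer locks: take the lock that protects the pointer before every dereference, because the writer (here, syncing context) may swap it underneath the reader. A generic pthreads sketch of that discipline (not the ZFS code itself):

    #include <pthread.h>
    #include <stddef.h>

    typedef struct blkref {
        pthread_rwlock_t  br_lock;  /* protects br_ptr (cf. dn_struct_rwlock) */
        int              *br_ptr;   /* may be replaced by the writer side */
    } blkref_t;

    /* Reader: hold the lock across every dereference of the protected pointer. */
    int
    blkref_read(blkref_t *br)
    {
        int val = 0;

        pthread_rwlock_rdlock(&br->br_lock);
        if (br->br_ptr != NULL)
            val = *br->br_ptr;
        pthread_rwlock_unlock(&br->br_lock);
        return (val);
    }

    /* Writer (the "syncing context" analogue): swap the pointer under the lock. */
    void
    blkref_replace(blkref_t *br, int *newp)
    {
        pthread_rwlock_wrlock(&br->br_lock);
        br->br_ptr = newp;
        pthread_rwlock_unlock(&br->br_lock);
    }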
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This fix resolves warnings reported when compiling with different gcc
optimization levels in debug mode.
Test tools:
gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
Linux version: 2.6.32-573.18.1.el6.x86_64, Red Hat Enterprise Linux Server release 6.1 (Santiago)
List of warnings:
CFLAGS=-O1 ./configure --enable-debug ;make
../../module/icp/core/kcf_sched.c: In function ‘kcf_aop_done’:
../../module/icp/core/kcf_sched.c:499: error: ‘fg’ may be used uninitialized in this function
../../module/icp/core/kcf_sched.c:499: note: ‘fg’ was declared here
CFLAGS=-Os ./configure --enable-debug ; make
libzfs_dataset.c: In function ‘zfs_prop_set_list’:
libzfs_dataset.c:1575: error: ‘nvl_len’ may be used uninitialized in this function
Signed-off-by: GeLiXin <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5022
|
|
|
|
|
|
|
|
|
|
|
|
| |
The user space implementation of cv_timedwait_hires() was always passing
a relative time to pthread_cond_timedwait() when an absolute time is
expected. This was accidentally introduced in commit 206971d2.
Replace two magic values with their corresponding preprocessor macro.
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Richard Yao <[email protected]>
Closes #5024
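pthread_cond_timedwait() expects an absolute timespec, so a relative timeout must first be added to the current CLOCK_REALTIME value. A minimal sketch of the correct conversion (the helper name is illustrative):

    #include <pthread.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000L

    /*
     * Wait on cv for at most 'rel' (a relative timeout).  The deadline
     * handed to pthread_cond_timedwait() must be absolute; passing a
     * relative value directly makes the wait expire almost immediately.
     */
    int
    cond_timedwait_rel(pthread_cond_t *cv, pthread_mutex_t *mtx,
        const struct timespec *rel)
    {
        struct timespec abstime;

        clock_gettime(CLOCK_REALTIME, &abstime);
        abstime.tv_sec += rel->tv_sec;
        abstime.tv_nsec += rel->tv_nsec;
        if (abstime.tv_nsec >= NSEC_PER_SEC) {
            abstime.tv_sec++;
            abstime.tv_nsec -= NSEC_PER_SEC;
        }

        return (pthread_cond_timedwait(cv, mtx, &abstime));
    }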
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The ARC will evict meta buffers that exceed arc_meta_limit. Pending further
investigation into whether meta buffers should receive special protection,
this tunable makes arc_meta_limit adjustable for different workloads.
People can set zfs_arc_meta_limit_percent to any value when loading zfs.ko
with insmod, so a range check is added to guarantee a suitable arc_meta_limit.
As suggested by Tim Chase, zfs_arc_dnode_limit is changed to a percent-style
tunable as well.
Signed-off-by: GeLiXin <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #4957
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As is the case with traverse_prefetch_thread(), the deep stacks caused
by traversal require disabling reclaim in the send traverse thread.
Also, do the same for receive_writer_thread() in which similar problems
have been observed.
Signed-off-by: Tim Chase <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #4912
Closes #4998
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If the loop index i reaches (ZFS_GET_NCOLS - 1), cbp->cb_columns[i + 1]
actually reads the data of cbp->cb_colwidths[0], which means the array
subscript is out of bounds.
Luckily cbp->cb_colwidths[0] is always 0, and it seems we haven't
looped enough times to exceed the array bounds so far, but it remains
a hidden risk.
Signed-off-by: GeLiXin <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #5003
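The shape of the bug is a one-element lookahead inside a loop that runs all the way to the last valid index; a generic sketch of the pattern and its fix (names are illustrative, not the actual code):

    #include <stdio.h>

    #define NCOLS 5

    struct table {
        int columns[NCOLS];
        int colwidths[NCOLS];   /* laid out right after columns[] */
    };

    void
    walk(struct table *t)
    {
        /*
         * Reading columns[i + 1] in a loop that lets i reach NCOLS - 1
         * steps past the array and (with this layout) lands on
         * colwidths[0].  Stopping the loop one element earlier keeps
         * the lookahead in bounds.
         */
        for (int i = 0; i < NCOLS - 1; i++)
            printf("col %d, next col %d\n",
                t->columns[i], t->columns[i + 1]);
    }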
|
|
|
|
|
|
|
|
|
|
|
| |
API Change: Module parameter set/get methods take const parameter in
Grsecurity kernel v4.7.1
Signed-off-by: Gvozden Neskovic <[email protected]>
Signed-off-by: Jason Zaman <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #4997
Closes #5001
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Using a benchmark which has 32 threads creating 2 million files in the
same directory, on a machine with 16 CPU cores, I observed poor
performance. I noticed that dmu_tx_hold_zap() was using about 30% of
all CPU, and doing dnode_hold() 7 times on the same object (the ZAP
object that is being held).
dmu_tx_hold_zap() keeps a hold on the dnode_t the entire time it is
running, in dmu_tx_hold_t:txh_dnode, so it would be nice to use the
dnode_t that we already have in hand, rather than repeatedly calling
dnode_hold(). To do this, we need to pass the dnode_t down through
all the intermediate calls that dmu_tx_hold_zap() makes, making these
routines take the dnode_t* rather than an objset_t* and a uint64_t
object number. In particular, the following routines will need to have
analogous *_by_dnode() variants created:
dmu_buf_hold_noread()
dmu_buf_hold()
zap_lookup()
zap_lookup_norm()
zap_count_write()
zap_lockdir()
This can improve performance on the benchmark described above by 100%,
from 30,000 file creations per second to 60,000. (This improvement is on
top of that provided by working around the object allocation issue. Peak
performance of ~90,000 creations per second was observed with 8 CPUs;
adding CPUs past that decreased performance due to lock contention.) The
CPU used by dmu_tx_hold_zap() was reduced by 88%, from 340 CPU-seconds
to 40 CPU-seconds.
Sponsored by: Intel Corp.
Signed-off-by: Matthew Ahrens <[email protected]>
Signed-off-by: Ned Bass <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/7004
OpenZFS-commit: https://github.com/openzfs/openzfs/pull/109
Closes #4641
Closes #4972
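The refactoring pattern itself is simple: resolve the handle once in the outermost caller and thread it through the helpers, instead of re-resolving the object number at every level. A generic sketch of that pattern (the types and functions are hypothetical, not the ZFS *_by_dnode() interfaces):

    /* Hypothetical types and helpers -- not the ZFS dnode interfaces. */
    typedef struct handle {
        int h_id;
        int h_refs;
    } handle_t;

    static handle_t handle_table[16];

    /* Stand-in for dnode_hold(): resolving an object number costs a lookup. */
    static handle_t *
    handle_hold(int object)
    {
        handle_t *h = &handle_table[object % 16];

        h->h_id = object;
        h->h_refs++;
        return (h);
    }

    static void
    handle_release(handle_t *h)
    {
        h->h_refs--;
    }

    /* Before: every helper that gets only the object number repeats the hold. */
    static int
    helper_by_id(int object)
    {
        handle_t *h = handle_hold(object);
        int r = h->h_id;

        handle_release(h);
        return (r);
    }

    /* After: a *_by_handle() variant reuses the hold the caller already has. */
    static int
    helper_by_handle(handle_t *h)
    {
        return (h->h_id);
    }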
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
zap_lockdir() / zap_unlockdir() should take a "void *tag" argument which
tags the hold on the zap. This will help diagnose programming errors
which misuse the hold on the ZAP.
Sponsored by: Intel Corp.
Signed-off-by: Matthew Ahrens <[email protected]>
Signed-off-by: Pavel Zakharov <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
OpenZFS-issue: https://www.illumos.org/issues/7003
OpenZFS-commit: https://github.com/openzfs/openzfs/pull/108
Closes #4972
|
|
|
|
|
|
|
|
|
|
| |
When an spa retry load succeeds and spa recovery is requested, the
generated config may be leaked in the spa_load_best function. Always
free the generated config when it is not assigned to the spa.
Signed-off-by: cao.xuewen <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #4940
|