The Lustre packages satisfy their backend filesystem requirement by
checking that lustre-backend-fs is provided. Update the zfs
packaging accordingly.
Signed-off-by: Brian Behlendorf <[email protected]>
Create the sixth 0.6.0 release candidate tag (rc6).
Export all the symbols for the system attribute (SA) API. This
allows external modules to cleanly manipulate the SAs associated
with a dnode. Documentation for the SA API can be found in the
module/zfs/sa.c source.
This change also removes the zfs_sa_uprade_pre and
zfs_sa_uprade_post prototypes. The functions themselves were
dropped some time ago.
Signed-off-by: Brian Behlendorf <[email protected]>
Relying on an /etc/hostid file which is installed in the system
image breaks diskless systems which share an image. Certain
cluster infrastructure such as MPI relies on all nodes having
a unique hostid. However, we still must be careful to ensure
the hostid is synchronized between the initramfs and system
images when using zfs root filesystems.
To accomplish this the automatically created /etc/hostid file has
been removed from the spl rpm packaging. The /etc/hostid file
is now dynamically created for your initramfs as part of the
dracut install process. This avoids the need to install it in
the actual system images.
This change also resolves the spl_hostid parameter handling
for dracut.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #398
Closes #399
The '==' operator is bash-specific; replace it with the portable
'=' operator for sh. This bug can prevent correct booting
when using a zfs root pool.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #416
Due to the confusion in Linux statfs between f_frsize and f_bsize
the block counts were changed to be in units of z_max_blksize
instead of SPA_MINBLOCKSIZE as it is on other platforms.
However, the free files calculation in zfs_statvfs() is limited by
the free blocks count, since each dnode consumes one block/sector.
This provided a reasonable estimate of free inodes, but on Linux
it meant that the free inodes count was underestimated by a large
amount, since 256 512-byte dnodes can fit into a 128kB block, and
more if the max blocksize is increased to 1MB or larger.
The use of SPA_MINBLOCKSIZE is also semantically incorrect since
DNODE_SIZE may change to a value other than SPA_MINBLOCKSIZE, may
even change per dataset, and devices with large sectors (and thus
a larger ashift) will also use a larger blocksize.
Correct the f_ffree calculation to use (availbytes >> DNODE_SHIFT)
to more accurately compute the maximum number of dnodes that can
be created.
Signed-off-by: Andreas Dilger <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #413
Closes #400
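
As a rough illustration of the corrected arithmetic (a hedged,
self-contained sketch; the DNODE_SHIFT of 9, i.e. 512-byte dnodes,
is the historical on-disk value assumed here):

    #include <stdint.h>
    #include <stdio.h>

    #define DNODE_SHIFT 9   /* log2(512): one dnode per 512 bytes */

    int
    main(void)
    {
            /* With 1 GiB free, (1 << 30) >> 9 = 2,097,152 dnodes fit --
             * far more than a free count in 128kB block units implies. */
            uint64_t availbytes = 1ULL << 30;
            uint64_t f_ffree = availbytes >> DNODE_SHIFT;

            printf("estimated free inodes: %llu\n",
                (unsigned long long)f_ffree);
            return (0);
    }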
When compiling under Debian Lenny with gcc version 4.3.2
(Debian 4.3.2-1.1) the following warning occurs. To quiet
the warning initialize 'error' to zero. Newer versions of
gcc correctly determine that use of this uninitialized variable
is impossible because ZFS_NUM_USERQUOTA_PROPS is known to be
greater than zero.
cmd/zfs/zfs_main.c:2377: warning: "error" may be
used uninitialized in this function
Signed-off-by: Brian Behlendorf <[email protected]>
Export all the symbols for the ZAP API. This allows external modules
to cleanly interface with ZAP type objects. Previously only a subset
of the functionality was exposed. Documentation for the ZAP API can be
found in the sys/zap.h header.
This change also removes a duplicate zap_increment_int() prototype.
Signed-off-by: Brian Behlendorf <[email protected]>
GPTs created by libefi set the HeaderSize attribute in the GPT
header to 512 -- the size of the GPT header INCLUDING the 420
padding bytes at the end. Most other tools set the size to 92 --
the size of the actual header itself excluding the padding. Most
tools check the recorded HeaderSize when verifying the CRC, but
gptfdisk hardcodes 92 and thus reports CRC verification problems
for full-disk vdevs created e.g. with `zpool create pool sdc`.
This commit changes libefi's behavior for GPT creation and also
fixes several edge cases where libefi's behavior was similar
(though in an incompatible manner) to gptfdisk. Libefi assumed
HeaderSize was always 512 even if the GPT recorded a different
value. Sanity checks of the GPT HeaderSize read from disk were
added before the checksum calculation -- this prevents a segfault
in cases of bogus on-disk values.
Zpools created with the resulting libefi are verified as correct
by both parted and gptfdisk. Pools have also been tested to
import correctly on ZFS on Linux as well as the Solaris Express 11
livecd.
Signed-off-by: Zachary Bedell <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #344
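
A minimal sketch of the safer verification order described above. The
field offsets follow the published GPT layout; the crc32() helper and
the gpt_crc_ok() wrapper are assumptions for illustration, not libefi
code:

    #include <stdint.h>
    #include <string.h>

    struct gpt_header {
            uint8_t  signature[8];  /* "EFI PART" */
            uint32_t revision;
            uint32_t header_size;   /* 92 by convention; old libefi: 512 */
            uint32_t header_crc32;  /* CRC over header_size bytes */
            /* ... remaining fields, then padding to the sector size ... */
    };

    uint32_t crc32(uint32_t seed, const void *buf, size_t len); /* assumed */

    /* Checksum over the *recorded* header size, after sanity checks. */
    int
    gpt_crc_ok(const struct gpt_header *h, const uint8_t *raw, size_t sector)
    {
            uint8_t tmp[512];
            uint32_t size = h->header_size;

            if (size < 92 || size > sector || size > sizeof (tmp))
                    return (0);     /* bogus on-disk value; don't segfault */

            memcpy(tmp, raw, size);
            memset(tmp + 16, 0, 4); /* header_crc32 sits at offset 16 */
            return (crc32(0, tmp, size) == h->header_crc32);
    }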
The mount-zfs.sh script incorrectly parsed results from 'zpool list'.
The correct bootfs attribute was only found on systems with a single
pool or where the bootable pool's name alphabetized to before all
other pool names. Boot failed when the bootable pool's name came
after other pools (e.g. with 'rpool' and 'mypool', the bootfs on
rpool would not be found). This patch correctly discards pools
whose bootfs attribute is blank ('-').
Signed-off-by: Zachary Bedell <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #402
As written, the $(init_SCRIPTS) rule in etc/init.d/Makefile.am
would not work as expected if the init_SCRIPTS variable were
to contain any elements other than zfs. Fix this by replacing
the hard-coded 'zfs' reference with $@.
Signed-off-by: Ned Bass <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #410
Suppress the warning for this large kmem_alloc() because it is not
that far over the warning threshold (8k) and it is short lived.
Signed-off-by: Brian Behlendorf <[email protected]>
The zfs-devel header files for linking with the libspl/libzfs
libraries should be installed under /usr/include not /include.
Ensure the correct install location is used when building an
rpm package.
Signed-off-by: Brian Behlendorf <[email protected]>
Caught by code inspection, the variable zsb was referenced after
being freed. Move the kmem_free() to the end of the function.
Signed-off-by: Brian Behlendorf <[email protected]>
It seems that dracut versions 009 through 013 won't boot correctly
when the zfs-dracut rpm package has been installed but 'root=zfs'
isn't used on the boot command line, for example when the package
has been installed on a system that _doesn't_ boot from a zfs
filesystem.
Signed-off-by: Jeremy Gill <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #377
The zfs.spec.in file had the license field hard coded to specify the
CDDL. This was changed to use the @LICENSE@ variable, maintaining
consistency with the zfs-modules.spec.in file.
Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
The URL fields in the zfs-modules and zfs package spec files were
updated to point to the ZFS on Linux repository hosted on GitHub.
Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
The 'if' statements found in kernel.m4 were converted to use the
portable alternative provided by autoconf, the AS_IF macro.
Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
A few of the autoconf error messages were inconsistent with the rest of
the build system. To be specific, the inconsistencies addressed by this
commit are the following:
* The second line of the error message for the CONFIG_PREEMPT check
was missing its third asterisk.
* A few of the error messages were prefixed by two tabs, whereas the
majority of error messages are only prefixed by a single tab.
Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
This regression was accidentally introduced by commit aa2b489.
I was attempting to simplify the init scripts and accidentally
confused the /etc/init.d and /etc/zfs paths. This change reverts
the init script modifications.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #370
Merge the remaining udev restructuring changes and cleanup.
Signed-off-by: Brian Behlendorf <[email protected]>
Signed-off-by: Kyle Fuller <[email protected]>
Signed-off-by: Zachary Bedell <[email protected]>
Closes #356
Change the variable substitution in the init script templates
according to the method described in the Autoconf manual;
Chapter 4.7.2: Installation Directory Variables.
Signed-off-by: Brian Behlendorf <[email protected]>
This ensures that the module-setup.sh script will always be able to
install the required dracut components regardless of how the zfs
package was configured.
Signed-off-by: Brian Behlendorf <[email protected]>
This rule does not need to be dracut-specific. Automatically loading
the zfs module stack when a zfs device is detected is usually desirable.
My only concern is that this might cause trouble for large pools where
we don't want to automatically import the pool until all the disks are
available. However, we'll cross that bridge when we come to it.
Signed-off-by: Brian Behlendorf <[email protected]>
The warnings listed in the suppression file will be suppressed
and not flagged during regular buildbot builds. These warnings
are expected, harmless, and can obscure real issues unless they
are suppressed.
Signed-off-by: Brian Behlendorf <[email protected]>
This warning was accidentally introduced by commit
b7936d5c2337bc976ac831c1c38de563844c36b. The fix is to
simply initialize the variable to ZFS_DELEG_WHO_UNKNOWN.
cmd/zfs/zfs_main.c:4460:25: warning: 'who_type' may be
used uninitialized in this function
Signed-off-by: Brian Behlendorf <[email protected]>
These warnings were accidentally introduced by commit
b7936d5c2337bc976ac831c1c38de563844c36b. The fix is to
simply add the missing format specifier.
cmd/zfs/zfs_main.c:4565: warning: format not a string
literal and no format arguments
Signed-off-by: Brian Behlendorf <[email protected]>
This warning was accidentally introduced by commit
f3ab88d6461dec46dea240763843f66300facfab which updated the
.readpages() implementation. The fix is to simply cast
the helper function to the appropriate type when passed.
Signed-off-by: Brian Behlendorf <[email protected]>
Completely disable the zfs binary from attempting to directly update
/etc/mtab. The Linux port relies entirely on the mount.zfs helper
to safely update /etc/mtab. If we left the /etc/mtab updates to
the zfs binary then they could race with concurrent non-zfs mounts.
Routing everything through the system mount command ensures the
/etc/mtab updates are locked properly.
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #329
The hardened Gentoo kernel defines all of the super block
operation callbacks as const. This prevents the autoconf test
from assigning the callback and results in a false negative.
By moving the assignment into the declaration we can avoid
this issue and get a correct result for this patched kernel.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #296
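
A sketch of the conftest shape this implies (assumed, not the
project's exact test): assigning to the member after declaration
cannot compile against a const table, while a designated initializer
works either way:

    #include <linux/fs.h>

    /*
     * Old pattern, breaks when the kernel constifies ops tables:
     *     static struct super_operations sops;
     *     sops.show_options = NULL;
     * New pattern, with the assignment moved into the declaration:
     */
    static struct super_operations sops
        __attribute__ ((unused)) = {
            .show_options = NULL,
    };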
Run autogen.sh using the same autotools versions as upstream:
* autoconf-2.63
* automake-1.11.1
* libtool-2.2.6b
This change moves the default install location for the zfs udev
rules from /etc/udev/ to /lib/udev/. The correct convention is
for rules provided by a package to be installed in /lib/udev/.
The /etc/udev/ directory is reserved for custom rules or local
overrides.
Additionally, this patch cleans up some abuse of the bindir install
location by adding udevdir and udevruledir install directories.
This allows us to revert to the default bin install location. The
udev install directories can be set with the following new options.
--with-udevdir=DIR install udev helpers [EPREFIX/lib/udev]
--with-udevruledir=DIR install udev rules [UDEVDIR/rules.d]
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #356
Unlike the .readpage() callback, which is passed a single locked page
to be populated, the .readpages() callback is passed a list of unlocked
pages which are all marked for read-ahead (PG_readahead set). It is
the responsibility of .readpages() to ensure the pages are properly
locked before being populated.
Prior to this change the requested read-ahead pages would be updated
outside of the page lock, which is unsafe. The unlocked pages would
then be unlocked again, which is harmless but should have been
immediately detected as a bug. Unfortunately, newer kernels failed to
detect this issue because the check is done with a VM_BUG_ON, which is
disabled by default. Luckily, the old Debian Lenny 2.6.26 kernel
caught this because it simply uses a BUG_ON.
The straightforward fix is to update the .readpages() callback to use
the read_cache_pages() helper function. The helper function ensures
that each page in the list is properly locked before it is passed to
the .readpage() callback. In addition to resolving the bug, this
results in a nice simplification of the existing code.
The downside to this change is that instead of passing one large read
request to the dmu, multiple smaller ones are submitted. All of these
requests however are marked for readahead so the lower layers should
issue a large I/O regardless. Thus most of the requests should hit
the ARC cache.
Further optimization of this code can be done in the future if a
performance analysis determines it to be worthwhile. But for the
moment, it is preferable that the code be correct and understandable.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #355
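
The resulting callback is essentially a one-liner. A hedged sketch,
assuming the existing zpl_readpage() .readpage implementation serves
as the filler:

    #include <linux/fs.h>
    #include <linux/pagemap.h>

    int zpl_readpage(struct file *filp, struct page *pp); /* existing */

    /*
     * read_cache_pages() adds each listed page to the page cache, locks
     * it, and only then invokes the filler, restoring the locked-page
     * contract that .readpage() already depends on.
     */
    static int
    zpl_readpages(struct file *filp, struct address_space *mapping,
        struct list_head *pages, unsigned nr_pages)
    {
            return (read_cache_pages(mapping, pages,
                (filler_t *)zpl_readpage, filp));
    }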
For a long time now the kernel has been moving away from using the
pdflush daemon to write 'old' dirty pages to disk. The primary reason
for this is because the pdflush daemon is single threaded and can be
a limiting factor for performance. Since pdflush sequentially walks
the dirty inode list for each super block any delay in processing can
slow down dirty page writeback for all filesystems.
The replacement for pdflush is called bdi (backing device info). The
bdi system involves creating a per-filesystem control structure each
with its own private sets of queues to manage writeback. The advantage
is greater parallelism which improves performance and prevents a single
filesystem from slowing writeback to the others.
For a long time both systems co-existed in the kernel so it wasn't
strictly required to implement the bdi scheme. However, as of
Linux 2.6.36 kernels the pdflush functionality has been retired.
Since ZFS already bypasses the page cache for most I/O this is only
an issue for mmap(2) writes which must go through the page cache.
Even then adding this missing support for newer kernels was overlooked
because there are other mechanisms which can trigger writeback.
However, there is one critical case where not implementing the bdi
functionality can cause problems. If an application handles a page
fault it can enter the balance_dirty_pages() callpath. This will
result in the application hanging until the number of dirty pages in
the system drops below the dirty ratio.
Without a registered backing_device_info for the filesystem the
dirty pages will not get written out. Thus the application will hang.
As mentioned above this was less of an issue with older kernels because
pdflush would eventually write out the dirty pages.
This change adds a backing_device_info structure to the zfs_sb_t
which is already allocated per super block. It is registered when
the filesystem is mounted and unregistered on unmount. It will
not be registered for mounted snapshots which are read-only. This
change will result in a flush-<pool> thread being dynamically created
and destroyed per mounted filesystem for writeback.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #174
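
A hedged sketch of the mount-time half. The z_bdi member and helper
name are assumptions; bdi_setup_and_register() is the interface
2.6.34 and newer kernels provide for exactly this registration:

    #include <linux/backing-dev.h>

    typedef struct zfs_sb {
            struct backing_dev_info z_bdi;  /* assumed embedded member */
            /* ... */
    } zfs_sb_t;

    /* Register a per-super-block BDI so balance_dirty_pages() has a
     * writeback thread to wake for this filesystem's dirty pages. */
    static int
    zfs_register_bdi(struct super_block *sb, zfs_sb_t *zsb)
    {
            int error;

            error = bdi_setup_and_register(&zsb->z_bdi, "zfs",
                BDI_CAP_MAP_COPY);
            if (error)
                    return (error);

            sb->s_bdi = &zsb->z_bdi;
            return (0);
    }

    /* The unmount path would call bdi_destroy(&zsb->z_bdi). */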
While the existing implementation of .writepage()/zpl_putpage() was
functional it was not entirely correct. In particular, it would move
dirty pages in to a clean state simply after copying them in to the
ARC cache. This would result in the pages being lost if the system
were to crash even though the Linux VFS believed them to be safe on
stable storage.
Since at the moment virtually all I/O, except mmap(2), bypasses the
page cache this isn't as bad as it sounds. However, as we hopefully
start using the page cache more, getting this right becomes more
important, so it's good to improve this now.
This patch takes a big step in that direction by updating the code
to correctly move dirty pages through a writeback phase before they
are marked clean. When a dirty page is copied in to the ARC it will
now be set in writeback and a completion callback is registered with
the transaction. The page will stay in writeback until the dmu runs
the completion callback indicating the page is on stable storage.
At this point the page can be safely marked clean.
This process is normally entirely asynchronous and will be repeated
for every dirty page. This may initially sound inefficient but most
of these pages will end up in a few txgs. That means when they are
eventually written to disk they should be nicely batched. However,
there is room for improvement. It may still be desirable to batch
up the pages in to larger writes for the dmu. This would reduce
the number of callbacks and small 4k buffers required by the ARC.
Finally, if the caller requires that the I/O be done synchronously
by setting WB_SYNC_ALL, or if ZFS_SYNC_ALWAYS is set, then the I/O
will trigger a zil_commit() to flush the data to stable storage,
at which point the registered callbacks will be run, leaving the
data safe on disk and marked clean before returning from .writepage().
Signed-off-by: Brian Behlendorf <[email protected]>
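
A hedged sketch of the sequence, using the dmu commit-callback API
the description names; the helper name zpl_putpage_done and the exact
call sites are assumptions:

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /* Runs once the txg holding this page's data is on stable storage;
     * the error argument is ignored in this sketch. */
    static void
    zpl_putpage_done(void *data, int error)
    {
            struct page *pp = data;

            end_page_writeback(pp); /* only now is the page truly clean */
    }

    /*
     * In .writepage(), before handing the data to the ARC:
     *
     *     set_page_writeback(pp);
     *     ... copy the page into the ARC under dmu transaction tx ...
     *     dmu_tx_callback_register(tx, zpl_putpage_done, pp);
     *     unlock_page(pp);
     *
     * The page remains in writeback until the callback fires.
     */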
This should simplify the code a bit by re-using existing code
to fork and exec a process.
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #190
Simply closing the stdout and/or stderr file descriptors for
the child process can have bad side effects if, for example,
the child writes to stdout/stderr after open()ing a file.
The open() call might have returned the same file descriptor
one would usually expect for stdout/stderr (1 and 2), thereby
causing misdirected writes.
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #190
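
A minimal sketch of the safer pattern (redirect rather than close, so
a later open() can never be handed descriptor 1 or 2):

    #include <fcntl.h>
    #include <unistd.h>

    /* In the forked child, before exec: point stdout/stderr at
     * /dev/null instead of closing them, keeping fds 1 and 2 busy. */
    static void
    quiet_child(void)
    {
            int fd = open("/dev/null", O_RDWR);

            if (fd >= 0) {
                    (void) dup2(fd, STDOUT_FILENO);
                    (void) dup2(fd, STDERR_FILENO);
                    if (fd > STDERR_FILENO)
                            (void) close(fd);
            }
    }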
At the moment we call exportfs -v every time we check whether an
NFS share is active. This happens every time you run a zfs or
zpool command, making them extremely slow when you have a lot of
exports. The time taken is approximately O(n^2) in the number of
shares.
This commit stores the output from exportfs -v in a temporary file
and uses it to speed up subsequent accesses.
This mechanism is still too slow -- if you have tens of thousands
of NFS shares it will still be painful running ANY zfs/zpool
command.
Signed-off-by: Gunnar Beutner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #341
Merge in ten upstream fixes which have already been made to both
the Illumos and FreeBSD ZFS implementations. This brings us up
to date with the latest ZFS changes in Illumos.
Credit goes to Martin Matuska of the FreeBSD project for posting
an excellent summary of the upstream patches we were missing.
Illumos #1313: Integer overflow in txg_delay()
Illumos #278: get rid zfs of python and pyzfs dependencies
Illumos #1043: Recursive zfs snapshot destroy fails
Illumos #883: ZIL reuse during remount corruption
Illumos #1092: zfs refratio property
Illumos #1051: zfs should handle
Illumos #510: 'zfs get' enhancement - mountpoint as an argument
Illumos #175: zfs vdev cache consumes excessive memory
Illumos #764: panic in zfs:dbuf_sync_list
Illumos #xxx: zdb -vvv broken after zfs diff integration
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #340
The function txg_delay() is used to delay txg (transaction group)
threads in ZFS. The timeout value for this function is calculated
using:
    int timeout = ddi_get_lbolt() + ticks;
Later, the actual wait is performed:
    while (ddi_get_lbolt() < timeout &&
        tx->tx_syncing_txg < txg-1 && !txg_stalled(dp))
            (void) cv_timedwait(&tx->tx_quiesce_more_cv,
                &tx->tx_sync_lock, timeout - ddi_get_lbolt());
The ddi_get_lbolt() function returns the current uptime in clock
ticks and is typed as clock_t. The clock_t type on 64-bit
architectures is int64_t.
The "timeout" variable will overflow depending on the tick frequency
(e.g. at 1000 Hz it overflows after roughly 24.855 days of uptime).
This makes the expression "ddi_get_lbolt() < timeout" always false,
so txg threads will no longer be delayed at all. This leads to a
slowdown in ZFS writes.
The attached patch initializes timeout as clock_t to match the return
value of ddi_get_lbolt().
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #352
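
A self-contained demonstration of the truncation (values chosen to
sit just below the 32-bit limit; the negative wrap on conversion is
what typical systems produce):

    #include <stdio.h>

    int
    main(void)
    {
            /* Uptime in ticks just below INT_MAX, as reached after
             * roughly 24.8 days at 1000 Hz. */
            long long lbolt = 2147483000LL;

            int       bad  = lbolt + 5000;  /* truncated, goes negative */
            long long good = lbolt + 5000;  /* clock_t-width arithmetic */

            printf("int timeout:     %d\n", bad);
            printf("clock_t timeout: %lld\n", good);
            return (0);
    }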
Remove all python and pyzfs dependencies for consistency and
to ensure full functionality even in a minimalist environment.
Reviewed by: [email protected]
Reviewed by: [email protected]
Reviewed by: [email protected]
Reviewed by: [email protected]
Approved by: [email protected]
References to Illumos issue and patch:
- https://www.illumos.org/issues/278
- https://github.com/illumos/illumos-gate/commit/1af68beac3
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Issue #160
Prior to revision 11314 if a user was recursively destroying
snapshots of a dataset the target dataset was not required to
exist. The zfs_secpolicy_destroy_snaps() function introduced
the security check on the target dataset, so since then if the
target dataset does not exist, the recursive destroy is not
performed. Before 11314, only a delete permission check on
the snapshot's master dataset was performed.
Steps to reproduce:
zfs create pool/a
zfs snapshot pool/a@s1
zfs destroy -r pool@s1
Therefore I suggest falling back to the old security check if the
target snapshot does not exist, and continuing with the destroy.
References to Illumos issue and patch:
- https://www.illumos.org/issues/1043
- https://www.illumos.org/attachments/217/recursive_dataset_destroy.patch
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Moving the zil_free() cleanup to zil_close() prevents this
problem from occurring in the first place. There is a very
good description of the issue and fix in Illumos #883.
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Adam Leventhal <[email protected]>
Reviewed by: Albert Lee <[email protected]>
Reviewed by: Gordon Ross <[email protected]>
Reviewed by: Garrett D'Amore <[email protected]>
Reviewed by: Dan McDonald <[email protected]>
Approved by: Gordon Ross <[email protected]>
References to Illumos issue and patch:
- https://www.illumos.org/issues/883
- https://github.com/illumos/illumos-gate/commit/c9ba2a43cb
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Add a "REFRATIO" property, which is the compression ratio based on
data referenced. For snapshots, this is the same as COMPRESSRATIO,
but for filesystems/volumes, the COMPRESSRATIO is based on the
data "USED" (i.e., includes blocks in children, but not blocks
shared with the origin).
This is needed to figure out how much space a filesystem would
use if it were not compressed (ignoring snapshots).
Reviewed by: George Wilson <[email protected]>
Reviewed by: Adam Leventhal <[email protected]>
Reviewed by: Dan McDonald <[email protected]>
Reviewed by: Richard Elling <[email protected]>
Reviewed by: Mark Musante <[email protected]>
Reviewed by: Garrett D'Amore <[email protected]>
Approved by: Garrett D'Amore <[email protected]>
References to Illumos issue and patch:
- https://www.illumos.org/issues/1092
- https://github.com/illumos/illumos-gate/commit/187d6ac08a
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Today zfs tries to allocate blocks evenly across all devices.
This means when devices are imbalanced zfs will use lots of
CPU searching for space on devices which tend to be pretty
full. It should instead fail quickly on the full LUNs and
move on to devices which have more availability.
Reviewed by: Eric Schrock <[email protected]>
Reviewed by: Matt Ahrens <[email protected]>
Reviewed by: Adam Leventhal <[email protected]>
Reviewed by: Albert Lee <[email protected]>
Reviewed by: Gordon Ross <[email protected]>
Approved by: Garrett D'Amore <[email protected]>
References to Illumos issue and patch:
- https://www.illumos.org/issues/510
- https://github.com/illumos/illumos-gate/commit/5ead3ed965
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
The 'zfs get' command should be able to deal with mountpoint
as an argument. It already works with the 'zfs list' command:
# zfs list /export/home/estibi
NAME USED AVAIL REFER MOUNTPOINT
rpool/export/home/estibi 1.14G 3.86G 1.14G /export/home/estibi
but it fails with 'zfs get':
# zfs get all /export/home/estibi
cannot open '/export/home/estibi': invalid dataset name
Reviewed by: Eric Schrock <[email protected]>
Reviewed by: Deano <[email protected]>
Reviewed by: Garrett D'Amore <[email protected]>
Approved by: Garrett D'Amore <[email protected]>
References to Illumos issue and patch:
- https://www.illumos.org/issues/510
- https://github.com/illumos/illumos-gate/commit/5ead3ed965
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Note that with the current ZFS code, it turns out that the vdev
cache is not helpful, and in some cases actually harmful. It
is better if we disable this. Once some time has passed, we
should actually remove this to simplify the code. For now we
just disable it by setting the zfs_vdev_cache_size to zero.
Note that Solaris 11 has made these same changes.
References to Illumos issue and patch:
- https://www.illumos.org/issues/175
- https://github.com/illumos/illumos-gate/commit/b68a40a845
Reviewed by: George Wilson <[email protected]>
Reviewed by: Eric Schrock <[email protected]>
Approved by: Richard Lowe <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Hypothesis about what's going on here.
At some time in the past, something, i.e. dnode_reallocate(),
calls one of:
    dbuf_rm_spill(dn, tx);
These will do:
    dbuf_rm_spill(dnode_t *dn, dmu_tx_t *tx)
      dbuf_free_range(dn, DMU_SPILL_BLKID, DMU_SPILL_BLKID, tx)
        dbuf_undirty(db, tx)
Currently dbuf_undirty can leave a spill block in dn_dirty_records[]
(it having been put there previously by dbuf_dirty) and free it.
Sometime later, dbuf_sync_list trips over this reference to free'd
(and typically reused) memory.
Also, dbuf_undirty can call dnode_clear_range with a bogus
block ID. It needs to test for DMU_SPILL_BLKID, similar to
how dnode_clear_range is called in dbuf_dirty().
References to Illumos issue and patch:
- https://www.illumos.org/issues/764
- https://github.com/illumos/illumos-gate/commit/3f2366c2bb
Reviewed by: George Wilson <[email protected]>
Reviewed by: [email protected]
Reviewed by: Albert Lee <[email protected]>
Approved by: Garrett D'Amore <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
References to Illumos issue and patch:
- https://github.com/illumos/illumos-gate/commit/163eb7ff
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #340
Treat the automatically generated zfs.<distro> init scripts
as build products by adding them to a directory specific
.gitignore file.