|
The dracut code is analogous to the initramfs code, and as such it
should be located in contrib/ alongside the initramfs code for consistency.
Signed-off-by: Brian Behlendorf <[email protected]>
|
Include information about zfs-lib.sh.in and mention that it is possible
to set the bootfs attribute for an entire pool.
Signed-off-by: Sören Tempel <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #3109
|
Signed-off-by: Sören Tempel <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3109
|
Signed-off-by: Sören Tempel <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3109
|
Provide '/lib/dracut-zfs-lib.sh' with utility functions.
Signed-off-by: Sören Tempel <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3109
|
Execute udevadm settle before trying to import pools. Otherwise the
disk device nodes may not be ready before import time. This is
analogous to the behavior of the init scripts and systemd units.
Signed-off-by: Gordan Bobic <[email protected]>
Signed-off-by: Pavel Snajdr <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #3213
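As a minimal sketch of the ordering described above (not the actual hook
from this commit; the pool name "rpool" and the timeout are placeholders):

    # Wait for udev to finish processing queued events so the disk
    # device nodes exist before the import is attempted.
    udevadm settle --timeout=30
    # Then import the root pool without mounting any datasets yet.
    zpool import -N rpool || echo "ZFS: root pool import failed" >&2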
|
If we are not booting from ZFS, parse-zfs.sh fails because of
an unset variable.
Signed-off-by: Steffen Müthing <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3110
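The general shape of such a fix, sketched with an illustrative variable
name (the actual variable in parse-zfs.sh is not shown in this log):

    # Expanding an unset variable aborts the script when the shell runs
    # with "set -u"; supplying an empty default avoids that failure.
    root="${root:-}"
    if [ -z "$root" ]; then
        # Not booting from ZFS; nothing for this hook to do.
        return 0
    fi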
|
The dracut module installs the udev rules and the vdev_id utility for creating
the /dev/disk/by-vdev/ names, but omits some additional utilities and the
config file required by vdev_id.
Signed-off-by: Steffen Müthing <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3110
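In module-setup.sh terms this kind of omission is usually addressed with
additional install calls; a sketch assuming the standard dracut helpers
(the exact utilities and paths below are assumptions, not taken from the
commit):

    install() {
        # vdev_id is a shell script, so the tools it shells out to and
        # its configuration file must also land in the initramfs.
        inst awk
        [ -f /etc/zfs/vdev_id.conf ] && inst /etc/zfs/vdev_id.conf
        return 0
    }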
|
Simplify install() by removing the need for a temp file.
Signed-off-by: Soeren Tempel <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3093
|
Use the correct operators to check the expected data type.
Signed-off-by: Soeren Tempel <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #3093
|
The ability to use blanks (spaces) in pool names is documented in zpool(8)
and implemented in module/zcommon/zfs_namecheck.c:valid_char().
Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #3083
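A sketch of the quoting this implies for scripts that consume such names
(the dataset name is only an example, and NEWROOT is dracut's conventional
mount point variable):

    # A pool or dataset name may legally contain a space, so every
    # expansion must be quoted before it reaches zfs/zpool/mount.
    bootfs="rpool/ROOT/fedora 20"
    mount -t zfs "$bootfs" "$NEWROOT"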
|
The shell executes each command of a pipeline in a subshell,
thus $ret always had the same value after the while loop as it
had before the loop (http://mywiki.wooledge.org/BashFAQ/024),
signaling success even if some of the zpools could not be exported.
Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #3083
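The pitfall and the usual fix, sketched (this is not the literal diff from
the commit):

    # Broken: the while loop runs in a subshell, so the assignment to
    # ret is lost as soon as the pipeline finishes.
    ret=0
    zpool list -H -o name | while read -r pool; do
        zpool export "$pool" || ret=1
    done

    # Fixed: avoid the pipeline so the loop runs in the current shell
    # and the failure is remembered.  (A for loop word-splits, so pool
    # names containing blanks would need a different construct.)
    ret=0
    for pool in $(zpool list -H -o name); do
        zpool export "$pool" || ret=1
    done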
|
Make use of Dracut's ability to restore the initramfs on shutdown and
pivot to it, allowing for a clean unmount and export of the ZFS root.
There is no longer any need to force-import the pool on every boot.
Signed-off-by: Lukas Wunner <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #2195
Issue #2476
Issue #2498
Issue #2556
Issue #2563
Issue #2575
Issue #2600
Issue #2755
Issue #2766
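A rough sketch of what a shutdown-time export hook can look like, assuming
dracut's standard hook mechanism (file name, priority and function name are
illustrative):

    # Installed from module-setup.sh with something like:
    #   inst_hook shutdown 30 "$moddir/export-zfs.sh"
    # and run from the restored initramfs after the root is unmounted.
    _export_all_pools() {
        ret=0
        for pool in $(zpool list -H -o name); do
            zpool export "$pool" || ret=1
        done
        return $ret
    }
    _export_all_pools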
|
Remove lines that contain only a hyphen (match '^-$' instead of '-').
I had a root fs with a hyphen in the name (fedora/ROOT/Fedora20-Dev);
it was not detected because sed eliminated that line of output from
'zpool list -Ho bootfs'.
Signed-off-by: Evan Susarret <[email protected]>
Signed-off-by: Turbo Fredriksson <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #2196
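The difference between the two filters, sketched against the same listing
described above:

    # Too broad: deletes any line containing a hyphen, including a
    # bootfs such as fedora/ROOT/Fedora20-Dev.
    zpool list -H -o bootfs | sed '/-/d'

    # Correct: deletes only lines that are exactly "-", i.e. pools
    # with no bootfs property set.
    zpool list -H -o bootfs | sed '/^-$/d'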
|
60-zpool.rules was retired some time ago in favor of 69-vdev.rules.
Additionally, there is no guarantee a zpool.cache file exists, so
only install it conditionally upon its existence.
Signed-off-by: Brian Behlendorf <[email protected]>
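A sketch of the conditional install, assuming the stock dracut inst helper
is used from module-setup.sh:

    # Only pull the cache file into the initramfs when it exists on the
    # build host; its absence is not an error.
    [ -f /etc/zfs/zpool.cache ] && inst /etc/zfs/zpool.cache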
|
The standard dracut directory has moved from /usr/share/dracut to
/usr/lib/dracut. To ensure the dracut modules get installed in
the correct location, provide a --with-dracutdir configure option
to set the path.
The default install location has been updated to /usr/lib/dracut,
which is used by more recent versions of Fedora. However, this
default is overridden by the RPM packaging for consistency.
Signed-off-by: Brian Behlendorf <[email protected]>
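Using the new option is straightforward; for example, to install the
modules into the new default location explicitly:

    ./configure --with-dracutdir=/usr/lib/dracut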
|
In the interest of maintaining only one udev helper to give vdevs
user-friendly names, the zpool_id and zpool_layout infrastructure
is being retired. They are superseded by vdev_id, which incorporates
all the previous functionality.
Documentation for the new vdev_id(8) helper and its configuration
file, vdev_id.conf(5), can be found in their respective man pages.
Several useful example files are installed under /etc/zfs/.
/etc/zfs/vdev_id.conf.alias.example
/etc/zfs/vdev_id.conf.multipath.example
/etc/zfs/vdev_id.conf.sas_direct.example
/etc/zfs/vdev_id.conf.sas_switch.example
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #981
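For illustration, a minimal vdev_id.conf in the alias style (the device
paths are placeholders; see vdev_id.conf(5) and the installed example
files for the authoritative syntax):

    # Map stable by-path device names to short aliases which then
    # appear under /dev/disk/by-vdev/.
    alias d1 /dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0
    alias d2 /dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:1:0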
|
Remove all of the generated autotools products from the repository
and update the .gitignore files accordingly.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #718
|
Currently, zvols have a discard granularity set to 0, which suggests to
the upper layer that discard requests of arbitrarily small size and
alignment can be made efficiently.
In practice, however, ZFS does not handle unaligned discard requests
efficiently: indeed, it is unable to free part of a block. It will
write zeros to the specified range instead, which is both useless and
inefficient (see dnode_free_range).
With this patch, zvol block devices expose volblocksize as their discard
granularity, so the upper layer is aware that it's not supposed to send
discard requests smaller than volblocksize.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #862
|
The end_writeback() function was changed by moving the call to
inode_sync_wait() earlier, into evict(). This effectively changes
the ordering of the sync, but it does not impact the details of
the zfs implementation.
However, as part of this change end_writeback() was renamed to
clear_inode() to reflect the new semantics. This change does
impact us, and clear_inode() now maps to end_writeback() for
kernels prior to 3.5.
Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #784
|
The vmtruncate_range() support has been removed from the kernel in
favor of using the fallocate method in the file_operations table.
Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #784
|
The export_operations member ->encode_fh() has been updated to
take both the child and parent inodes. This interface used to
take the child dentry and a bool indicating whether the parent is needed.
NOTE: While updating this code I noticed that we do not currently
cleanly handle the case where we're passed a connectable parent.
This code should be audited to make sure we're doing the right thing.
Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #784
|
Currently, zpool online -e (dynamic vdev expansion) doesn't work on
whole disks because we're invoking ioctl(BLKRRPART) from userspace
while ZFS still has a partition open on the disk, which results in
EBUSY.
This patch moves the BLKRRPART invocation from the zpool utility to the
module. Specifically, this is done just before opening the device in
vdev_disk_open(), which is called inside vdev_reopen(). This requires
jumping through some hoops to get to the disk device from the partition
device, and to make sure we can still open the partition after the
BLKRRPART call.
Note that this new code path is triggered on dynamic vdev expansion
only; other actions, like creating a new pool, are unchanged and still
call BLKRRPART from userspace.
This change also depends on API changes which are available in 2.6.37
and later kernels. The build system has been updated to detect this,
but there is no compatibility mode for older kernels. This means that
online expansion will NOT be available in older kernels. However, it
will still be possible to expand the vdev offline.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #808
|
torvalds/linux@adc0e91ab142abe93f5b0d7980ada8a7676231fe introduced
d_make_root() as a replacement for d_alloc_root(). Further
commits appear to have removed d_alloc_root() from the Linux source
tree. This causes the following failure:
error: implicit declaration of function 'd_alloc_root'
[-Werror=implicit-function-declaration]
To correct this we update the code to use the current d_make_root()
interface for readability. Then we introduce an autotools check
to determine if d_make_root() is available. If it isn't, we
define compatibility logic which uses the older d_alloc_root()
interface.
Signed-off-by: Richard Yao <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #776
|
The mode argument of iops->create()/mkdir()/mknod() was changed from
an 'int' to a 'umode_t'. To prevent a compiler warning, an autoconf
check was added to detect the API change and then correctly set a
zpl_umode_t typedef. There is no functional change.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #701
|
Allow rigorous (and expensive) tx validation to be enabled/disabled
independently from the standard zfs debugging. When enabled, these
checks ensure that all txs are constructed properly and that a dbuf
is never dirtied without taking the correct tx hold.
This checking is particularly helpful when adding new dmu consumers
like Lustre. However, for established consumers such as the zpl,
which has no known outstanding tx construction problems, this is just
overhead.
--enable-debug-dmu-tx  - Enable/disable validation of each tx as
--disable-debug-dmu-tx   it is constructed. By default validation
                         is disabled due to performance concerns.
Signed-off-by: Brian Behlendorf <[email protected]>
|
Add support for the .zfs control directory. This was accomplished
by leveraging as much of the existing ZFS infrastructure as possible
and updating it for Linux as required. The bulk of the core
functionality is now all there with the following limitations.
*) The .zfs/snapshot directory automount support requires a 2.6.37
   or newer kernel. The exception is RHEL6.2, which has backported
   the d_automount patches.
*) Creating/destroying/renaming snapshots with mkdir/rmdir/mv
   in the .zfs/snapshot directory works as expected. However,
   this functionality is only available to root until zfs
   delegations are finished.
   * mkdir - create a snapshot
   * rmdir - destroy a snapshot
   * mv    - rename a snapshot
The following issues are known deficiencies, but we expect them to
be addressed by future commits.
*) Add automount support for kernels older than 2.6.37. This should
   be possible using follow_link(), which is what Linux did before.
*) Accessing the .zfs/snapshot directory via NFS is not yet possible.
   The majority of the ground work for this is complete. However,
   finishing this work will require resolving some lingering
   integration issues with the Linux NFS kernel server.
*) The .zfs/shares directory exists but no further smb functionality
   has yet been implemented.
Contributions-by: Rohan Puri <[email protected]>
Contributions-by: Andrew Barnes <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #173
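As a usage illustration of the mkdir/rmdir/mv mapping described above
(dataset and snapshot names are examples, and root privileges are assumed
until delegation support is finished):

    # Create, rename and destroy a snapshot of tank/home purely through
    # the .zfs control directory.
    mkdir /tank/home/.zfs/snapshot/before-upgrade
    mv    /tank/home/.zfs/snapshot/before-upgrade \
          /tank/home/.zfs/snapshot/before-upgrade-old
    rmdir /tank/home/.zfs/snapshot/before-upgrade-old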
|
Allow a source rpm to be rebuilt with debugging enabled. This
avoids the need to manually modify the spec file. By
default debugging is still largely disabled. To enable specific
debugging features, use the following options with rpmbuild.
'--with debug' - Enables ASSERTs
# For example:
$ rpmbuild --rebuild --with debug zfs-modules-0.6.0-rc6.src.rpm
Additionally, ZFS_CONFIG has been added to zfs_config.h for
packages which build against these headers. This is critical
to ensure both zfs and the dependent package are using the same
prototype and structure definitions.
Signed-off-by: Brian Behlendorf <[email protected]>
|
DISCARD (REQ_DISCARD, BLKDISCARD) is useful for thin provisioning.
It allows ZVOL clients to discard (unmap, trim) block ranges from
a ZVOL, thus optimizing disk space usage by allowing a ZVOL to
shrink instead of only growing.
We can't use zfs_space() or zfs_freesp() here, since these functions
only work on regular files, not volumes. Fortunately we can use the
low-level function dmu_free_long_range() which does exactly what we
want.
Currently the discard operation is not added to the log. That's not
a big deal since losing discard requests cannot result in data
corruption. It would, however, result in disk space usage higher than
it should be. Thus adding log support to zvol_discard() is probably
a good idea for a future improvement.
Signed-off-by: Brian Behlendorf <[email protected]>
|
Currently only the (FALLOC_FL_PUNCH_HOLE) flag combination is
supported, since it's the only one that matches the behavior of
zfs_space(). This makes it pretty much useless in its current
form, but it's a start.
To support other flag combinations we would need to modify
zfs_space() to make it more flexible, or emulate the desired
functionality in zpl_fallocate().
Signed-off-by: Brian Behlendorf <[email protected]>
Issue #334
|
The Linux block device queue subsystem exposes a number of configurable
settings described in Linux block/blk-settings.c. The defaults for these
settings are tuned for hard drives, and are not optimized for ZVOLs. Proper
configuration of these options would allow upper layers (I/O scheduler) to
make better decisions about write merging and ordering.
Detailed rationale:
- max_hw_sectors is set to unlimited (UINT_MAX). zvol_write() is able to
  handle writes of any size, so there's no reason to impose a limit. Let the
  upper layer decide.
- max_segments and max_segment_size are set to unlimited. zvol_write() will
  copy the requests' contents into a dbuf anyway, so the number and size of
  the segments are irrelevant. Let the upper layer decide.
- physical_block_size and io_opt are set to the ZVOL's block size. This
  has the potential to somewhat alleviate issue #361 for ZVOLs, by warning
  the upper layers that writes smaller than the volume's block size will be
  slow.
- The NONROT flag is set to indicate this isn't a rotational device.
  Although the backing zpool might be composed of rotational devices, the
  resulting ZVOL often doesn't exhibit the same behavior due to the COW
  mechanisms used by ZFS. Setting this flag will prevent upper layers from
  making useless decisions (such as reordering writes) based on incorrect
  assumptions about the behavior of the ZVOL.
Signed-off-by: Brian Behlendorf <[email protected]>
|
zvol_write() assumes that the write request must be written to stable storage
if rq_is_sync() is true. Unfortunately, this assumption is incorrect. Indeed,
"sync" does *not* mean what we think it means in the context of the Linux
block layer. This is well explained in linux/fs.h:
WRITE:       A normal async write. Device will be plugged.
WRITE_SYNC:  Synchronous write. Identical to WRITE, but passes down
             the hint that someone will be waiting on this IO
             shortly.
WRITE_FLUSH: Like WRITE_SYNC but with preceding cache flush.
WRITE_FUA:   Like WRITE_SYNC but data is guaranteed to be on
             non-volatile media on completion.
In other words, SYNC does not *mean* that the write must be on stable storage
on completion. It just means that someone is waiting on us to complete the
write request. Thus triggering a ZIL commit for each SYNC write request on a
ZVOL is unnecessary and harmful for performance. To make matters worse, ZVOL
users have no way to express that they actually want data to be written to
stable storage, which means the ZIL is broken for ZVOLs.
The request for stable storage is expressed by the FUA flag, so we must
commit the ZIL after the write if the FUA flag is set. In addition, we must
commit the ZIL before the write if the FLUSH flag is set.
Also, we must inform the block layer that we actually support FLUSH and FUA.
Signed-off-by: Brian Behlendorf <[email protected]>
|
The second argument of sops->show_options() was changed from a
'struct vfsmount *' to a 'struct dentry *'. Add an autoconf check
to detect the API change and then conditionally define the expected
interface. In either case we are only interested in the zfs_sb_t.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #549
|
The Linux 3.1 kernel has introduced the concept of per-filesystem
shrinkers which are directly associated with a super block. Prior
to this change there was one shared global shrinker.
The zfs code relied on being able to call the global shrinker when
the arc_meta_limit was exceeded. This would cause the VFS to drop
references on a fraction of the dentries in the dcache. The ARC
could then safely reclaim the memory used by these entries and
honor the arc_meta_limit. Unfortunately, when per-filesystem
shrinkers were added the old interfaces were made unavailable.
This change adds support to use the new per-filesystem shrinker
interface so we can continue to honor the arc_meta_limit. The
major benefit of the new interface is that we can now target
only the zfs filesystem for dentry and inode pruning. Thus we
can minimize any impact on the caching of other filesystems.
In the context of making this change several other important
issues related to managing the ARC were addressed; they include:
* The dnlc_reduce_cache() function which was called by the ARC
  to drop dentries for the POSIX layer was replaced with a generic
  zfs_prune_t callback. The ZPL layer now registers a callback to
  drop these dentries, removing a layering violation which dates
  back to the Solaris code. This callback can also be used by
  other ARC consumers such as Lustre.
  arc_add_prune_callback()
  arc_remove_prune_callback()
* The arc_reduce_dnlc_percent module option has been changed to
  arc_meta_prune for clarity. The dnlc functions are specific to
  Solaris's VFS and have already been largely eliminated.
  The replacement tunable now represents the number of bytes the
  prune callback will request when invoked.
* Less aggressively invoke the prune callback. We used to call
  this whenever we exceeded the arc_meta_limit, however that's not
  strictly correct since it results in overzealous reclaim of
  dentries and inodes. It is now only called once the arc_meta_limit
  is exceeded and every effort has been made to evict other data from
  the ARC cache.
* More promptly manage exceeding the arc_meta_limit. When reading
  metadata into the cache, if a buffer was unable to be recycled,
  notify the arc_reclaim thread to invoke the required prune.
* Added an arcstat_prune kstat which is incremented when the ARC
  is forced to request that a consumer prune its cache. Remember
  this will only occur when the ARC has no other choice. If it
  can evict buffers safely without invoking the prune callback
  it will.
* This change is also expected to resolve the unexpected collapses
  of the ARC cache. These would occur because, when just the
  arc_meta_limit was exceeded, reclaim pressure would be exerted on
  the arc_c value via arc_shrink(). This effectively shrunk the
  entire cache when really we just needed to reclaim metadata.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #466
Closes #292
|
Directly changing inode->i_nlink is deprecated in Linux 3.2 by commit
SHA: bfe8684869601dacfcb2cd69ef8cfd9045f62170
Use the new set_nlink() kernel function instead.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes: #462
|
Added the necessary build infrastructure for building packages
compatible with the Arch Linux distribution. As such, one can now run:
$ ./configure
$ make pkg    # Alternatively, one can run 'make arch' as well
on an Arch Linux machine to create two binary packages compatible with
the pacman package manager, one for the zfs userland utilities and
another for the zfs kernel modules. The new packages can then be
installed by running:
# pacman -U $package.pkg.tar.xz
In addition, source-only packages suitable for an Arch Linux chroot
environment or remote builder can also be built using the 'sarch' make
rule.
NOTE: Since the source dist tarball is created on the fly from the head
of the build tree, its MD5 hash signature will be continually in flux.
As a result, the md5sum variable was intentionally omitted from the
PKGBUILD files, and the '--skipinteg' makepkg option is used. This may
or may not have any serious security implications, as the source tarball
is not being downloaded from an outside source.
Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #491
|
Update the code to use the bdi_setup_and_register() helper to
simplify the bdi integration code. The updated code now just
registers the bdi during mount and destroys it during unmount.
The only complication is that for 2.6.32 - 2.6.33 kernels the
helper wasn't available, so in these cases the zfs code must
provide it. Luckily the bdi_setup_and_register() function
is trivial.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #367
|
Relying on an /etc/hostid file which is installed in the system
image breaks diskless systems which share an image. Certain
cluster infrastructure such as MPI relies on all nodes having
a unique hostid. However, we still must be careful to ensure
the hostid is synchronized between the initramfs and system
images when using zfs root filesystems.
To accomplish this, the automatically created /etc/hostid file has
been removed from the spl rpm packaging. The /etc/hostid file
is now dynamically created for your initramfs as part of the
dracut install process. This avoids the need to install it in
the actual system images.
This change also resolves the spl_hostid parameter handling
for dracut.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #398
Closes #399
|
The == operator is specific to bash; replace it with the portable
= operator for sh. This bug can prevent correct booting
when using a zfs root pool.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #416
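The distinction, sketched (the variable and value are only illustrative):

    # Bash-only; a strict POSIX /bin/sh such as dash rejects this test.
    [ "$root" == "zfs" ] && echo "zfs root"

    # Portable form that works in any POSIX shell.
    [ "$root" = "zfs" ] && echo "zfs root"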
|
The mount-zfs.sh script incorrectly parsed the results from zpool list.
The correct bootfs attribute was only found on systems with a single
pool, or where the bootable pool's name sorted before all other pool
names. Boot failed when the bootable pool's name came after other pools
(e.g., with 'rpool' and 'mypool', the bootfs on rpool would not be found).
The patch correctly discards pools whose bootfs attribute is blank ('-').
Signed-off-by: Zachary Bedell <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #402
|
It seems that dracut versions 009 through 013 won't boot correctly when
the zfs-dracut rpm package has been installed but 'root=zfs' isn't
used on the boot command line, for example when the package has been
installed on a system that _doesn't_ boot from a zfs filesystem.
Signed-off-by: Jeremy Gill <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #377
|
This ensures that the module-setup.sh script will always be able to
install the required dracut components regardless of how the zfs
package was configured.
Signed-off-by: Brian Behlendorf <[email protected]>
|
This rule does not need to be dracut-specific. Automatically loading
the zfs module stack when a zfs device is detected is usually desirable.
My only concern is that this might cause trouble for large pools where
we don't want to automatically import the pool until all the disks are
available. However, we'll cross that bridge when we come to it.
Signed-off-by: Brian Behlendorf <[email protected]>
|
Run autogen.sh using the same autotools versions as upstream:
* autoconf-2.63
* automake-1.11.1
* libtool-2.2.6b
|
This change moves the default install location for the zfs udev
rules from /etc/udev/ to /lib/udev/. The correct convention is
for rules provided by a package to be installed in /lib/udev/.
The /etc/udev/ directory is reserved for custom rules or local
overrides.
Additionally, this patch cleans up some abuse of the bindir install
location by adding udevdir and udevruledir install directories.
This allows us to revert to the default bin install location. The
udev install directories can be set with the following new options:
--with-udevdir=DIR      install udev helpers [EPREFIX/lib/udev]
--with-udevruledir=DIR  install udev rules [UDEVDIR/rules.d]
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #356
|
For a long time now the kernel has been moving away from using the
pdflush daemon to write 'old' dirty pages to disk. The primary reason
for this is because the pdflush daemon is single threaded and can be
a limiting factor for performance. Since pdflush sequentially walks
the dirty inode list for each super block, any delay in processing can
slow down dirty page writeback for all filesystems.
The replacement for pdflush is called bdi (backing device info). The
bdi system involves creating a per-filesystem control structure, each
with its own private sets of queues to manage writeback. The advantage
is greater parallelism which improves performance and prevents a single
filesystem from slowing writeback to the others.
For a long time both systems co-existed in the kernel, so it wasn't
strictly required to implement the bdi scheme. However, as of
Linux 2.6.36 kernels the pdflush functionality has been retired.
Since ZFS already bypasses the page cache for most I/O this is only
an issue for mmap(2) writes which must go through the page cache.
Even then, adding this missing support for newer kernels was overlooked
because there are other mechanisms which can trigger writeback.
However, there is one critical case where not implementing the bdi
functionality can cause problems. If an application handles a page
fault it can enter the balance_dirty_pages() callpath. This will
result in the application hanging until the number of dirty pages in
the system drops below the dirty ratio.
Without a registered backing_device_info for the filesystem the
dirty pages will not get written out. Thus the application will hang.
As mentioned above, this was less of an issue with older kernels because
pdflush would eventually write out the dirty pages.
This change adds a backing_device_info structure to the zfs_sb_t
which is already allocated per-super block. It is then registered
when the filesystem is mounted and unregistered on unmount. It will
not be registered for mounted snapshots which are read-only. This
change will result in a flush-<pool> thread being dynamically created
and destroyed per-mounted filesystem for writeback.
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #174
|
Signed-off-by: Brian Behlendorf <[email protected]>
Closes #347
|
This fixes a bug that can affect the first reboot after an install
using Dracut. The Dracut module didn't check the return
value from several calls to z* functions. This resulted in
"Using no pools available as root" on boot if the ZFS module
didn't auto-import the pools. It's most likely to happen on the
initial restart after a fresh install and requires juggling in
the Dracut emergency shell to fix.
This patch checks the return codes and output from zpool list and
related functions and correctly falls into the explicit zpool
import code branch if the module didn't import the pool at load.
Signed-off-by: Brian Behlendorf <[email protected]>
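A sketch of the kind of check this describes (the variable names are
illustrative, not the actual mount-zfs.sh code):

    # Ask for the bootfs of any already-imported pool; an empty result
    # means the module did not auto-import the pool at load time.
    zfsbootfs=$(zpool list -H -o bootfs 2>/dev/null | sed '/^-$/d' | head -n1)
    if [ -z "$zfsbootfs" ]; then
        # Fall back to an explicit import before giving up.
        zpool import -N "$rootpool"
    fi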
|
For consistency with the upstream sources and the rest of the
project, use hard tabs instead of soft tabs.
Signed-off-by: Brian Behlendorf <[email protected]>
|
The bootfs example in the dracut documentation was slightly incorrect
because it lacked the required trailing pool argument. Add it.
Signed-off-by: Brian Behlendorf <[email protected]>
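For reference, the corrected shape of such an example: zpool set takes the
pool name as its final argument (the pool and dataset names here are
placeholders):

    # Incomplete: missing the pool argument at the end.
    zpool set bootfs=rpool/ROOT/fedora

    # Correct: the property assignment is followed by the pool name.
    zpool set bootfs=rpool/ROOT/fedora rpool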