'\" te .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. .\" Copyright 2011 Nexenta Systems, Inc. All rights reserved. .\" Copyright (c) 2013 by Delphix. All rights reserved. .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. .\" The contents of this file are subject to the terms of the Common Development .\" and Distribution License (the "License"). You may not use this file except .\" in compliance with the License. You can obtain a copy of the license at .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. .\" .\" See the License for the specific language governing permissions and .\" limitations under the License. When distributing Covered Code, include this .\" CDDL HEADER in each file and include the License file at .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your .\" own identifying information: .\" Portions Copyright [yyyy] [name of copyright owner] .TH zpool 8 "May 11, 2016" "ZFS pool 28, filesystem 5" "System Administration Commands" .SH NAME zpool \- configures ZFS storage pools .SH SYNOPSIS .LP .nf \fBzpool\fR [\fB-?\fR] .fi .LP .nf \fBzpool add\fR [\fB-fgLnP\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIvdev\fR ... .fi .LP .nf \fBzpool attach\fR [\fB-f\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR .fi .LP .nf \fBzpool clear\fR \fIpool\fR [\fIdevice\fR] .fi .LP .nf \fBzpool create\fR [\fB-fnd\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-o\fR feature@\fIfeature=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR] ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] [\fB-t\fR \fItname\fR] \fIpool\fR \fIvdev\fR ... .fi .LP .nf \fBzpool destroy\fR [\fB-f\fR] \fIpool\fR .fi .LP .nf \fBzpool detach\fR \fIpool\fR \fIdevice\fR .fi .LP .nf \fBzpool events\fR [\fB-vHfc\fR] [\fIpool\fR] ... .fi .LP .nf \fBzpool export\fR [\fB-a\fR] [\fB-f\fR] \fIpool\fR ... .fi .LP .nf \fBzpool get\fR [\fB-Hp\fR] [\fB-o\fR \fIfield\fR[,...]] "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ... .fi .LP .nf \fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ... .fi .LP .nf \fBzpool import\fR [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] .fi .LP .nf \fBzpool import\fR [\fB-o\fR \fImntopts\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] [\fB-f\fR] [\fB-m\fR] [\fB-N\fR] [\fB-R\fR \fIroot\fR] [\fB-F\fR [\fB-n\fR] [\fB-X\fR] [\fB-T\fR]] [\fB-s\fR] \fB-a\fR .fi .LP .nf \fBzpool import\fR [\fB-o\fR \fImntopts\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] [\fB-f\fR] [\fB-m\fR] [\fB-R\fR \fIroot\fR] [\fB-F\fR [\fB-n\fR] [\fB-X\fR] [\fB-T\fR]] [\fB-t\fR] [\fB-s\fR] \fIpool\fR | \fIid\fR [\fInewpool\fR] .fi .LP .nf \fBzpool iostat\fR [[[\fB-c\fR \fBSCRIPT\fR] [\fB-lq\fR]] | \fB-rw\fR] [\fB-T\fR \fBd\fR | \fBu\fR] [\fB-ghHLpPvy\fR] [[\fIpool\fR ...]|[\fIpool vdev\fR ...]|[\fIvdev\fR ...]] [\fIinterval\fR[\fIcount\fR]] .fi .LP .nf \fBzpool labelclear\fR [\fB-f\fR] \fIdevice\fR .fi .LP .nf \fBzpool list\fR [\fB-T\fR \fBd\fR | \fBu\fR] [\fB-HgLpPv\fR] [\fB-o\fR \fIproperty\fR[,...]] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]] .fi .LP .nf \fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ... .fi .LP .nf \fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR ... .fi .LP .nf \fBzpool reguid\fR \fIpool\fR .fi .LP .nf \fBzpool reopen\fR \fIpool\fR .fi .LP .nf \fBzpool remove\fR \fIpool\fR \fIdevice\fR ...
.fi .LP .nf \fBzpool replace\fR [\fB-f\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIdevice\fR [\fInew_device\fR] .fi .LP .nf \fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ... .fi .LP .nf \fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR .fi .LP .nf \fBzpool split\fR [\fB-gLnP\fR] [\fB-R\fR \fIaltroot\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fInewpool\fR [\fIdevice\fR ...] .fi .LP .nf \fBzpool status\fR [\fB-c\fR \fBSCRIPT\fR] [\fB-gLPvxD\fR] [\fB-T\fR \fBd\fR | \fBu\fR] [\fIpool\fR] ... [\fIinterval\fR [\fIcount\fR]] .fi .LP .nf \fBzpool upgrade\fR .fi .LP .nf \fBzpool upgrade\fR \fB-v\fR .fi .LP .nf \fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ... .fi .SH DESCRIPTION .sp .LP The \fBzpool\fR command configures \fBZFS\fR storage pools. A storage pool is a collection of devices that provides physical storage and data replication for \fBZFS\fR datasets. .sp .LP All datasets within a storage pool share the same space. See \fBzfs\fR(8) for information on managing datasets. .SS "Virtual Devices (vdevs)" .sp .LP A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported: .sp .ne 2 .na \fB\fBdisk\fR\fR .ad .RS 10n A block device, typically located under \fB/dev\fR. \fBZFS\fR can use individual partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under "/dev"). For example, "sda" is equivalent to "/dev/sda". A whole disk can be specified by omitting the partition designation. When given a whole disk, \fBZFS\fR automatically labels the disk, if necessary. .RE .sp .ne 2 .na \fB\fBfile\fR\fR .ad .RS 10n A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path. .RE .sp .ne 2 .na \fB\fBmirror\fR\fR .ad .RS 10n A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with \fIN\fR disks of size \fIX\fR can hold \fIX\fR bytes and can withstand (\fIN-1\fR) devices failing before data integrity is compromised. .RE .sp .ne 2 .na \fB\fBraidz\fR\fR .ad .br .na \fB\fBraidz1\fR\fR .ad .br .na \fB\fBraidz2\fR\fR .ad .br .na \fB\fBraidz3\fR\fR .ad .RS 10n A variation on \fBRAID-5\fR that allows for better distribution of parity and eliminates the "\fBRAID-5\fR write hole" (in which data and parity become inconsistent after a power loss). Data and parity are striped across all disks within a \fBraidz\fR group. .sp A \fBraidz\fR group can have single-, double-, or triple-parity, meaning that the \fBraidz\fR group can sustain one, two, or three failures, respectively, without losing any data. The \fBraidz1\fR \fBvdev\fR type specifies a single-parity \fBraidz\fR group; the \fBraidz2\fR \fBvdev\fR type specifies a double-parity \fBraidz\fR group; and the \fBraidz3\fR \fBvdev\fR type specifies a triple-parity \fBraidz\fR group. The \fBraidz\fR \fBvdev\fR type is an alias for \fBraidz1\fR. .sp A \fBraidz\fR group with \fIN\fR disks of size \fIX\fR with \fIP\fR parity disks can hold approximately (\fIN-P\fR)*\fIX\fR bytes and can withstand \fIP\fR device(s) failing before data integrity is compromised.
The minimum number of devices in a \fBraidz\fR group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance. .RE .sp .ne 2 .na \fB\fBspare\fR\fR .ad .RS 10n A special pseudo-\fBvdev\fR which keeps track of available hot spares for a pool. For more information, see the "Hot Spares" section. .RE .sp .ne 2 .na \fB\fBlog\fR\fR .ad .RS 10n A separate-intent log device. If more than one log device is specified, then writes are load-balanced between devices. Log devices can be mirrored. However, \fBraidz\fR \fBvdev\fR types are not supported for the intent log. For more information, see the "Intent Log" section. .RE .sp .ne 2 .na \fB\fBcache\fR\fR .ad .RS 10n A device used to cache storage pool data. A cache device cannot be configured as a mirror or \fBraidz\fR group. For more information, see the "Cache Devices" section. .RE .sp .LP Virtual devices cannot be nested, so a mirror or \fBraidz\fR virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed. .sp .LP A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, \fBZFS\fR automatically places data on the newly available devices. .sp .LP Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords "mirror" and "raidz" are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks: .sp .in +2 .nf # \fBzpool create mypool mirror sda sdb mirror sdc sdd\fR .fi .in -2 .sp .SS "Device Failure and Recovery" .sp .LP \fBZFS\fR supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and \fBZFS\fR automatically repairs bad data from a good copy when corruption is detected. .sp .LP In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or \fBraidz\fR groups. While \fBZFS\fR supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable. .sp .LP A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning. .sp .LP The health of the top-level vdev, such as mirror or \fBraidz\fR device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states: .sp .ne 2 .na \fB\fBDEGRADED\fR\fR .ad .RS 12n One or more top-level vdevs is in the degraded state because one or more component devices are offline. Sufficient replicas exist to continue functioning. .sp One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning. The underlying conditions are as follows: .RS +4 .TP .ie t \(bu .el o The number of checksum errors exceeds acceptable levels and the device is degraded as an indication that something may be wrong. 
\fBZFS\fR continues to use the device as necessary. .RE .RS +4 .TP .ie t \(bu .el o The number of I/O errors exceeds acceptable levels. The device could not be marked as faulted because there are insufficient replicas to continue functioning. .RE .RE .sp .ne 2 .na \fB\fBFAULTED\fR\fR .ad .RS 12n One or more top-level vdevs is in the faulted state because one or more component devices are offline. Insufficient replicas exist to continue functioning. .sp One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning. The underlying conditions are as follows: .RS +4 .TP .ie t \(bu .el o The device could be opened, but the contents did not match expected values. .RE .RS +4 .TP .ie t \(bu .el o The number of I/O errors exceeds acceptable levels and the device is faulted to prevent further use of the device. .RE .RE .sp .ne 2 .na \fB\fBOFFLINE\fR\fR .ad .RS 12n The device was explicitly taken offline by the "\fBzpool offline\fR" command. .RE .sp .ne 2 .na \fB\fBONLINE\fR\fR .ad .RS 12n The device is online and functioning. .RE .sp .ne 2 .na \fB\fBREMOVED\fR\fR .ad .RS 12n The device was physically removed while the system was running. Device removal detection is hardware-dependent and may not be supported on all platforms. .RE .sp .ne 2 .na \fB\fBUNAVAIL\fR\fR .ad .RS 12n The device could not be opened. If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its path since the path was never correct in the first place. .RE .sp .LP If a device is removed and later re-attached to the system, \fBZFS\fR attempts to put the device online automatically. Device attach detection is hardware-dependent and might not be supported on all platforms. .SS "Hot Spares" .sp .LP \fBZFS\fR allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" \fBvdev\fR with any number of devices. For example, .sp .in +2 .nf # zpool create pool mirror sda sdb spare sdc sdd .fi .in -2 .sp .sp .LP Spares can be shared across multiple pools, and can be added with the "\fBzpool add\fR" command and removed with the "\fBzpool remove\fR" command. Once a spare replacement is initiated, a new "spare" \fBvdev\fR is created within the configuration that will remain there until the original device is replaced. At this point, the hot spare becomes available again. .sp .LP If a pool has a shared spare that is currently being used, the pool can not be exported since other pools may use this shared spare, which may lead to potential data corruption. .sp .LP An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools. .sp .LP Spares cannot replace log devices. .SS "Intent Log" .sp .LP The \fBZFS\fR Intent Log (\fBZIL\fR) satisfies \fBPOSIX\fR requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. \fBNFS\fR and other applications can also use \fBfsync\fR() to ensure data stability. By default, the intent log is allocated from blocks within the main pool. 
However, it might be possible to get better performance using separate intent log devices such as \fBNVRAM\fR or a dedicated disk. For example: .sp .in +2 .nf \fB# zpool create pool sda sdb log sdc\fR .fi .in -2 .sp .sp .LP Multiple log devices can also be specified, and they can be mirrored. See the EXAMPLES section for an example of mirroring multiple log devices. .sp .LP Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool. Mirrored log devices can be removed by specifying the top-level mirror for the log. .SS "Cache Devices" .sp .LP Devices can be added to a storage pool as "cache devices." These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content. .sp .LP To create a pool with cache devices, specify a "cache" \fBvdev\fR with any number of devices. For example: .sp .in +2 .nf \fB# zpool create pool sda sdb cache sdc sdd\fR .fi .in -2 .sp .sp .LP Cache devices cannot be mirrored or part of a \fBraidz\fR configuration. If a read error is encountered on a cache device, that read \fBI/O\fR is reissued to the original storage pool device, which might be part of a mirrored or \fBraidz\fR configuration. .sp .LP The content of the cache devices is considered volatile, as is the case with other system caches. .SS "Properties" .sp .LP Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. The following are read-only properties: .sp .ne 2 .na \fB\fBavailable\fR\fR .ad .RS 20n Amount of storage available within the pool. This property can also be referred to by its shortened column name, "avail". .RE .sp .ne 2 .na \fB\fBcapacity\fR\fR .ad .RS 20n Percentage of pool space used. This property can also be referred to by its shortened column name, "cap". .RE .sp .ne 2 .na \fB\fBexpandsize\fR\fR .ad .RS 20n Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool. Uninitialized space consists of any space on an EFI-labeled vdev which has not been brought online (i.e. zpool online -e). This space occurs when a LUN is dynamically expanded. .RE .sp .ne 2 .na \fB\fBfragmentation\fR\fR .ad .RS 20n The amount of fragmentation in the pool. .RE .sp .ne 2 .na \fB\fBfree\fR\fR .ad .RS 20n The amount of free space available in the pool. .RE .sp .ne 2 .na \fB\fBfreeing\fR\fR .ad .RS 20n After a file system or snapshot is destroyed, the space it was using is returned to the pool asynchronously. \fB\fBfreeing\fR\fR is the amount of space remaining to be reclaimed. Over time \fB\fBfreeing\fR\fR will decrease while \fB\fBfree\fR\fR increases. .RE .sp .ne 2 .na \fB\fBhealth\fR\fR .ad .RS 20n The current health of the pool. Health can be "\fBONLINE\fR", "\fBDEGRADED\fR", "\fBFAULTED\fR", "\fBOFFLINE\fR", "\fBREMOVED\fR", or "\fBUNAVAIL\fR". .RE .sp .ne 2 .na \fB\fBguid\fR\fR .ad .RS 20n A unique identifier for the pool. .RE .sp .ne 2 .na \fB\fBsize\fR\fR .ad .RS 20n Total size of the storage pool. .RE .sp .ne 2 .na \fB\fBunsupported@\fR\fIfeature_guid\fR\fR .ad .RS 20n .sp Information about unsupported features that are enabled on the pool.
See \fBzpool-features\fR(5) for details. .RE .sp .ne 2 .na \fB\fBused\fR\fR .ad .RS 20n Amount of storage space used within the pool. .RE .sp .LP The space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a \fBraidz\fR configuration depends on the characteristics of the data being written. In addition, \fBZFS\fR reserves some space for internal accounting that the \fBzfs\fR(8) command takes into account, but the \fBzpool\fR command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable. .sp .LP The following property can be set at creation time: .sp .ne 2 .na \fB\fBashift\fR=\fIashift\fR\fR .ad .sp .6 .RS 4n Pool sector size exponent (internally referred to as "ashift"); the pool sector size is 2 raised to this power. Values from 9 to 13, inclusive, are valid; also, the special value 0 (the default) means to auto-detect using the kernel's block layer and a ZFS internal exception list. I/O operations will be aligned to the specified size boundaries. Additionally, the minimum (disk) write size will be set to the specified size, so this represents a space vs. performance trade-off. The typical case for setting this property is when performance is important and the underlying disks use 4KiB sectors but report 512B sectors to the OS (for compatibility reasons); in that case, set \fBashift=12\fR (which is 1<<12 = 4096). .LP For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. Since the property cannot be changed after pool creation, if, in a given pool, you \fIever\fR want to use drives that \fIreport\fR 4KiB sectors, you must set \fBashift=12\fR at pool creation time. .LP Keep in mind that the \fBashift\fR is \fIvdev\fR specific and is not a \fIpool\fR global. This means that when adding new vdevs to an existing pool you may need to specify the \fBashift\fR. .RE .sp .LP The following property can be set at creation time and import time: .sp .ne 2 .na \fB\fBaltroot\fR=(unset) | \fIpath\fR\fR .ad .sp .6 .RS 4n Alternate root directory. If set, this directory is prepended to any mount points within the pool. This can be used when examining an unknown pool where the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid. \fBaltroot\fR is not a persistent property. It is valid only while the system is up. Setting \fBaltroot\fR also defaults \fBcachefile\fR to \fBnone\fR, though this may be overridden using an explicit setting. .RE .sp .LP The following property can only be set at import time: .sp .ne 2 .na \fB\fBreadonly\fR=\fBoff\fR | \fBon\fR\fR .ad .sp .6 .RS 4n If set to \fBon\fR, the pool will be imported in read-only mode: synchronous data in the intent log will not be accessible, properties of the pool cannot be changed, and datasets of the pool can only be mounted read-only. The \fBreadonly\fR property of its datasets will be implicitly set to \fBon\fR. It can also be specified by its column name of \fBrdonly\fR. To write to a read-only pool, an export and import of the pool are required.
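.sp
For example, a pool (here the hypothetical pool \fBtank\fR) could be imported read-only as follows:
.sp
.in +2
.nf
# \fBzpool import -o readonly=on tank\fR
.fi
.in -2
.sp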
.RE .sp .LP The following properties can be set at creation time and import time, and later changed with the \fBzpool set\fR command: .sp .ne 2 .na \fB\fBautoexpand\fR=\fBoff\fR | \fBon\fR\fR .ad .sp .6 .RS 4n Controls automatic pool expansion when the underlying LUN is grown. If set to \fBon\fR, the pool will be resized according to the size of the expanded device. If the device is part of a mirror or \fBraidz\fR, then all devices within that mirror/\fBraidz\fR group must be expanded before the new space is made available to the pool. The default behavior is \fBoff\fR. This property can also be referred to by its shortened column name, \fBexpand\fR. .RE .sp .ne 2 .na \fB\fBautoreplace\fR=\fBoff\fR | \fBon\fR\fR .ad .sp .6 .RS 4n Controls automatic device replacement. If set to "\fBoff\fR", device replacement must be initiated by the administrator by using the "\fBzpool replace\fR" command. If set to "\fBon\fR", any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is "\fBoff\fR". This property can also be referred to by its shortened column name, "replace". Autoreplace can also be used with virtual disks (like device mapper) provided that you use the /dev/disk/by-vdev paths set up by \fBvdev_id.conf\fR. See the \fBvdev_id.conf\fR man page for more details. Autoreplace and autoonline require libudev to be present at build time. If you're using device mapper disks, you must have libdevmapper installed at build time as well. .RE .sp .ne 2 .na \fB\fBbootfs\fR=(unset) | \fIpool\fR/\fIdataset\fR\fR .ad .sp .6 .RS 4n Identifies the default bootable dataset for the root pool. This property is expected to be set mainly by the installation and upgrade programs. Not all Linux distribution boot processes use the \fBbootfs\fR property. .RE .sp .ne 2 .na \fB\fBcachefile\fR=\fBnone\fR | \fIpath\fR\fR .ad .sp .6 .RS 4n Controls where the pool configuration is cached. Discovering all pools on system startup requires a cached copy of the configuration data that is stored on the root file system. All pools in this cache are automatically imported when the system boots. Some environments, such as install and clustering, need to cache this information in a different location so that pools are not automatically imported. Setting this property caches the pool configuration in a different location that can later be imported with "\fBzpool import -c\fR". Setting it to the special value "\fBnone\fR" creates a temporary pool that is never cached, and the special value \fB\&''\fR (empty string) uses the default location. .sp Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a \fBcachefile\fR is exported or destroyed, the file is removed. .RE .sp .ne 2 .na \fB\fBcomment\fR=(unset) | \fB\fItext\fR\fR .ad .sp .6 .RS 4n A text string consisting of printable ASCII characters that will be stored such that it is available even if the pool becomes faulted. An administrator can provide additional information about a pool using this property. .RE .sp .ne 2 .na \fB\fBdedupditto\fR=\fB\fInumber\fR\fR .ad .sp .6 .RS 4n Threshold for the number of block ditto copies. If the reference count for a deduplicated block increases above this number, a new ditto copy of this block is automatically stored.
The default setting is 0, which causes no ditto copies to be created for deduplicated blocks. The minimum valid nonzero setting is 100. .RE .sp .ne 2 .na \fB\fBdelegation\fR=\fBon\fR | \fBoff\fR\fR .ad .sp .6 .RS 4n Controls whether a non-privileged user is granted access based on the dataset permissions defined on the dataset. See \fBzfs\fR(8) for more information on \fBZFS\fR delegated administration. .RE .sp .ne 2 .na \fB\fBfailmode\fR=\fBwait\fR | \fBcontinue\fR | \fBpanic\fR\fR .ad .sp .6 .RS 4n Controls the system behavior in the event of catastrophic pool failure. This condition is typically a result of a loss of connectivity to the underlying storage device(s) or a failure of all devices within the pool. The behavior of such an event is determined as follows: .sp .ne 2 .na \fB\fBwait\fR\fR .ad .RS 12n Blocks all \fBI/O\fR access until the device connectivity is recovered and the errors are cleared. This is the default behavior. .RE .sp .ne 2 .na \fB\fBcontinue\fR\fR .ad .RS 12n Returns \fBEIO\fR to any new write \fBI/O\fR requests but allows reads to any of the remaining healthy devices. Any write requests that have yet to be committed to disk would be blocked. .RE .sp .ne 2 .na \fB\fBpanic\fR\fR .ad .RS 12n Prints out a message to the console and generates a system crash dump. .RE .RE .sp .ne 2 .na \fB\fBfeature@\fR\fIfeature_name\fR=\fBenabled\fR\fR .ad .RS 4n The value of this property is the current state of \fIfeature_name\fR. The only valid value when setting this property is \fBenabled\fR which moves \fIfeature_name\fR to the enabled state. See \fBzpool-features\fR(5) for details on feature states. .RE .sp .ne 2 .na \fB\fBlistsnapshots\fR=on | off\fR .ad .sp .6 .RS 4n Controls whether information about snapshots associated with this pool is output when "\fBzfs list\fR" is run without the \fB-t\fR option. The default value is "off". .sp This property can also be referred to by its shortened name, \fBlistsnaps\fR. .RE .sp .ne 2 .na \fB\fBversion\fR=(unset) | \fIversion\fR\fR .ad .sp .6 .RS 4n The current on-disk version of the pool. This can be increased, but never decreased. The preferred method of updating pools is with the "\fBzpool upgrade\fR" command, though this property can be used when a specific version is needed for backwards compatibility. Once feature flags are enabled on a pool, this property will no longer have a value. .RE .SS "Subcommands" .sp .LP All subcommands that modify state are logged persistently to the pool in their original form. .sp .LP The \fBzpool\fR command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported: .sp .ne 2 .na \fB\fBzpool\fR \fB-?\fR\fR .ad .sp .6 .RS 4n Displays a help message. .RE .sp .ne 2 .na \fB\fBzpool add\fR [\fB-fgLnP\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIvdev\fR ...\fR .ad .sp .6 .RS 4n Adds the specified virtual devices to the given pool. The \fIvdev\fR specification is described in the "Virtual Devices" section. The behavior of the \fB-f\fR option and the device checks performed are described in the "zpool create" subcommand. .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 6n Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner. .RE .sp .ne 2 .na \fB\fB-g\fR\fR .ad .RS 6n Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands. .RE .sp .ne 2 .na \fB\fB-L\fR\fR .ad .RS 6n Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it. .RE .sp .ne 2 .na \fB\fB-n\fR\fR .ad .RS 6n Displays the configuration that would be used without actually adding the \fBvdev\fRs. The actual addition can still fail due to insufficient privileges or device sharing. .RE .sp .ne 2 .na \fB\fB-P\fR\fR .ad .RS 6n Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the \fB-L\fR flag. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR .ad .sp .6 .RS 4n Sets the given pool properties. See the "Properties" section for a list of valid properties that can be set. The only property supported at the moment is \fBashift\fR. \fBDo note\fR that some properties (among them \fBashift\fR) are \fInot\fR inherited from a previous vdev. They are vdev specific, not pool specific. .RE Do not add a disk that is currently configured as a quorum device to a zpool. After a disk is in the pool, that disk can then be configured as a quorum device. .RE .sp .ne 2 .na \fB\fBzpool attach\fR [\fB-f\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIdevice\fR \fInew_device\fR\fR .ad .sp .6 .RS 4n Attaches \fInew_device\fR to an existing \fBzpool\fR device. The existing device cannot be part of a \fBraidz\fR configuration. If \fIdevice\fR is not currently part of a mirrored configuration, \fIdevice\fR automatically transforms into a two-way mirror of \fIdevice\fR and \fInew_device\fR. If \fIdevice\fR is part of a two-way mirror, attaching \fInew_device\fR creates a three-way mirror, and so on. In either case, \fInew_device\fR begins to resilver immediately. .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 6n Forces use of \fInew_device\fR, even if it appears to be in use. Not all devices can be overridden in this manner. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR .ad .sp .6 .RS 4n Sets the given pool properties. See the "Properties" section for a list of valid properties that can be set. The only property supported at the moment is "ashift". .RE .RE .sp .ne 2 .na \fB\fBzpool clear\fR \fIpool\fR [\fIdevice\fR] ...\fR .ad .sp .6 .RS 4n Clears device errors in a pool. If no arguments are specified, all device errors within the pool are cleared. If one or more devices is specified, only those errors associated with the specified device or devices are cleared. .RE .sp .ne 2 .na \fB\fBzpool create\fR [\fB-fnd\fR] [\fB-o\fR \fIproperty=value\fR] ... [\fB-o\fR feature@\fIfeature=value\fR] ... [\fB-O\fR \fIfile-system-property=value\fR] ... [\fB-m\fR \fImountpoint\fR] [\fB-R\fR \fIroot\fR] [\fB-t\fR \fItname\fR] \fIpool\fR \fIvdev\fR ...\fR .ad .sp .6 .RS 4n Creates a new storage pool containing the virtual devices specified on the command line. The pool name must begin with a letter, and can only contain alphanumeric characters as well as underscore ("_"), dash ("-"), period ("."), colon (":"), and space (" "). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are names beginning with the pattern "c[0-9]". The \fBvdev\fR specification is described in the "Virtual Devices" section. .sp The command verifies that each device specified is accessible and not currently in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the dedicated dump device, that prevent a device from ever being used by \fBZFS\fR. Other uses, such as having a preexisting \fBUFS\fR file system, can be overridden with the \fB-f\fR option. .sp The command also checks that the replication strategy for the pool is consistent. An attempt to combine redundant and non-redundant storage in a single pool, or to mix disks and files, results in an error unless \fB-f\fR is specified. The use of differently sized devices within a single \fBraidz\fR or mirror group is also flagged as an error unless \fB-f\fR is specified. .sp Unless the \fB-R\fR option is specified, the default mount point is "/\fIpool\fR". The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the \fB-m\fR option. .sp By default, all supported features are enabled on the new pool unless the \fB-d\fR option is specified. .sp .ne 2 .na \fB\fB-f\fR\fR .ad .sp .6 .RS 4n Forces use of \fBvdev\fRs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner. .RE .sp .ne 2 .na \fB\fB-n\fR\fR .ad .sp .6 .RS 4n Displays the configuration that would be used without actually creating the pool. The actual pool creation can still fail due to insufficient privileges or device sharing. .RE .sp .ne 2 .na \fB\fB-d\fR\fR .ad .sp .6 .RS 4n Do not enable any features on the new pool. Individual features can be enabled by setting their corresponding properties to \fBenabled\fR with the \fB-o\fR option. See \fBzpool-features\fR(5) for details about feature properties. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR [\fB-o\fR \fIproperty=value\fR] ...\fR .ad .sp .6 .RS 4n Sets the given pool properties. See the "Properties" section for a list of valid properties that can be set. .RE .sp .ne 2 .na \fB\fB-o\fR feature@\fIfeature=value\fR [\fB-o\fR feature@\fIfeature=value\fR] ...\fR .ad .sp .6 .RS 4n Sets the given pool feature. See \fBzpool-features\fR(5) for a list of valid features that can be set. .sp Value can be either \fBdisabled\fR or \fBenabled\fR. .RE .sp .ne 2 .na \fB\fB-O\fR \fIfile-system-property=value\fR\fR .ad .br .na \fB[\fB-O\fR \fIfile-system-property=value\fR] ...\fR .ad .sp .6 .RS 4n Sets the given file system properties in the root file system of the pool. See the "Properties" section of \fBzfs\fR(8) for a list of valid properties that can be set. .RE .sp .ne 2 .na \fB\fB-R\fR \fIroot\fR\fR .ad .sp .6 .RS 4n Equivalent to "-o cachefile=none,altroot=\fIroot\fR". .RE .sp .ne 2 .na \fB\fB-m\fR \fImountpoint\fR\fR .ad .sp .6 .RS 4n Sets the mount point for the root dataset. The default mount point is "/\fIpool\fR" or "\fBaltroot\fR/\fIpool\fR" if \fBaltroot\fR is specified. The mount point must be an absolute path, "\fBlegacy\fR", or "\fBnone\fR". For more information on dataset mount points, see \fBzfs\fR(8). .RE .sp .ne 2 .na \fB\fB-t\fR \fItname\fR\fR .ad .sp .6 .RS 4n Sets the in-core pool name to "\fBtname\fR" while the on-disk name will be the name specified as the pool name "\fBpool\fR". This will set the default cachefile property to none. This is intended to handle name space collisions when creating pools for other systems, such as virtual machines or physical machines whose pools live on network block devices. .RE .RE .sp .ne 2 .na \fB\fBzpool destroy\fR [\fB-f\fR] \fIpool\fR\fR .ad .sp .6 .RS 4n Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool. .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 6n Forces any active datasets contained within the pool to be unmounted. .RE .RE .sp .ne 2 .na \fB\fBzpool detach\fR \fIpool\fR \fIdevice\fR\fR .ad .sp .6 .RS 4n Detaches \fIdevice\fR from a mirror. The operation is refused if there are no other valid replicas of the data. If \fIdevice\fR may be re-added to the pool later on, then consider the "\fBzpool offline\fR" command instead. .RE .sp .ne 2 .na \fBzpool events\fR [\fB-vHfc\fR] [\fIpool\fR] ... .ad .sp .6 .RS 4n Lists the events generated by the ZFS kernel modules. See \fBzfs-events\fR(5) for more information about the subclasses and event payloads that can be generated. .sp .ne 2 .na \fB\fB-v\fR\fR .ad .RS 6n Print full details of each event and all information available about it. .RE .sp .ne 2 .na \fB\fB-H\fR\fR .ad .RS 6n Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space. .RE .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 6n Follow mode. .RE .sp .ne 2 .na \fB\fB-c\fR\fR .ad .RS 6n Clear all previous events. .RE .RE .sp .ne 2 .na \fB\fBzpool export\fR [\fB-a\fR] [\fB-f\fR] \fIpool\fR ...\fR .ad .sp .6 .RS 4n Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present. .sp Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used. .sp For pools to be portable, you must give the \fBzpool\fR command whole disks, not just partitions, so that \fBZFS\fR can label the disks with portable \fBEFI\fR labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks. .sp .ne 2 .na \fB\fB-a\fR\fR .ad .RS 6n Exports all pools imported on the system. .RE .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 6n Forcefully unmount all datasets, using the "\fBunmount -f\fR" command. .sp This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption. .RE .RE .sp .ne 2 .na \fB\fBzpool get\fR [\fB-Hp\fR] [\fB-o\fR \fIfield\fR[,...]] "\fIall\fR" | \fIproperty\fR[,...] \fIpool\fR ...\fR .ad .sp .6 .RS 4n Retrieves the given list of properties (or all properties if "\fBall\fR" is used) for the specified storage pool(s). These properties are displayed with the following fields: .sp .in +2
.nf
name          Name of storage pool
property      Property name
value         Property value
source        Property source, either 'default' or 'local'.
.fi
.in -2 .sp See the "Properties" section for more information on the available pool properties. .sp .ne 2 .na \fB\fB-H\fR\fR .ad .RS 6n Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space. .RE .sp .ne 2 .na \fB\fB-p\fR\fR .ad .RS 6n Display numbers in parsable (exact) values. .RE .sp .ne 2 .na \fB\fB-o\fR \fIfield\fR\fR .ad .RS 12n A comma-separated list of columns to display. \fBname,property,value,source\fR is the default value. .RE .RE .sp .ne 2 .na \fB\fBzpool history\fR [\fB-il\fR] [\fIpool\fR] ...\fR .ad .sp .6 .RS 4n Displays the command history of the specified pools or all pools if no pool is specified.
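.sp
For example, the history of a hypothetical pool named \fBtank\fR could be displayed with:
.sp
.in +2
.nf
# \fBzpool history tank\fR
.fi
.in -2
.sp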
.sp .ne 2 .na \fB\fB-i\fR\fR .ad .RS 6n Displays internally logged \fBZFS\fR events in addition to user initiated events. .RE .sp .ne 2 .na \fB\fB-l\fR\fR .ad .RS 6n Displays log records in long format, which, in addition to the standard format, includes the user name, the hostname, and the zone in which the operation was performed. .RE .RE .sp .ne 2 .na \fB\fBzpool import\fR [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR]\fR .ad .sp .6 .RS 4n Lists pools available to import. If the \fB-d\fR option is not specified, this command searches for devices in "/dev". The \fB-d\fR option can be specified multiple times, and all directories are searched. If the device appears to be part of an exported pool, this command displays a summary of the pool with the name of the pool, a numeric identifier, as well as the \fIvdev\fR layout and current health of the device for each device or file. Destroyed pools, pools that were previously destroyed with the "\fBzpool destroy\fR" command, are not listed unless the \fB-D\fR option is specified. .sp The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available. .sp .ne 2 .na \fB\fB-c\fR \fIcachefile\fR\fR .ad .RS 16n Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices. .RE .sp .ne 2 .na \fB\fB-d\fR \fIdir\fR\fR .ad .RS 16n Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times. .RE .sp .ne 2 .na \fB\fB-D\fR\fR .ad .RS 16n Lists destroyed pools only. .RE .RE .sp .ne 2 .na \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] [\fB-f\fR] [\fB-m\fR] [\fB-N\fR] [\fB-R\fR \fIroot\fR] [\fB-F\fR [\fB-n\fR] [\fB-X\fR] [\fB-T\fR]] [\fB-s\fR] \fB-a\fR\fR .ad .sp .6 .RS 4n Imports all pools found in the search directories. Identical to the previous command, except that all pools with a sufficient number of devices available are imported. Destroyed pools, pools that were previously destroyed with the "\fBzpool destroy\fR" command, will not be imported unless the \fB-D\fR option is specified. .sp .ne 2 .na \fB\fB-o\fR \fImntopts\fR\fR .ad .RS 21n Comma-separated list of mount options to use when mounting datasets within the pool. See \fBzfs\fR(8) for a description of dataset properties and mount options. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR\fR .ad .RS 21n Sets the specified property on the imported pool. See the "Properties" section for more information on the available pool properties. .RE .sp .ne 2 .na \fB\fB-c\fR \fIcachefile\fR\fR .ad .RS 21n Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices. .RE .sp .ne 2 .na \fB\fB-d\fR \fIdir\fR\fR .ad .RS 21n Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times. This option is incompatible with the \fB-c\fR option. .RE .sp .ne 2 .na \fB\fB-D\fR\fR .ad .RS 21n Imports destroyed pools only. The \fB-f\fR option is also required. .RE .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 21n Forces import, even if the pool appears to be potentially active. .RE .sp .ne 2 .na \fB\fB-F\fR\fR .ad .RS 21n Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by discarding the last few transactions.
Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported. .RE .sp .ne 2 .na \fB\fB-a\fR\fR .ad .RS 21n Searches for and imports all pools found. .RE .sp .ne 2 .na \fB\fB-m\fR\fR .ad .RS 21n Allows a pool to import when there is a missing log device. .RE .sp .ne 2 .na \fB\fB-R\fR \fIroot\fR\fR .ad .RS 21n Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR" property to "\fIroot\fR". .RE .sp .ne 2 .na \fB\fB-N\fR\fR .ad .RS 21n Import the pool without mounting any file systems. .RE .sp .ne 2 .na \fB\fB-n\fR\fR .ad .RS 21n Used with the \fB-F\fR recovery option. Determines whether a non-importable pool can be made importable again, but does not actually perform the pool recovery. For more details about pool recovery mode, see the \fB-F\fR option, above. .RE .sp .ne 2 .na \fB\fB-X\fR\fR .ad .RS 21n Used with the \fB-F\fR recovery option. Determines whether extreme measures to find a valid txg should take place. This allows the pool to be rolled back to a txg which is no longer guaranteed to be consistent. Pools imported at an inconsistent txg may contain uncorrectable checksum errors. For more details about pool recovery mode, see the \fB-F\fR option, above. \fBWARNING\fR: This option can be extremely hazardous to the health of your pool and should only be used as a last resort. .RE .sp .ne 2 .na \fB\fB-T\fR\fR .ad .RS 21n Specify the txg to use for rollback. Implies \fB-FX\fR. For more details about pool recovery mode, see the \fB-X\fR option, above. \fBWARNING\fR: This option can be extremely hazardous to the health of your pool and should only be used as a last resort. .RE .sp .ne 2 .na \fB\fB-s\fR\fR .ad .RS 21n Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the \fBZPOOL_IMPORT_PATH\fR environment variable. .RE .RE .sp .ne 2 .na \fB\fBzpool import\fR [\fB-o\fR \fImntopts\fR] [\fB-o\fR \fIproperty\fR=\fIvalue\fR] ... [\fB-d\fR \fIdir\fR | \fB-c\fR \fIcachefile\fR] [\fB-D\fR] [\fB-f\fR] [\fB-m\fR] [\fB-R\fR \fIroot\fR] [\fB-F\fR [\fB-n\fR] [\fB-X\fR] [\fB-T\fR]] [\fB-t\fR] [\fB-s\fR] \fIpool\fR | \fIid\fR [\fInewpool\fR]\fR .ad .sp .6 .RS 4n Imports a specific pool. A pool can be identified by its name or the numeric identifier. If \fInewpool\fR is specified, the pool is imported using the name \fInewpool\fR. Otherwise, it is imported with the same name as its exported name. .sp If a device is removed from a system without running "\fBzpool export\fR" first, the device appears as potentially active. It cannot be determined if this was a failed export, or whether the device is really in use from another host. To import a pool in this state, the \fB-f\fR option is required. .sp .ne 2 .na \fB\fB-o\fR \fImntopts\fR\fR .ad .sp .6 .RS 4n Comma-separated list of mount options to use when mounting datasets within the pool. See \fBzfs\fR(8) for a description of dataset properties and mount options. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR\fR .ad .sp .6 .RS 4n Sets the specified property on the imported pool. See the "Properties" section for more information on the available pool properties. .RE .sp .ne 2 .na \fB\fB-c\fR \fIcachefile\fR\fR .ad .sp .6 .RS 4n Reads configuration from the given \fBcachefile\fR that was created with the "\fBcachefile\fR" pool property. This \fBcachefile\fR is used instead of searching for devices.
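.sp
For example, assuming the customary cache file location \fB/etc/zfs/zpool.cache\fR and a hypothetical pool named \fBtank\fR:
.sp
.in +2
.nf
# \fBzpool import -c /etc/zfs/zpool.cache tank\fR
.fi
.in -2
.sp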
.RE .sp .ne 2 .na \fB\fB-d\fR \fIdir\fR\fR .ad .sp .6 .RS 4n Searches for devices or files in \fIdir\fR. The \fB-d\fR option can be specified multiple times. This option is incompatible with the \fB-c\fR option. .RE .sp .ne 2 .na \fB\fB-D\fR\fR .ad .sp .6 .RS 4n Imports a destroyed pool. The \fB-f\fR option is also required. .RE .sp .ne 2 .na \fB\fB-f\fR\fR .ad .sp .6 .RS 4n Forces import, even if the pool appears to be potentially active. .RE .sp .ne 2 .na \fB\fB-F\fR\fR .ad .sp .6 .RS 4n Recovery mode for a non-importable pool. Attempt to return the pool to an importable state by discarding the last few transactions. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost. This option is ignored if the pool is importable or already imported. .RE .sp .ne 2 .na \fB\fB-R\fR \fIroot\fR\fR .ad .sp .6 .RS 4n Sets the "\fBcachefile\fR" property to "\fBnone\fR" and the "\fIaltroot\fR" property to "\fIroot\fR". .RE .sp .ne 2 .na \fB\fB-n\fR\fR .ad .sp .6 .RS 4n Used with the \fB-F\fR recovery option. Determines whether a non-importable pool can be made importable again, but does not actually perform the pool recovery. For more details about pool recovery mode, see the \fB-F\fR option, above. .RE .sp .ne 2 .na \fB\fB-X\fR\fR .ad .sp .6 .RS 4n Used with the \fB-F\fR recovery option. Determines whether extreme measures to find a valid txg should take place. This allows the pool to be rolled back to a txg which is no longer guaranteed to be consistent. Pools imported at an inconsistent txg may contain uncorrectable checksum errors. For more details about pool recovery mode, see the \fB-F\fR option, above. \fBWARNING\fR: This option can be extremely hazardous to the health of your pool and should only be used as a last resort. .RE .sp .ne 2 .na \fB\fB-T\fR\fR .ad .sp .6 .RS 4n Specify the txg to use for rollback. Implies \fB-FX\fR. For more details about pool recovery mode, see the \fB-X\fR option, above. \fBWARNING\fR: This option can be extremely hazardous to the health of your pool and should only be used as a last resort. .RE .sp .ne 2 .na \fB\fB-t\fR\fR .ad .sp .6 .RS 4n Used with "\fBnewpool\fR". Specifies that "\fBnewpool\fR" is temporary. Temporary pool names last until export. Ensures that the original pool name will be used in all label updates and therefore is retained upon export. It will also set -o cachefile=none when not explicitly specified. .RE .sp .ne 2 .na \fB\fB-m\fR\fR .ad .sp .6 .RS 4n Allows a pool to import when there is a missing log device. .RE .sp .ne 2 .na \fB\fB-s\fR\fR .ad .sp .6 .RS 4n Scan using the default search path; the libblkid cache will not be consulted. A custom search path may be specified by setting the \fBZPOOL_IMPORT_PATH\fR environment variable. .RE .RE .sp .ne 2 .na \fB\fBzpool iostat\fR [[[\fB-c\fR \fBSCRIPT\fR] [\fB-lq\fR]] | \fB-rw\fR] [\fB-T\fR \fBd\fR | \fBu\fR] [\fB-ghHLpPvy\fR] [[\fIpool\fR ...]|[\fIpool vdev\fR ...]|[\fIvdev\fR ...]] [\fIinterval\fR[\fIcount\fR]]\fR .ad .sp .6 .RS 4n Displays \fBI/O\fR statistics for the given \fIpool\fRs/\fIvdev\fRs. You can pass in a list of \fIpool\fRs, a \fIpool\fR and list of \fIvdev\fRs in that \fIpool\fR, or a list of any \fIvdev\fRs from any \fIpool\fR. If no items are specified, statistics for every pool in the system are shown. When given an interval, the statistics are printed every \fIinterval\fR seconds until \fBCtrl-C\fR is pressed. If \fIcount\fR is specified, the command exits after \fIcount\fR reports are printed.
The first report printed is always the statistics since boot regardless of whether \fIinterval\fR and \fIcount\fR are passed. However, this behavior can be suppressed with the -y flag. Also note that the units of 'K', 'M', 'G'... that are printed in the report are in base 1024. To get the raw values, use the \fB-p\fR flag. .sp .ne 2 .na \fB\fB-c\fR \fB[SCRIPT1,SCRIPT2,...]\fR .ad .RS 12n Run a script (or scripts) on each vdev and include the output in zpool iostat. .sp The \fB-c\fR option allows you to run script(s) for each vdev and display the output in zpool iostat. For security reasons, an unprivileged user can only execute scripts found in the //zfs/zpool.d directory. However, a privileged user can run \fB-c\fR if they have the ZPOOL_SCRIPTS_AS_ROOT environment variable set. If a script requires the use of a privileged command (like smartctl), then it's recommended you allow the user access to it in /etc/sudoers. For example, to allow user "zfsuser" access to "smartctl -a", add the following to /etc/sudoers: zfsuser ALL=NOPASSWD: /usr/sbin/smartctl -a /dev/sd[a-z]*, NOEXEC: /usr/sbin/smartctl -a /dev/sd[a-z]* .sp If \fB-c\fR is passed without a script name, it prints a list of all scripts. \fB-c\fR also sets verbose mode (\fB-v\fR). Script output should be in the form of "name=value". The column name is set to "name" and the value is set to "value". Multiple lines can be used to output multiple columns. The first line of output not in the "name=value" format is displayed without a column title, and no more output after that is displayed. This can be useful for printing error messages. Blank or NULL values are printed as a '-' to make output awk-able. The following environment variables are set before running each script: .sp \fB$VDEV_PATH\fR: Full path to the vdev. .LP \fB$VDEV_UPATH\fR: "Underlying path" to the vdev. For device mapper, multipath, or partitioned vdevs, \fBVDEV_UPATH\fR is the actual underlying /dev/sd* disk. This can be useful if the command you're running requires a /dev/sd* device. .LP \fB$VDEV_ENC_SYSFS_PATH\fR: The sysfs path to the vdev's enclosure LEDs (if any). .RE .sp .ne 2 .na \fB\fB-T\fR \fBu\fR | \fBd\fR\fR .ad .RS 12n Display a time stamp. .sp Specify \fBu\fR for a printed representation of the internal representation of time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See \fBdate\fR(1). .RE .sp .ne 2 .na \fB\fB-g\fR\fR .ad .RS 12n Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands. .RE .sp .ne 2 .na \fB\fB-H\fR\fR .ad .RS 12n Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space. .RE .sp .ne 2 .na \fB\fB-L\fR\fR .ad .RS 12n Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it. .RE .sp .ne 2 .na \fB\fB-p\fR\fR .ad .RS 12n Display numbers in parsable (exact) values. Time values are in nanoseconds. .RE .sp .ne 2 .na \fB\fB-P\fR\fR .ad .RS 12n Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the \fB-L\fR flag. .RE .sp .ne 2 .na \fB\fB-r\fR\fR .ad .RS 12n Print request size histograms for the leaf ZIOs. This includes histograms of individual ZIOs ("ind") and aggregate ZIOs ("agg"). These stats can be useful for seeing how well the ZFS IO aggregator is working.
Do not confuse these request size stats with the block layer requests; it's possible ZIOs can be broken up before being sent to the block device. .RE .sp .ne 2 .na \fB\fB-v\fR\fR .ad .RS 12n Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within the pool, in addition to the pool-wide statistics. .RE .sp .ne 2 .na \fB\fB-y\fR\fR .ad .RS 12n Omit statistics since boot. Normally the first line of output reports the statistics since boot. This option suppresses that first line of output. .RE .sp .ne 2 .na \fB\fB-w\fR\fR .ad .RS 12n Display latency histograms: .sp .ne 2 .na total_wait: .ad .RS 20n Total IO time (queuing + disk IO time). .RE .ne 2 .na disk_wait: .ad .RS 20n Disk IO time (time reading/writing the disk). .RE .ne 2 .na syncq_wait: .ad .RS 20n Amount of time IO spent in synchronous priority queues. Does not include disk time. .RE .ne 2 .na asyncq_wait: .ad .RS 20n Amount of time IO spent in asynchronous priority queues. Does not include disk time. .RE .ne 2 .na scrub: .ad .RS 20n Amount of time IO spent in scrub queue. Does not include disk time. .RE All histogram buckets are power-of-two sized. The time labels are the end ranges of the buckets, so for example, a 15ns bucket stores latencies from 8-15ns. The last bucket is also a catch-all for latencies higher than the maximum. .RE .sp .ne 2 .na \fB\fB-l\fR\fR .ad .RS 12n Include average latency statistics: .sp .ne 2 .na total_wait: .ad .RS 20n Average total IO time (queuing + disk IO time). .RE .ne 2 .na disk_wait: .ad .RS 20n Average disk IO time (time reading/writing the disk). .RE .ne 2 .na syncq_wait: .ad .RS 20n Average amount of time IO spent in synchronous priority queues. Does not include disk time. .RE .ne 2 .na asyncq_wait: .ad .RS 20n Average amount of time IO spent in asynchronous priority queues. Does not include disk time. .RE .ne 2 .na scrub: .ad .RS 20n Average queuing time in scrub queue. Does not include disk time. .RE .RE .sp .ne 2 .na \fB\fB-q\fR\fR .ad .RS 12n Include active queue statistics. Each priority queue has both pending ("pend") and active ("activ") IOs. Pending IOs are waiting to be issued to the disk, and active IOs have been issued to disk and are waiting for completion. These stats are broken out by priority queue: .sp .ne 2 .na syncq_read/write: .ad .RS 20n Current number of entries in synchronous priority queues. .RE .ne 2 .na asyncq_read/write: .ad .RS 20n Current number of entries in asynchronous priority queues. .RE .ne 2 .na scrubq_read: .ad .RS 20n Current number of entries in scrub queue. .RE All queue statistics are instantaneous measurements of the number of entries in the queues. If you specify an interval, the measurements will be sampled from the end of the interval. .RE .RE .sp .ne 2 .na \fB\fBzpool labelclear\fR [\fB-f\fR] \fIdevice\fR .ad .sp .6 .RS 4n Removes ZFS label information from the specified device. The device must not be part of an active pool configuration. .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 12n Treat exported or foreign devices as inactive. .RE .RE .sp .ne 2 .na \fB\fBzpool list\fR [\fB-T\fR \fBd\fR | \fBu\fR] [\fB-HgLpPv\fR] [\fB-o\fR \fIprops\fR[,...]] [\fIpool\fR] ... [\fIinterval\fR[\fIcount\fR]]\fR .ad .sp .6 .RS 4n Lists the given pools along with a health status and space usage. If no \fIpools\fR are specified, all pools in the system are listed. When given an \fIinterval\fR, the information is printed every \fIinterval\fR seconds until \fBCtrl-C\fR is pressed. 
If \fIcount\fR is specified, the command exits after \fIcount\fR reports are printed. .sp .ne 2 .na \fB\fB-H\fR\fR .ad .RS 12n Scripted mode. Do not display headers, and separate fields by a single tab instead of arbitrary space. .RE .sp .ne 2 .na \fB\fB-g\fR\fR .ad .RS 12n Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands. .RE .sp .ne 2 .na \fB\fB-L\fR\fR .ad .RS 12n Display real paths for vdevs resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it. .RE .sp .ne 2 .na \fB\fB-p\fR\fR .ad .RS 12n Display numbers in parsable (exact) values. .RE .sp .ne 2 .na \fB\fB-P\fR\fR .ad .RS 12n Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the \fB-L\fR flag. .RE .sp .ne 2 .na \fB\fB-T\fR \fBd\fR | \fBu\fR\fR .ad .RS 12n Display a time stamp. .sp Specify \fBu\fR for a printed representation of the internal representation of time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See \fBdate\fR(1). .RE .sp .ne 2 .na \fB\fB-o\fR \fIprops\fR\fR .ad .RS 12n Comma-separated list of properties to display. See the "Properties" section for a list of valid properties. The default list is "name, size, alloc, free, fragmentation, expandsize, capacity, dedupratio, health, altroot". .RE .sp .ne 2 .na \fB\fB-v\fR\fR .ad .RS 12n Verbose statistics. Reports usage statistics for individual \fIvdevs\fR within the pool, in addition to the pool-wide statistics. .RE .RE .sp .ne 2 .na \fB\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...\fR .ad .sp .6 .RS 4n Takes the specified physical device offline. While the \fIdevice\fR is offline, no attempt is made to read or write to the device. .sp This command is not applicable to spares or cache devices. .sp .ne 2 .na \fB\fB-t\fR\fR .ad .RS 6n Temporary. Upon reboot, the specified physical device reverts to its previous state. .RE .RE .sp .ne 2 .na \fB\fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR...\fR .ad .sp .6 .RS 4n Brings the specified physical device online. .sp This command is not applicable to spares or cache devices. .sp .ne 2 .na \fB\fB-e\fR\fR .ad .RS 6n Expand the device to use all available space. If the device is part of a mirror or \fBraidz\fR then all devices must be expanded before the new space will become available to the pool. .RE .RE .sp .ne 2 .na \fB\fBzpool reguid\fR \fIpool\fR .ad .sp .6 .RS 4n Generates a new unique identifier for the pool. You must ensure that all devices in this pool are online and healthy before performing this action. .RE .sp .ne 2 .na \fB\fBzpool reopen\fR \fIpool\fR .ad .sp .6 .RS 4n Reopen all the vdevs associated with the pool. .RE .sp .ne 2 .na \fB\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...\fR .ad .sp .6 .RS 4n Removes the specified device from the pool. This command currently only supports removing hot spares, cache, and log devices. A mirrored log device can be removed by specifying the top-level mirror for the log. Non-log devices that are part of a mirrored configuration can be removed using the \fBzpool detach\fR command. Non-redundant and \fBraidz\fR devices cannot be removed from a pool. .RE .sp .ne 2 .na \fB\fBzpool replace\fR [\fB-f\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIold_device\fR [\fInew_device\fR]\fR .ad .sp .6 .RS 4n Replaces \fIold_device\fR with \fInew_device\fR.
.sp .ne 2 .na \fB\fBzpool offline\fR [\fB-t\fR] \fIpool\fR \fIdevice\fR ...\fR .ad .sp .6 .RS 4n Takes the specified physical device offline. While the \fIdevice\fR is offline, no attempt is made to read or write to the device. .sp This command is not applicable to spares or cache devices. .sp .ne 2 .na \fB\fB-t\fR\fR .ad .RS 6n Temporary. Upon reboot, the specified physical device reverts to its previous state. .RE .RE .sp .ne 2 .na \fB\fBzpool online\fR [\fB-e\fR] \fIpool\fR \fIdevice\fR...\fR .ad .sp .6 .RS 4n Brings the specified physical device online. .sp This command is not applicable to spares or cache devices. .sp .ne 2 .na \fB\fB-e\fR\fR .ad .RS 6n Expand the device to use all available space. If the device is part of a mirror or \fBraidz\fR, then all devices must be expanded before the new space will become available to the pool. .RE .RE .sp .ne 2 .na \fB\fBzpool reguid\fR \fIpool\fR .ad .sp .6 .RS 4n Generates a new unique identifier for the pool. You must ensure that all devices in this pool are online and healthy before performing this action. .RE .sp .ne 2 .na \fB\fBzpool reopen\fR \fIpool\fR .ad .sp .6 .RS 4n Reopens all the vdevs associated with the pool. .RE .sp .ne 2 .na \fB\fBzpool remove\fR \fIpool\fR \fIdevice\fR ...\fR .ad .sp .6 .RS 4n Removes the specified device from the pool. This command currently only supports removing hot spares, cache, and log devices. A mirrored log device can be removed by specifying the top-level mirror for the log. Non-log devices that are part of a mirrored configuration can be removed using the \fBzpool detach\fR command. Non-redundant and \fBraidz\fR devices cannot be removed from a pool. .RE .sp .ne 2 .na \fB\fBzpool replace\fR [\fB-f\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fIold_device\fR [\fInew_device\fR]\fR .ad .sp .6 .RS 4n Replaces \fIold_device\fR with \fInew_device\fR. This is equivalent to attaching \fInew_device\fR, waiting for it to resilver, and then detaching \fIold_device\fR. .sp The size of \fInew_device\fR must be greater than or equal to the minimum size of all the devices in a mirror or \fBraidz\fR configuration. .sp \fInew_device\fR is required if the pool is not redundant. If \fInew_device\fR is not specified, it defaults to \fIold_device\fR. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same \fB/dev\fR path as the old device, even though it is actually a different disk. \fBZFS\fR recognizes this. .sp .ne 2 .na \fB\fB-f\fR\fR .ad .RS 6n Forces use of \fInew_device\fR, even if it appears to be in use. Not all devices can be overridden in this manner. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR .ad .sp .6 .RS 6n Sets the given pool properties. See the "Properties" section for a list of valid properties that can be set. The only property supported at the moment is \fBashift\fR. Do note that some properties (among them \fBashift\fR) are \fInot\fR inherited from a previous vdev; they are vdev-specific, not pool-specific. .RE
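.sp For example, to replace \fIsda\fR with \fIsdb\fR in a pool named \fItank\fR while explicitly giving the new vdev 4 KiB sectors (\fBashift\fR=12), a command of the following form could be used (the device and pool names are illustrative): .sp .in +2 .nf # \fBzpool replace -o ashift=12 tank sda sdb\fR
.fi .in -2 .sp .RE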
.sp .ne 2 .na \fB\fBzpool scrub\fR [\fB-s\fR] \fIpool\fR ...\fR .ad .sp .6 .RS 4n Begins a scrub. The scrub examines all data in the specified pools to verify that it checksums correctly. For replicated (mirror or \fBraidz\fR) devices, \fBZFS\fR automatically repairs any damage discovered during the scrub. The "\fBzpool status\fR" command reports the progress of the scrub and summarizes the results of the scrub upon completion. .sp Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that \fBZFS\fR knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure. .sp Because scrubbing and resilvering are \fBI/O\fR-intensive operations, \fBZFS\fR only allows one at a time. If a scrub is already in progress, the "\fBzpool scrub\fR" command terminates it and starts a new scrub. If a resilver is in progress, \fBZFS\fR does not allow a scrub to be started until the resilver completes. .sp .ne 2 .na \fB\fB-s\fR\fR .ad .RS 6n Stop scrubbing. .RE .RE .sp .ne 2 .na \fB\fBzpool set\fR \fIproperty\fR=\fIvalue\fR \fIpool\fR\fR .ad .sp .6 .RS 4n Sets the given property on the specified pool. See the "Properties" section for more information on what properties can be set and acceptable values. .RE .sp .ne 2 .na \fBzpool split\fR [\fB-gLnP\fR] [\fB-R\fR \fIaltroot\fR] [\fB-o\fR \fIproperty=value\fR] \fIpool\fR \fInewpool\fR [\fIdevice\fR ...] .ad .sp .6 .RS 4n Split devices off \fIpool\fR, creating \fInewpool\fR. All \fBvdev\fRs in \fIpool\fR must be mirrors and the pool must not be in the process of resilvering. At the time of the split, \fInewpool\fR will be a replica of \fIpool\fR. By default, the last device in each mirror is split from \fIpool\fR to create \fInewpool\fR. The optional \fIdevice\fR specification causes the specified device(s) to be included in the new pool; should any devices remain unspecified, the last device in each mirror is used, as it would be by default. .sp .ne 2 .na \fB\fB-g\fR\fR .ad .RS 6n Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands. .RE .sp .ne 2 .na \fB\fB-L\fR\fR .ad .RS 6n Display real paths for vdevs, resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it. .RE .sp .ne 2 .na \fB\fB-n\fR \fR .ad .sp .6 .RS 4n Do a dry run: do not actually perform the split. Print out the expected configuration of \fInewpool\fR. .RE .sp .ne 2 .na \fB\fB-P\fR\fR .ad .RS 6n Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the \fB-L\fR flag. .RE .sp .ne 2 .na \fB\fB-R\fR \fIaltroot\fR \fR .ad .sp .6 .RS 4n Set \fIaltroot\fR for \fInewpool\fR and automatically import it. This can be useful to avoid mountpoint collisions if \fInewpool\fR is imported on the same filesystem as \fIpool\fR. .RE .sp .ne 2 .na \fB\fB-o\fR \fIproperty=value\fR \fR .ad .sp .6 .RS 4n Sets the specified property for \fInewpool\fR. See the "Properties" section for more information on the available pool properties. .RE
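.sp For example, assuming a pool named \fItank\fR made up of mirrored vdevs, the following would print the configuration that a new pool named \fItank2\fR would receive, without actually splitting anything (the pool names are illustrative): .sp .in +2 .nf # \fBzpool split -n tank tank2\fR
.fi .in -2 .sp .RE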
.sp .ne 2 .na \fBzpool status\fR [\fB-c\fR \fB[SCRIPT1,SCRIPT2,...] \fR] [\fB-gLPvxD\fR] [\fB-T\fR d | u] [\fIpool\fR] ... [\fIinterval\fR [\fIcount\fR]] .ad .sp .6 .RS 4n Displays the detailed health status for the given pools. If no \fIpool\fR is specified, then the status of each pool in the system is displayed. For more information on pool and device health, see the "Device Failure and Recovery" section. .sp If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change. .sp .ne 2 .na \fB\fB-c\fR \fB[SCRIPT1,SCRIPT2,...]\fR .ad .RS 12n Run a script (or scripts) on each vdev and include the output in the \fBzpool status\fR output. .sp For security reasons, an unprivileged user can only execute scripts found in the /etc/zfs/zpool.d directory. However, a privileged user can run \fB-c\fR if they have the ZPOOL_SCRIPTS_AS_ROOT environment variable set. If a script requires the use of a privileged command (like smartctl), then it is recommended you allow the user access to it in /etc/sudoers. For example, to allow user "zfsuser" access to "smartctl -a", add the following to /etc/sudoers: zfsuser ALL=NOPASSWD: /usr/sbin/smartctl -a /dev/sd[a-z]*, NOEXEC: /usr/sbin/smartctl -a /dev/sd[a-z]* If \fB-c\fR is passed without a script name, it prints a list of all scripts. Script output should be in the form of "name=value". The column name is set to "name" and the value is set to "value". Multiple lines can be used to output multiple columns. The first line of output not in the "name=value" format is displayed without a column title, and no more output after that is displayed. This can be useful for printing error messages. Blank or NULL values are printed as a '-' to make output awk-able. The following environment variables are set before running each command: .sp \fB$VDEV_PATH\fR: Full path to the vdev. .LP \fB$VDEV_UPATH\fR: "Underlying path" to the vdev. For device mapper, multipath, or partitioned vdevs, \fBVDEV_UPATH\fR is the actual underlying /dev/sd* disk. This can be useful if the command you're running requires a /dev/sd* device. .LP \fB$VDEV_ENC_SYSFS_PATH\fR: The sysfs path to the vdev's enclosure LEDs (if any). .RE .sp .ne 2 .na \fB\fB-g\fR\fR .ad .RS 12n Display vdev GUIDs instead of the normal device names. These GUIDs can be used in place of device names for the zpool detach/offline/remove/replace commands. .RE .sp .ne 2 .na \fB\fB-L\fR\fR .ad .RS 12n Display real paths for vdevs, resolving all symbolic links. This can be used to look up the current block device name regardless of the /dev/disk/ path used to open it. .RE .sp .ne 2 .na \fB\fB-P\fR\fR .ad .RS 12n Display full paths for vdevs instead of only the last component of the path. This can be used in conjunction with the \fB-L\fR flag. .RE .sp .ne 2 .na \fB\fB-v\fR\fR .ad .RS 12n Displays verbose data error information, printing out a complete list of all data errors since the last complete pool scrub. .RE .sp .ne 2 .na \fB\fB-x\fR\fR .ad .RS 12n Only display status for pools that are exhibiting errors or are otherwise unavailable. Warnings about pools not using the latest on-disk format will not be included. .RE .sp .ne 2 .na \fB\fB-D\fR\fR .ad .RS 12n Display a histogram of deduplication statistics, showing the allocated (physically present on disk) and referenced (logically referenced in the pool) block counts and sizes by reference count. .RE .sp .ne 2 .na \fB\fB-T\fR \fBd\fR | \fBu\fR\fR .ad .RS 12n Display a time stamp. .sp Specify \fBu\fR for a printed representation of the internal representation of time. See \fBtime\fR(2). Specify \fBd\fR for standard date format. See \fBdate\fR(1). .RE .RE .sp .ne 2 .na \fB\fBzpool upgrade\fR\fR .ad .sp .6 .RS 4n Displays pools which do not have all supported features enabled and pools formatted using a legacy ZFS version number. These pools can continue to be used, but some features may not be available. Use "\fBzpool upgrade -a\fR" to enable all features on all pools. .RE .sp .ne 2 .na \fB\fBzpool upgrade\fR \fB-v\fR\fR .ad .sp .6 .RS 4n Displays legacy \fBZFS\fR versions supported by the current software. See \fBzpool-features\fR(5) for a description of the feature flags supported by the current software. .RE .sp .ne 2 .na \fB\fBzpool upgrade\fR [\fB-V\fR \fIversion\fR] \fB-a\fR | \fIpool\fR ...\fR .ad .sp .6 .RS 4n Enables all supported features on the given pool. Once this is done, the pool will no longer be accessible on systems that do not support feature flags. See \fBzpool-features\fR(5) for details on compatibility with systems that support feature flags, but do not support all features enabled on the pool. .sp .ne 2 .na \fB\fB-a\fR\fR .ad .RS 14n Enables all supported features on all pools. .RE .sp .ne 2 .na \fB\fB-V\fR \fIversion\fR\fR .ad .RS 14n Upgrade to the specified legacy version. If the \fB-V\fR flag is specified, no features will be enabled on the pool. This option can only be used to increase the version number up to the last supported legacy version number. .RE
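.sp For example, assuming a pool named \fItank\fR is formatted with an older legacy version, the following would raise it only as far as legacy version 28 without enabling feature flags (the pool name and target version are illustrative): .sp .in +2 .nf # \fBzpool upgrade -V 28 tank\fR
.fi .in -2 .sp .RE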
.SH EXAMPLES .LP \fBExample 1 \fRCreating a RAID-Z Storage Pool .sp .LP The following command creates a pool with a single \fBraidz\fR root \fIvdev\fR that consists of six disks. .sp .in +2 .nf # \fBzpool create tank raidz sda sdb sdc sdd sde sdf\fR .fi .in -2 .sp .LP \fBExample 2 \fRCreating a Mirrored Storage Pool .sp .LP The following command creates a pool with two mirrors, where each mirror contains two disks. .sp .in +2 .nf # \fBzpool create tank mirror sda sdb mirror sdc sdd\fR .fi .in -2 .sp .LP \fBExample 3 \fRCreating a ZFS Storage Pool by Using Partitions .sp .LP The following command creates an unmirrored pool using two disk partitions. .sp .in +2 .nf # \fBzpool create tank sda1 sdb2\fR .fi .in -2 .sp .LP \fBExample 4 \fRCreating a ZFS Storage Pool by Using Files .sp .LP The following command creates an unmirrored pool using files. While not recommended, a pool based on files can be useful for experimental purposes. .sp .in +2 .nf # \fBzpool create tank /path/to/file/a /path/to/file/b\fR .fi .in -2 .sp .LP \fBExample 5 \fRAdding a Mirror to a ZFS Storage Pool .sp .LP The following command adds two mirrored disks to the pool \fItank\fR, assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool. .sp .in +2 .nf # \fBzpool add tank mirror sda sdb\fR .fi .in -2 .sp .LP \fBExample 6 \fRListing Available ZFS Storage Pools .sp .LP The following command lists all available pools on the system. In this case, the pool \fIzion\fR is faulted due to a missing device. .sp .LP The results from this command are similar to the following: .sp .in +2 .nf # \fBzpool list\fR
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ   CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G   33%         -   42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G   48%         -   32%  1.00x  ONLINE   -
zion       -      -      -     -         -     -      -  FAULTED  -
.fi .in -2 .sp .LP \fBExample 7 \fRDestroying a ZFS Storage Pool .sp .LP The following command destroys the pool \fItank\fR and any datasets contained within. .sp .in +2 .nf # \fBzpool destroy -f tank\fR .fi .in -2 .sp .LP \fBExample 8 \fRExporting a ZFS Storage Pool .sp .LP The following command exports the devices in pool \fItank\fR so that they can be relocated or later imported. .sp .in +2 .nf # \fBzpool export tank\fR .fi .in -2 .sp .LP \fBExample 9 \fRImporting a ZFS Storage Pool .sp .LP The following command displays available pools, and then imports the pool \fItank\fR for use on the system. .sp .LP The results from this command are similar to the following: .sp .in +2 .nf # \fBzpool import\fR
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# \fBzpool import tank\fR
.fi .in -2 .sp .LP \fBExample 10 \fRUpgrading All ZFS Storage Pools to the Current Version .sp .LP The following command upgrades all ZFS storage pools to the current version of the software. .sp .in +2 .nf # \fBzpool upgrade -a\fR
This system is currently running ZFS pool version 28.
.fi .in -2 .sp .LP \fBExample 11 \fRManaging Hot Spares .sp .LP The following command creates a new pool with an available hot spare: .sp .in +2 .nf # \fBzpool create tank mirror sda sdb spare sdc\fR .fi .in -2 .sp .sp .LP If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command: .sp .in +2 .nf # \fBzpool replace tank sda sdd\fR .fi .in -2 .sp .sp .LP Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following command: .sp .in +2 .nf # \fBzpool remove tank sdc\fR .fi .in -2 .sp .LP \fBExample 12 \fRCreating a ZFS Pool with Mirrored Separate Intent Logs .sp .LP The following command creates a ZFS storage pool consisting of two, two-way mirrors and mirrored log devices: .sp .in +2 .nf # \fBzpool create pool mirror sda sdb mirror sdc sdd log mirror \e
sde sdf\fR .fi .in -2 .sp .LP \fBExample 13 \fRAdding Cache Devices to a ZFS Pool .sp .LP The following command adds two disks for use as cache devices to a ZFS storage pool: .sp .in +2 .nf # \fBzpool add pool cache sdc sdd\fR .fi .in -2 .sp .sp .LP Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the \fBiostat\fR subcommand as follows: .sp .in +2 .nf # \fBzpool iostat -v pool 5\fR .fi .in -2 .sp .LP \fBExample 14 \fRRemoving a Mirrored Log Device .sp .LP The following command removes the mirrored log device \fBmirror-2\fR. .sp .LP Given this configuration: .sp .in +2 .nf
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
.fi .in -2 .sp .sp .LP The command to remove the mirrored log \fBmirror-2\fR is: .sp .in +2 .nf # \fBzpool remove tank mirror-2\fR .fi .in -2 .sp .LP \fBExample 15 \fRDisplaying expanded space on a device .sp .LP The following command displays the detailed information for the \fIdata\fR pool. This pool consists of a single \fIraidz\fR vdev where one of its devices increased its capacity by 10GB. In this example, the pool will not be able to utilize this extra capacity until all the devices under the \fIraidz\fR vdev have been expanded. .sp .in +2 .nf # \fBzpool list -v data\fR
NAME         SIZE  ALLOC   FREE  FRAG  EXPANDSZ   CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G   48%         -   61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G   48%         -
    c1t1d0      -      -      -     -         -
    c1t2d0      -      -      -     -       10G
    c1t3d0      -      -      -     -         -
.fi .in -2 .sp .LP \fBExample 16 \fRRunning commands in zpool status and zpool iostat with -c .sp .LP .sp .in +2 .nf # \fBzpool status -c vendor,model,size,enc\fR
...
NAME        STATE   READ WRITE CKSUM  vendor   model         size  enc
tank        ONLINE     0     0     0
  mirror-0  ONLINE     0     0     0
    U1      ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T  0:0:0:0
    U10     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T  0:0:0:0
    U11     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T  0:0:0:0
    U12     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T  0:0:0:0
    U13     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T  0:0:0:0
    U14     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T  0:0:0:0
.fi .in -2 .sp .in +2 .nf # \fBzpool iostat -vc slaves,locate_led\fR
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write  slaves     locate_led
----------  -----  -----  -----  -----  -----  -----  ---------  ----------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff   0
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw  0
    U11         -      -      0      1   288K  13.3K  sdat sdgx  1
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy  0
    U13         -      -      0      1   128K  13.3K  sdav sdgz  0
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg   0
.fi .in -2 .SH EXIT STATUS .sp .LP The following exit values are returned: .sp .ne 2 .na \fB\fB0\fR\fR .ad .RS 5n Successful completion. .RE .sp .ne 2 .na \fB\fB1\fR\fR .ad .RS 5n An error occurred. .RE .sp .ne 2 .na \fB\fB2\fR\fR .ad .RS 5n Invalid command line options were specified.
.RE .SH "ENVIRONMENT VARIABLES" .TP .B "ZFS_ABORT Cause \fBzpool\fR to dump core on exit for the purposes of running \fB::findleaks\fR. .TP .B "ZPOOL_IMPORT_PATH" The search path for devices or files to use with the pool. This is a colon-separated list of directories in which \fBzpool\fR looks for device nodes and files. Similar to the \fB-d\fR option in \fIzpool import\fR. .TP .B "ZPOOL_VDEV_NAME_GUID" Cause \fBzpool\fR subcommands to output vdev guids by default. This behavior is identical to the \fBzpool status -g\fR command line option. .TP .B "ZPOOL_VDEV_NAME_FOLLOW_LINKS" Cause \fBzpool\fR subcommands to follow links for vdev names by default. This behavior is identical to the \fBzpool status -L\fR command line option. .TP .B "ZPOOL_VDEV_NAME_PATH" Cause \fBzpool\fR subcommands to output full vdev path names by default. This behavior is identical to the \fBzpool status -p\fR command line option. .TP .B "ZFS_VDEV_DEVID_OPT_OUT" Older ZFS on Linux implementations had issues when attempting to display pool config VDEV names if a "devid" NVP value is present in the pool's config. For example, a pool that originated on illumos platform would have a devid value in the config and \fBzpool status\fR would fail when listing the config. This would also be true for future Linux based pools. A pool can be stripped of any "devid" values on import or prevented from adding them on \fBzpool create\fR or \fBzpool add\fR by setting ZFS_VDEV_DEVID_OPT_OUT. .TP .B "ZPOOL_SCRIPTS_AS_ROOT" Allow a privilaged user to run the \fBzpool status/iostat\fR with the \fB-c\fR option. Normally, only unprivilaged users are allowed to run \fB-c\fR. .SH SEE ALSO .sp .LP \fBzfs\fR(8), \fBzpool-features\fR(5), \fBzfs-events\fR(5), \fBzfs-module-parameters\fR(5)