-rw-r--r--  man/man5/Makefile.am      2
-rw-r--r--  man/man5/zfs-events.5   749
2 files changed, 750 insertions, 1 deletions
diff --git a/man/man5/Makefile.am b/man/man5/Makefile.am
index fcb73f4a0..4746914c5 100644
--- a/man/man5/Makefile.am
+++ b/man/man5/Makefile.am
@@ -1,4 +1,4 @@
-dist_man_MANS = vdev_id.conf.5 zpool-features.5 zfs-module-parameters.5
+dist_man_MANS = vdev_id.conf.5 zpool-features.5 zfs-module-parameters.5 zfs-events.5
install-data-local:
$(INSTALL) -d -m 0755 "$(DESTDIR)$(mandir)/man5"
diff --git a/man/man5/zfs-events.5 b/man/man5/zfs-events.5
new file mode 100644
index 000000000..4b72484d4
--- /dev/null
+++ b/man/man5/zfs-events.5
@@ -0,0 +1,749 @@
+'\" te
+.\" Copyright (c) 2013 by Turbo Fredriksson <[email protected]>. All rights reserved.
+.\" The contents of this file are subject to the terms of the Common Development
+.\" and Distribution License (the "License"). You may not use this file except
+.\" in compliance with the License. You can obtain a copy of the license at
+.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
+.\"
+.\" See the License for the specific language governing permissions and
+.\" limitations under the License. When distributing Covered Code, include this
+.\" CDDL HEADER in each file and include the License file at
+.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
+.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
+.\" own identifying information:
+.\" Portions Copyright [yyyy] [name of copyright owner]
+.TH ZFS-EVENTS 5 "Feb 6, 2014"
+.SH NAME
+zfs\-events \- Events created by the ZFS filesystem.
+.SH DESCRIPTION
+.sp
+.LP
+Description of the different events generated by the ZFS stack.
+.sp
+Most of these events do not yet have a description, as the events
+generated by ZFS have never been publicly documented. What is here is
+intended as a starting point for documenting all possible events.
+.sp
+To view all events created since the loading of the ZFS infrastructure
+(i.e, "the module"), run
+.P
+.nf
+\fBzpool events\fR
+.fi
+.P
+to get a short list, and
+.P
+.nf
+\fBzpool events -v\fR
+.fi
+.P
+to get full details of the events and what information
+is available about them.
+.sp
+This man page lists the different subclasses of events that can be
+issued. The full event class name would be
+\fIereport.fs.zfs.SUBCLASS\fR, but only the last part (the subclass)
+is listed here.
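+.sp
+For example, the \fBchecksum\fR subclass corresponds to the full
+class name \fIereport.fs.zfs.checksum\fR. One way to look for events
+of a single subclass is to filter the verbose listing (the exact
+output format may vary between versions):
+.P
+.nf
+\fBzpool events -v | grep 'ereport.fs.zfs.checksum'\fR
+.fi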
+
+.SS "EVENTS (SUBCLASS)"
+.sp
+.LP
+
+.sp
+.ne 2
+.na
+\fBchecksum\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBio\fR
+.ad
+.RS 12n
+Issued when there is an I/O error in a vdev in the pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBdata\fR
+.ad
+.RS 12n
+Issued when there have been data errors in the pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBdelay\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBconfig.sync\fR
+.ad
+.RS 12n
+Issued every time a vdev change has been made to the pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzpool\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzpool.destroy\fR
+.ad
+.RS 12n
+Issued when a pool is destroyed.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzpool.export\fR
+.ad
+.RS 12n
+Issued when a pool is exported.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzpool.import\fR
+.ad
+.RS 12n
+Issued when a pool is imported.
+.RE
+
+.sp
+.ne 2
+.na
+\fBzpool.reguid\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.unknown\fR
+.ad
+.RS 12n
+Issued when the vdev is unknown, such as when trying to clear device
+errors on a vdev that has failed or been removed from the system or
+pool and is no longer available.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.open_failed\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.corrupt_data\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.no_replicas\fR
+.ad
+.RS 12n
+Issued when there are no more replicas to sustain the pool.
+This would lead to the pool being \fIDEGRADED\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.bad_guid_sum\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.too_small\fR
+.ad
+.RS 12n
+Issued when the system (kernel) has removed a device, and ZFS
+notices that the device isn't there any more. This is usually
+followed by a \fBprobe_failure\fR event.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.bad_label\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.bad_ashift\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.remove\fR
+.ad
+.RS 12n
+Issued when a vdev is detached from a mirror (or a spare is detached
+from a vdev where it has been used to replace a failed drive - this
+only works if the original drive has been re-added).
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.clear\fR
+.ad
+.RS 12n
+Issued when clearing device errors in a pool, such as when running
+\fBzpool clear\fR on a device in the pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.check\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.spare\fR
+.ad
+.RS 12n
+Issued when a spare has kicked in to replace a failed device.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev.autoexpand\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBio_failure\fR
+.ad
+.RS 12n
+Issued when there is an I/O failure in a vdev in the pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBprobe_failure\fR
+.ad
+.RS 12n
+Issued when a probe fails on a vdev. This would occur if a vdev
+has been removed from the system outside of ZFS (such as when the
+kernel has removed the device).
+.RE
+
+.sp
+.ne 2
+.na
+\fBlog_replay\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBresilver.start\fR
+.ad
+.RS 12n
+Issued when a resilver is started.
+.RE
+
+.sp
+.ne 2
+.na
+\fBresilver.finish\fR
+.ad
+.RS 12n
+Issued when the running resilver has finished.
+.RE
+
+.sp
+.ne 2
+.na
+\fBscrub.start\fR
+.ad
+.RS 12n
+Issued when a scrub is started on a pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBscrub.finish\fR
+.ad
+.RS 12n
+Issued when a pool has finished scrubbing.
+.RE
+
+.sp
+.ne 2
+.na
+\fBbootfs.vdev.attach\fR
+.ad
+.RS 12n
+.RE
+
+.SS "PAYLOAD"
+.sp
+.LP
+This is the payload (data, information) that accompanies an
+event.
+.sp
+For
+.BR zed (8),
+these are set to uppercase and prefixed with \fBZEVENT_\fR.
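+.sp
+As a minimal sketch (not an official ZEDLET shipped with ZFS), a
+script run by
+.BR zed (8)
+could pick up the payload from its environment, assuming zed exports
+these names as environment variables; the script below and the file
+it writes to are purely illustrative:
+.P
+.nf
+#!/bin/sh
+# Log the pool name and vdev path of each event, using the
+# ZEVENT_-prefixed payload names described above.
+echo "pool=${ZEVENT_POOL} vdev=${ZEVENT_VDEV_PATH}" >> /tmp/zfs-events.log
+.fi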
+
+.sp
+.ne 2
+.na
+\fBpool\fR
+.ad
+.RS 12n
+Pool name.
+.RE
+
+.sp
+.ne 2
+.na
+\fBpool_failmode\fR
+.ad
+.RS 12n
+Failmode - \fBwait\fR, \fBcontinue\fR or \fBpanic\fR.
+See
+.BR zpool (8)
+(\fIfailmode\fR property) for more information.
+.RE
+
+.sp
+.ne 2
+.na
+\fBpool_guid\fR
+.ad
+.RS 12n
+The GUID of the pool.
+.RE
+
+.sp
+.ne 2
+.na
+\fBpool_context\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_guid\fR
+.ad
+.RS 12n
+The GUID of the vdev in question (the vdev failing or operated upon with
+\fBzpool clear\fR etc).
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_type\fR
+.ad
+.RS 12n
+Type of vdev - \fBdisk\fR, \fBfile\fR, \fBmirror\fR etc. See
+.BR zpool (8)
+under \fBVirtual Devices\fR for more information on possible values.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_path\fR
+.ad
+.RS 12n
+Full path of the vdev, including any \fI-partX\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_devid\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_fru\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_state\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_ashift\fR
+.ad
+.RS 12n
+The ashift value of the vdev.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_complete_ts\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_delta_ts\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_spare_paths\fR
+.ad
+.RS 12n
+List of spares, including full path and any \fI-partX\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_spare_guids\fR
+.ad
+.RS 12n
+GUID(s) of spares.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_read_errors\fR
+.ad
+.RS 12n
+The number of read errors that have been detected on the vdev.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_write_errors\fR
+.ad
+.RS 12n
+The number of write errors that have been detected on the vdev.
+.RE
+
+.sp
+.ne 2
+.na
+\fBvdev_cksum_errors\fR
+.ad
+.RS 12n
+The number of checksum errors that have been detected on the vdev.
+.RE
+
+.sp
+.ne 2
+.na
+\fBparent_guid\fR
+.ad
+.RS 12n
+GUID of the vdev parent.
+.RE
+
+.sp
+.ne 2
+.na
+\fBparent_type\fR
+.ad
+.RS 12n
+Type of parent. See \fBvdev_type\fR.
+.RE
+
+.sp
+.ne 2
+.na
+\fBparent_path\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBparent_devid\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_objset\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_object\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_level\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_blkid\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_err\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_offset\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_size\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_flags\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_stage\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_pipeline\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_delay\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_timestamp\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_deadline\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBzio_delta\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBprev_state\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBcksum_expected\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBcksum_actual\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBcksum_algorithm\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBcksum_byteswap\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_ranges\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_ranges_min_gap\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_range_sets\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_range_clears\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_set_bits\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_cleared_bits\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_set_histogram\fR
+.ad
+.RS 12n
+.RE
+
+.sp
+.ne 2
+.na
+\fBbad_cleared_histogram\fR
+.ad
+.RS 12n
+.RE
+