author     Pavel Zakharov <[email protected]>        2016-07-22 10:39:36 -0400
committer  Brian Behlendorf <[email protected]>  2018-05-08 21:35:27 -0700
commit     6cb8e5306d9696d155ae7a808f56c4e46d69b64c (patch)
tree       c5c1f331a6341281fd3e9b7310e1618eab2c005c /module/zfs/zio.c
parent     afd2f7b7117ff8bf23afa70ecae86ec0c1a1461e (diff)
OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery
Some work has been done lately to improve the debuggability of the ZFS pool
load (and import) process. This includes:
7638 Refactor spa_load_impl into several functions
8961 SPA load/import should tell us why it failed
7277 zdb should be able to print zfs_dbgmsg's
Building on that work, a few changes were made to make the import process more
resilient and crash-free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared and, if they did not
match, the load process was aborted and an error was returned.
That approach was a good way to ensure a pool did not get corrupted, but it
made the pool load process needlessly fragile in cases where the vdev
configuration had changed or the userland configuration was outdated. Since the
MOS is stored in 3 copies, the configuration provided from userland does not
have to be perfect in order to read the MOS contents. Hence, a new approach has
been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.
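In rough outline, the two-step load looks something like the standalone C
sketch below. It is only a model: pool_t, pool_config_t,
pool_open_with_config() and pool_read_mos_config() are invented names, and the
real code rebuilds the vdev tree through the spa_load machinery rather than
through helpers like these.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the pool config and the pool itself. */
typedef struct {
	char vdev_guids[8][32];	/* textual GUIDs of the top-level vdevs */
	int nvdevs;
} pool_config_t;

typedef struct {
	pool_config_t config;	/* config the vdev tree was built from */
	bool trust_config;	/* false until the MOS config has been read */
} pool_t;

/* (Re)build the vdev tree from a config and open the vdevs. */
static void
pool_open_with_config(pool_t *pool, const pool_config_t *config, bool trusted)
{
	pool->config = *config;
	pool->trust_config = trusted;
	printf("opened pool with %d top-level vdevs (%s config)\n",
	    config->nvdevs, trusted ? "trusted" : "untrusted");
}

/*
 * Pretend to read the pool's own config out of the MOS; with three copies
 * of the MOS this usually works even when the caller's config is stale,
 * e.g. missing a recently added top-level vdev.
 */
static pool_config_t
pool_read_mos_config(const pool_t *pool)
{
	pool_config_t mos = pool->config;

	strcpy(mos.vdev_guids[mos.nvdevs], "recently-added-vdev");
	mos.nvdevs++;
	return (mos);
}

int
main(void)
{
	pool_config_t userland = {
		.vdev_guids = { "vdev-a", "vdev-b" }, .nvdevs = 2
	};
	pool_t pool;

	/* Step 1: open with the untrusted userland config, writes disabled. */
	pool_open_with_config(&pool, &userland, false);

	/* Step 2: read the real config from the MOS and re-open with it. */
	pool_config_t mos = pool_read_mos_config(&pool);
	pool_open_with_config(&pool, &mos, true);
	return (0);
}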
When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail, we simply avoid issuing reads to the invalid DVAs.
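A simplified, self-contained version of that per-DVA check might look like
the sketch below; the real check is the new zfs_dva_valid() in the diff
further down, which additionally handles hole vdevs and gang blocks.
toy_vdev_t, toy_dva_t and dva_is_valid() are invented names.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal stand-ins for a top-level vdev and a DVA (device virtual address). */
typedef struct {
	bool present;		/* false models a missing top-level vdev */
	uint64_t asize;		/* allocatable size of the vdev */
} toy_vdev_t;

typedef struct {
	uint64_t vdevid;	/* index of the top-level vdev */
	uint64_t offset;	/* offset of the allocation on that vdev */
	uint64_t asize;		/* allocated size */
} toy_dva_t;

/*
 * Return true only if the DVA points at a known, present top-level vdev
 * and the allocation fits inside it.  With an untrusted config, reads to
 * DVAs that fail this check are simply not issued.
 */
static bool
dva_is_valid(const toy_vdev_t *vdevs, uint64_t nvdevs, const toy_dva_t *dva)
{
	if (dva->vdevid >= nvdevs)
		return (false);
	if (!vdevs[dva->vdevid].present)
		return (false);
	if (dva->offset + dva->asize > vdevs[dva->vdevid].asize)
		return (false);
	return (true);
}

int
main(void)
{
	toy_vdev_t vdevs[2] = {
		{ .present = true, .asize = 1ULL << 30 },
		{ .present = false, .asize = 0 },	/* e.g. a missing device */
	};
	toy_dva_t ok = { .vdevid = 0, .offset = 4096, .asize = 8192 };
	toy_dva_t bad = { .vdevid = 1, .offset = 0, .asize = 8192 };

	printf("ok:  %d\n", dva_is_valid(vdevs, 2, &ok));	/* prints 1 */
	printf("bad: %d\n", dva_is_valid(vdevs, 2, &bad));	/* prints 0 */
	return (0);
}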
This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer, since the pool will import even if the config is outdated and did not,
for instance, record a recent device addition.
With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top-level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
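As a rough model of that restriction (invented names, not the actual import
code), the policy amounts to the following check:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy policy check: importing with one or more missing top-level vdevs is
 * only allowed when the import is read-only, since some data loss is
 * almost guaranteed.  Purely illustrative.
 */
static bool
import_allowed(int missing_toplevel_vdevs, bool readonly_import)
{
	if (missing_toplevel_vdevs == 0)
		return (true);
	return (readonly_import);
}

int
main(void)
{
	printf("%d\n", import_allowed(0, false));	/* 1: nothing missing */
	printf("%d\n", import_allowed(1, false));	/* 0: read-write refused */
	printf("%d\n", import_allowed(1, true));	/* 1: read-only allowed */
	return (0);
}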
Porting notes (ZTS):
* Fix 'make dist' target in zpool_import
* The maximum path length allowed by tar is 99 characters. Several
of the new test cases exceeded this limit, resulting in them not
being included in the tarball. Shorten the names slightly.
* Set/get tunables using accessor functions.
* Get last synced txg via the "zfs_txg_history" mechanism.
* Clear zinject handlers in cleanup for import_cache_device_replaced
and import_rewind_device_replaced so that the pool can be
exported if there is an error.
* Increase FILESIZE to 8G in zfs-test.sh to allow for a larger
ext4 file system to be created on ZFS_DISK2. Also, there's
no need to partition ZFS_DISK2 at all. The partitioning had
already been disabled for multipath devices. Among other things,
the partitioning steals some space from the ext4 file system,
makes it difficult to accurately calculate the parameters to
parted and can make some of the tests fail.
* Increase FS_SIZE and FILE_SIZE in the zpool_import test
configuration now that FILESIZE is larger.
* Write more data so that device evacuation takes longer in
a couple of tests.
* Use mkdir -p to avoid errors when the directory already exists.
* Remove use of sudo in import_rewind_config_changed.
Authored by: Pavel Zakharov <[email protected]>
Reviewed by: George Wilson <[email protected]>
Reviewed by: Matthew Ahrens <[email protected]>
Reviewed by: Andrew Stormont <[email protected]>
Approved by: Hans Rosenfeld <[email protected]>
Ported-by: Tim Chase <[email protected]>
Signed-off-by: Tim Chase <[email protected]>
OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459
Diffstat (limited to 'module/zfs/zio.c')
-rw-r--r--  module/zfs/zio.c  49
1 file changed, 45 insertions, 4 deletions
diff --git a/module/zfs/zio.c b/module/zfs/zio.c
index 6822505f1..81ae65c31 100644
--- a/module/zfs/zio.c
+++ b/module/zfs/zio.c
@@ -879,6 +879,13 @@ zfs_blkptr_verify(spa_t *spa, const blkptr_t *bp)
 	}
 
 	/*
+	 * Do not verify individual DVAs if the config is not trusted. This
+	 * will be done once the zio is executed in vdev_mirror_map_alloc.
+	 */
+	if (!spa->spa_trust_config)
+		return;
+
+	/*
 	 * Pool-specific checks.
 	 *
 	 * Note: it would be nice to verify that the blk_birth and
@@ -928,6 +935,36 @@ zfs_blkptr_verify(spa_t *spa, const blkptr_t *bp)
 	}
 }
 
+boolean_t
+zfs_dva_valid(spa_t *spa, const dva_t *dva, const blkptr_t *bp)
+{
+	uint64_t vdevid = DVA_GET_VDEV(dva);
+
+	if (vdevid >= spa->spa_root_vdev->vdev_children)
+		return (B_FALSE);
+
+	vdev_t *vd = spa->spa_root_vdev->vdev_child[vdevid];
+	if (vd == NULL)
+		return (B_FALSE);
+
+	if (vd->vdev_ops == &vdev_hole_ops)
+		return (B_FALSE);
+
+	if (vd->vdev_ops == &vdev_missing_ops) {
+		return (B_FALSE);
+	}
+
+	uint64_t offset = DVA_GET_OFFSET(dva);
+	uint64_t asize = DVA_GET_ASIZE(dva);
+
+	if (BP_IS_GANG(bp))
+		asize = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
+	if (offset + asize > vd->vdev_asize)
+		return (B_FALSE);
+
+	return (B_TRUE);
+}
+
 zio_t *
 zio_read(zio_t *pio, spa_t *spa, const blkptr_t *bp,
     abd_t *data, uint64_t size, zio_done_func_t *done, void *private,
@@ -3473,14 +3510,18 @@ zio_vdev_io_start(zio_t *zio)
 	}
 
 	ASSERT3P(zio->io_logical, !=, zio);
-	if (zio->io_type == ZIO_TYPE_WRITE && zio->io_vd->vdev_removing) {
+	if (zio->io_type == ZIO_TYPE_WRITE) {
+		ASSERT(spa->spa_trust_config);
+
 		/*
 		 * Note: the code can handle other kinds of writes,
 		 * but we don't expect them.
 		 */
-		ASSERT(zio->io_flags &
-		    (ZIO_FLAG_PHYSICAL | ZIO_FLAG_SELF_HEAL |
-		    ZIO_FLAG_RESILVER | ZIO_FLAG_INDUCE_DAMAGE));
+		if (zio->io_vd->vdev_removing) {
+			ASSERT(zio->io_flags &
+			    (ZIO_FLAG_PHYSICAL | ZIO_FLAG_SELF_HEAL |
+			    ZIO_FLAG_RESILVER | ZIO_FLAG_INDUCE_DAMAGE));
+		}
 	}
 
 	align = 1ULL << vd->vdev_top->vdev_ashift;