OpenZFS 9075 - Improve ZFS pool import/load process and corrupted pool recovery

Some work has been done lately to improve the debuggability of the ZFS pool
load (and import) process. This includes:

	7638 Refactor spa_load_impl into several functions
	8961 SPA load/import should tell us why it failed
	7277 zdb should be able to print zfs_dbgmsg's

To iterate on top of that, a few changes were made to make the import process
more resilient and crash-free. One of the first tasks during the
pool load process is to parse a config provided from userland that describes
what devices the pool is composed of. A vdev tree is generated from that config,
and then all the vdevs are opened.
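
For reference, that userland configuration can be inspected with zdb; the
commands below are only a sketch ("tank" and the device directory are
placeholders):

    # Show the configuration zdb finds for the pool (cachefile-based).
    zdb -C tank

    # Same for an exported pool, assembling the config by scanning devices.
    zdb -e -p /dev/disk/by-id -C tank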

The Meta Object Set (MOS) of the pool is accessed, and several metadata objects
that are necessary to load the pool are read. The exact configuration of the
pool is also stored inside the MOS. Since the configuration provided from
userland is external and might not accurately describe the vdev tree
of the pool at the txg that is being loaded, it cannot be relied upon to safely
operate the pool. For that reason, the configuration in the MOS is read early
on. In the past, the two configurations were compared and, if there was a
mismatch, the load process was aborted and an error was returned.

Aborting on a mismatch was a good way to ensure a pool does not get
corrupted; however, it made the pool load process needlessly fragile in cases
where the vdev configuration changed or the userland configuration was
outdated. Since the MOS
is stored in 3 copies, the configuration provided by userland doesn't have to be
perfect in order to read its contents. Hence, a new approach has been adopted:
The pool is first opened with the untrusted userland configuration just so that
the real configuration can be read from the MOS. The trusted MOS configuration
is then used to generate a new vdev tree and the pool is re-opened.

When the pool is opened with an untrusted configuration, writes are disabled
to avoid accidentally damaging it. During reads, some sanity checks are
performed on block pointers to see if each DVA points to a known vdev;
when the configuration is untrusted, instead of panicking the system if those
checks fail, we simply avoid issuing reads to the invalid DVAs.
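
Whether a pool was loaded with a trusted or untrusted config is visible in the
spa_load() debug messages (see the spa_misc.c change below), and the new
spa_load_print_vdev_tree tunable dumps the vdev tree during import. A sketch,
assuming the usual Linux module-parameter and kstat paths and a placeholder
pool name:

    echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
    echo 1 > /sys/module/zfs/parameters/spa_load_print_vdev_tree
    zpool import tank
    grep spa_load /proc/spl/kstat/zfs/dbgmsg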

This new two-step pool load process now allows rewinding pools across
vdev tree changes such as device replacement, addition, etc. Loading a pool
from an external config file in a clustering environment also becomes much
safer now, since the pool will import even if the config is outdated and, for
instance, does not reflect a recent device addition.
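
For example (a sketch; the cachefile path is a placeholder), a clustering
framework can now import from a saved copy of the cachefile even if it has
gone stale:

    zpool import -c /var/cluster/zpool.cache tank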

With this code in place, it became relatively easy to implement a
long-sought-after feature: the ability to import a pool with missing top-level
(i.e. non-redundant) devices. Note that since this almost guarantees some loss
of data, this feature is for now restricted to a read-only import.
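
A hedged usage sketch (Linux module-parameter path assumed, "tank" is a
placeholder): raise the zfs_max_missing_tvds limit documented in the man page
change below, then import read-only with -m to accept the devices reported
as missing:

    echo 1 > /sys/module/zfs/parameters/zfs_max_missing_tvds
    zpool import -o readonly=on -m tank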

Porting notes (ZTS):
* Fix 'make dist' target in zpool_import

* The maximum path length allowed by tar is 99 characters.  Several
  of the new test cases exceeded this limit resulting in them not
  being included in the tarball.  Shorten the names slightly.

* Set/get tunables using accessor functions (see the sketch after these
  notes).

* Get last synced txg via the "zfs_txg_history" mechanism.

* Clear zinject handlers in cleanup for import_cache_device_replaced
  and import_rewind_device_replaced so that the pool can be exported
  if there is an error.

* Increase FILESIZE to 8G in zfs-tests.sh to allow for a larger
  ext4 file system to be created on ZFS_DISK2.  Also, there's
  no need to partition ZFS_DISK2 at all.  The partitioning had
  already been disabled for multipath devices.  Among other things,
  the partitioning steals some space from the ext4 file system,
  makes it difficult to accurately calculate the parameters to
  parted and can make some of the tests fail.

* Increase FS_SIZE and FILE_SIZE in the zpool_import test
  configuration now that FILESIZE is larger.

* Write more data so that device evacuation takes longer in
  a couple of tests.

* Use mkdir -p to avoid errors when the directory already exists.

* Remove use of sudo in import_rewind_config_changed.
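
The two tunable-related notes above can be illustrated with a short ksh
sketch built on the accessor functions used by the new tests below; the
txg-history kstat location is the usual Linux path and is an assumption here:

    typeset saved_timeout=$(get_zfs_txg_timeout)
    set_zfs_txg_timeout 1            # sync txgs more often for the test
    set_zfs_max_missing_tvds 1       # tolerate one missing top-level vdev

    # Last synced txg via the zfs_txg_history mechanism: the txgs kstat has
    # one line per txg; entries in state "C" (committed) have been synced.
    cat /proc/spl/kstat/zfs/$TESTPOOL1/txgs

    set_zfs_txg_timeout $saved_timeout
    set_zfs_max_missing_tvds 0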

Authored by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Andrew Stormont <andyjstormont@gmail.com>
Approved by: Hans Rosenfeld <rosenfeld@grumpf.hope-2000.org>
Ported-by: Tim Chase <tim@chase2k.com>
Signed-off-by: Tim Chase <tim@chase2k.com>

OpenZFS-issue: https://illumos.org/issues/9075
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/619c0123
Closes #7459

@ -1820,6 +1820,10 @@ print_status_config(zpool_handle_t *zhp, status_cbdata_t *cb, const char *name,
(void) printf(gettext("currently in use"));
break;
case VDEV_AUX_CHILDREN_OFFLINE:
(void) printf(gettext("all children offline"));
break;
default:
(void) printf(gettext("corrupted data"));
break;
@ -1919,6 +1923,10 @@ print_import_config(status_cbdata_t *cb, const char *name, nvlist_t *nv,
(void) printf(gettext("currently in use"));
break;
case VDEV_AUX_CHILDREN_OFFLINE:
(void) printf(gettext("all children offline"));
break;
default:
(void) printf(gettext("corrupted data"));
break;
@ -2752,6 +2760,7 @@ zpool_do_import(int argc, char **argv)
idata.guid = searchguid;
idata.cachefile = cachefile;
idata.scan = do_scan;
idata.policy = policy;
pools = zpool_search_import(g_zfs, &idata);


@ -413,6 +413,7 @@ typedef struct importargs {
int unique : 1; /* does 'poolname' already exist? */
int exists : 1; /* set on return if pool already exists */
int scan : 1; /* prefer scanning to libblkid cache */
nvlist_t *policy; /* rewind policy (rewind txg, etc.) */
} importargs_t;
extern nvlist_t *zpool_search_import(libzfs_handle_t *, importargs_t *);


@ -704,6 +704,7 @@ typedef struct zpool_rewind_policy {
#define ZPOOL_CONFIG_VDEV_TOP_ZAP "com.delphix:vdev_zap_top"
#define ZPOOL_CONFIG_VDEV_LEAF_ZAP "com.delphix:vdev_zap_leaf"
#define ZPOOL_CONFIG_HAS_PER_VDEV_ZAPS "com.delphix:has_per_vdev_zaps"
#define ZPOOL_CONFIG_CACHEFILE "cachefile" /* not stored on disk */
#define ZPOOL_CONFIG_MMP_STATE "mmp_state" /* not stored on disk */
#define ZPOOL_CONFIG_MMP_TXG "mmp_txg" /* not stored on disk */
#define ZPOOL_CONFIG_MMP_HOSTNAME "mmp_hostname" /* not stored on disk */
@ -811,6 +812,7 @@ typedef enum vdev_aux {
VDEV_AUX_BAD_ASHIFT, /* vdev ashift is invalid */
VDEV_AUX_EXTERNAL_PERSIST, /* persistent forced fault */
VDEV_AUX_ACTIVE, /* vdev active on a different host */
VDEV_AUX_CHILDREN_OFFLINE, /* all children are offline */
} vdev_aux_t;
/*


@ -410,6 +410,7 @@ typedef enum bp_embedded_type {
#define SPA_BLKPTRSHIFT 7 /* blkptr_t is 128 bytes */
#define SPA_DVAS_PER_BP 3 /* Number of DVAs in a bp */
#define SPA_SYNC_MIN_VDEVS 3 /* min vdevs to update during sync */
/*
* A block is a hole when it has either 1) never been written to, or
@ -1015,11 +1016,16 @@ extern boolean_t spa_has_pending_synctask(spa_t *spa);
extern int spa_maxblocksize(spa_t *spa);
extern int spa_maxdnodesize(spa_t *spa);
extern void zfs_blkptr_verify(spa_t *spa, const blkptr_t *bp);
extern boolean_t zfs_dva_valid(spa_t *spa, const dva_t *dva,
const blkptr_t *bp);
typedef void (*spa_remap_cb_t)(uint64_t vdev, uint64_t offset, uint64_t size,
void *arg);
extern boolean_t spa_remap_blkptr(spa_t *spa, blkptr_t *bp,
spa_remap_cb_t callback, void *arg);
extern uint64_t spa_get_last_removal_txg(spa_t *spa);
extern boolean_t spa_trust_config(spa_t *spa);
extern uint64_t spa_missing_tvds_allowed(spa_t *spa);
extern void spa_set_missing_tvds(spa_t *spa, uint64_t missing);
extern boolean_t spa_multihost(spa_t *spa);
extern unsigned long spa_get_hostid(void);


@ -184,6 +184,15 @@ typedef enum spa_all_vdev_zap_action {
AVZ_ACTION_INITIALIZE
} spa_avz_action_t;
typedef enum spa_config_source {
SPA_CONFIG_SRC_NONE = 0,
SPA_CONFIG_SRC_SCAN, /* scan of path (default: /dev/dsk) */
SPA_CONFIG_SRC_CACHEFILE, /* any cachefile */
SPA_CONFIG_SRC_TRYIMPORT, /* returned from call to tryimport */
SPA_CONFIG_SRC_SPLIT, /* new pool in a pool split */
SPA_CONFIG_SRC_MOS /* MOS, but not always from right txg */
} spa_config_source_t;
struct spa {
/*
* Fields protected by spa_namespace_lock.
@ -202,6 +211,8 @@ struct spa {
uint8_t spa_sync_on; /* sync threads are running */
spa_load_state_t spa_load_state; /* current load operation */
boolean_t spa_indirect_vdevs_loaded; /* mappings loaded? */
boolean_t spa_trust_config; /* do we trust vdev tree? */
spa_config_source_t spa_config_source; /* where config comes from? */
uint64_t spa_import_flags; /* import specific flags */
spa_taskqs_t spa_zio_taskq[ZIO_TYPES][ZIO_TASKQ_TYPES];
dsl_pool_t *spa_dsl_pool;
@ -263,6 +274,8 @@ struct spa {
int spa_async_suspended; /* async tasks suspended */
kcondvar_t spa_async_cv; /* wait for thread_exit() */
uint16_t spa_async_tasks; /* async task mask */
uint64_t spa_missing_tvds; /* unopenable tvds on load */
uint64_t spa_missing_tvds_allowed; /* allow loading spa? */
spa_removing_phys_t spa_removing_phys;
spa_vdev_removal_t *spa_vdev_removal;


@ -48,9 +48,12 @@ typedef enum vdev_dtl_type {
extern int zfs_nocacheflush;
extern void vdev_dbgmsg(vdev_t *vd, const char *fmt, ...);
extern void vdev_dbgmsg_print_tree(vdev_t *, int);
extern int vdev_open(vdev_t *);
extern void vdev_open_children(vdev_t *);
extern int vdev_validate(vdev_t *, boolean_t);
extern int vdev_validate(vdev_t *);
extern int vdev_copy_path_strict(vdev_t *, vdev_t *);
extern void vdev_copy_path_relaxed(vdev_t *, vdev_t *);
extern void vdev_close(vdev_t *);
extern int vdev_create(vdev_t *, uint64_t txg, boolean_t isreplace);
extern void vdev_reopen(vdev_t *);
@ -100,6 +103,7 @@ extern void vdev_scan_stat_init(vdev_t *vd);
extern void vdev_propagate_state(vdev_t *vd);
extern void vdev_set_state(vdev_t *vd, boolean_t isopen, vdev_state_t state,
vdev_aux_t aux);
extern boolean_t vdev_children_are_offline(vdev_t *vd);
extern void vdev_space_update(vdev_t *vd,
int64_t alloc_delta, int64_t defer_delta, int64_t space_delta);
@ -145,7 +149,8 @@ typedef enum vdev_config_flag {
VDEV_CONFIG_SPARE = 1 << 0,
VDEV_CONFIG_L2CACHE = 1 << 1,
VDEV_CONFIG_REMOVING = 1 << 2,
VDEV_CONFIG_MOS = 1 << 3
VDEV_CONFIG_MOS = 1 << 3,
VDEV_CONFIG_MISSING = 1 << 4
} vdev_config_flag_t;
extern void vdev_top_config_generate(spa_t *spa, nvlist_t *config);


@ -437,7 +437,6 @@ extern void vdev_remove_parent(vdev_t *cvd);
/*
* vdev sync load and sync
*/
extern void vdev_load_log_state(vdev_t *nvd, vdev_t *ovd);
extern boolean_t vdev_log_state_valid(vdev_t *vd);
extern int vdev_load(vdev_t *vd);
extern int vdev_dtl_load(vdev_t *vd);


@ -674,6 +674,7 @@ typedef struct callb_cpr {
#define zone_dataset_visible(x, y) (1)
#define INGLOBALZONE(z) (1)
extern uint32_t zone_get_hostid(void *zonep);
extern char *kmem_vasprintf(const char *fmt, va_list adx);
extern char *kmem_asprintf(const char *fmt, ...);


@ -897,7 +897,8 @@ vdev_is_hole(uint64_t *hole_array, uint_t holes, uint_t id)
* return to the user.
*/
static nvlist_t *
get_configs(libzfs_handle_t *hdl, pool_list_t *pl, boolean_t active_ok)
get_configs(libzfs_handle_t *hdl, pool_list_t *pl, boolean_t active_ok,
nvlist_t *policy)
{
pool_entry_t *pe;
vdev_entry_t *ve;
@ -1230,6 +1231,12 @@ get_configs(libzfs_handle_t *hdl, pool_list_t *pl, boolean_t active_ok)
continue;
}
if (policy != NULL) {
if (nvlist_add_nvlist(config, ZPOOL_REWIND_POLICY,
policy) != 0)
goto nomem;
}
if ((nvl = refresh_config(hdl, config)) == NULL) {
nvlist_free(config);
config = NULL;
@ -2080,7 +2087,7 @@ zpool_find_import_impl(libzfs_handle_t *hdl, importargs_t *iarg)
free(cache);
pthread_mutex_destroy(&lock);
ret = get_configs(hdl, &pools, iarg->can_be_active);
ret = get_configs(hdl, &pools, iarg->can_be_active, iarg->policy);
for (pe = pools.pools; pe != NULL; pe = penext) {
penext = pe->pe_next;
@ -2209,6 +2216,14 @@ zpool_find_import_cached(libzfs_handle_t *hdl, const char *cachefile,
if (active)
continue;
if (nvlist_add_string(src, ZPOOL_CONFIG_CACHEFILE,
cachefile) != 0) {
(void) no_memory(hdl);
nvlist_free(raw);
nvlist_free(pools);
return (NULL);
}
if ((dst = refresh_config(hdl, src)) == NULL) {
nvlist_free(raw);
nvlist_free(pools);


@ -1935,8 +1935,9 @@ zpool_import_props(libzfs_handle_t *hdl, nvlist_t *config, const char *newname,
nvlist_lookup_nvlist(nvinfo,
ZPOOL_CONFIG_MISSING_DEVICES, &missing) == 0) {
(void) printf(dgettext(TEXT_DOMAIN,
"The devices below are missing, use "
"'-m' to import the pool anyway:\n"));
"The devices below are missing or "
"corrupted, use '-m' to import the pool "
"anyway:\n"));
print_vdev_tree(hdl, NULL, missing, 2);
(void) printf("\n");
}


@ -297,6 +297,16 @@ rw_tryenter(krwlock_t *rwlp, krw_t rw)
return (0);
}
/* ARGSUSED */
uint32_t
zone_get_hostid(void *zonep)
{
/*
* We're emulating the system's hostid in userland.
*/
return (strtoul(hw_serial, NULL, 10));
}
int
rw_tryupgrade(krwlock_t *rwlp)
{


@ -351,6 +351,18 @@ they operate close to quota or capacity limits.
Default value: \fB24\fR.
.RE
.sp
.ne 2
.na
\fBspa_load_print_vdev_tree\fR (int)
.ad
.RS 12n
Whether to print the vdev tree in the debugging message buffer during pool import.
Use 0 to disable and 1 to enable.
.sp
Default value: \fB0\fR.
.RE
.sp
.ne 2
.na
@ -701,6 +713,18 @@ the code that may use them. A value of \fB0\fR will default to 6000 ms.
Default value: \fB0\fR.
.RE
.sp
.ne 2
.na
\fBzfs_max_missing_tvds\fR (int)
.ad
.RS 12n
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.sp
Default value: \fB0\fR
.RE
.sp
.ne 2
.na

File diff suppressed because it is too large.


@ -393,7 +393,8 @@ void
spa_config_set(spa_t *spa, nvlist_t *config)
{
mutex_enter(&spa->spa_props_lock);
nvlist_free(spa->spa_config);
if (spa->spa_config != NULL && spa->spa_config != config)
nvlist_free(spa->spa_config);
spa->spa_config = config;
mutex_exit(&spa->spa_props_lock);
}


@ -384,7 +384,8 @@ spa_load_failed(spa_t *spa, const char *fmt, ...)
(void) vsnprintf(buf, sizeof (buf), fmt, adx);
va_end(adx);
zfs_dbgmsg("spa_load(%s): FAILED: %s", spa->spa_name, buf);
zfs_dbgmsg("spa_load(%s, config %s): FAILED: %s", spa->spa_name,
spa->spa_trust_config ? "trusted" : "untrusted", buf);
}
/*PRINTFLIKE2*/
@ -398,7 +399,8 @@ spa_load_note(spa_t *spa, const char *fmt, ...)
(void) vsnprintf(buf, sizeof (buf), fmt, adx);
va_end(adx);
zfs_dbgmsg("spa_load(%s): %s", spa->spa_name, buf);
zfs_dbgmsg("spa_load(%s, config %s): %s", spa->spa_name,
spa->spa_trust_config ? "trusted" : "untrusted", buf);
}
/*
@ -637,6 +639,7 @@ spa_add(const char *name, nvlist_t *config, const char *altroot)
spa->spa_load_max_txg = UINT64_MAX;
spa->spa_proc = &p0;
spa->spa_proc_state = SPA_PROC_NONE;
spa->spa_trust_config = B_TRUE;
spa->spa_deadman_synctime = MSEC2NSEC(zfs_deadman_synctime_ms);
spa->spa_deadman_ziotime = MSEC2NSEC(zfs_deadman_ziotime_ms);
@ -2052,7 +2055,7 @@ spa_is_root(spa_t *spa)
boolean_t
spa_writeable(spa_t *spa)
{
return (!!(spa->spa_mode & FWRITE));
return (!!(spa->spa_mode & FWRITE) && spa->spa_trust_config);
}
/*
@ -2233,6 +2236,24 @@ spa_get_hostid(void)
return (myhostid);
}
boolean_t
spa_trust_config(spa_t *spa)
{
return (spa->spa_trust_config);
}
uint64_t
spa_missing_tvds_allowed(spa_t *spa)
{
return (spa->spa_missing_tvds_allowed);
}
void
spa_set_missing_tvds(spa_t *spa, uint64_t missing)
{
spa->spa_missing_tvds = missing;
}
#if defined(_KERNEL) && defined(HAVE_SPL)
#include <linux/mod_compat.h>
@ -2338,6 +2359,9 @@ EXPORT_SYMBOL(spa_is_root);
EXPORT_SYMBOL(spa_writeable);
EXPORT_SYMBOL(spa_mode);
EXPORT_SYMBOL(spa_namespace_lock);
EXPORT_SYMBOL(spa_trust_config);
EXPORT_SYMBOL(spa_missing_tvds_allowed);
EXPORT_SYMBOL(spa_set_missing_tvds);
/* BEGIN CSTYLED */
module_param(zfs_flags, uint, 0644);


@ -74,6 +74,8 @@ unsigned int zfs_checksums_per_second = 20;
*/
int zfs_scan_ignore_errors = 0;
int vdev_validate_skip = B_FALSE;
/*PRINTFLIKE2*/
void
vdev_dbgmsg(vdev_t *vd, const char *fmt, ...)
@ -96,6 +98,57 @@ vdev_dbgmsg(vdev_t *vd, const char *fmt, ...)
}
}
void
vdev_dbgmsg_print_tree(vdev_t *vd, int indent)
{
char state[20];
if (vd->vdev_ishole || vd->vdev_ops == &vdev_missing_ops) {
zfs_dbgmsg("%*svdev %u: %s", indent, "", vd->vdev_id,
vd->vdev_ops->vdev_op_type);
return;
}
switch (vd->vdev_state) {
case VDEV_STATE_UNKNOWN:
(void) snprintf(state, sizeof (state), "unknown");
break;
case VDEV_STATE_CLOSED:
(void) snprintf(state, sizeof (state), "closed");
break;
case VDEV_STATE_OFFLINE:
(void) snprintf(state, sizeof (state), "offline");
break;
case VDEV_STATE_REMOVED:
(void) snprintf(state, sizeof (state), "removed");
break;
case VDEV_STATE_CANT_OPEN:
(void) snprintf(state, sizeof (state), "can't open");
break;
case VDEV_STATE_FAULTED:
(void) snprintf(state, sizeof (state), "faulted");
break;
case VDEV_STATE_DEGRADED:
(void) snprintf(state, sizeof (state), "degraded");
break;
case VDEV_STATE_HEALTHY:
(void) snprintf(state, sizeof (state), "healthy");
break;
default:
(void) snprintf(state, sizeof (state), "<state %u>",
(uint_t)vd->vdev_state);
}
zfs_dbgmsg("%*svdev %u: %s%s, guid: %llu, path: %s, %s", indent,
"", vd->vdev_id, vd->vdev_ops->vdev_op_type,
vd->vdev_islog ? " (log)" : "",
(u_longlong_t)vd->vdev_guid,
vd->vdev_path ? vd->vdev_path : "N/A", state);
for (uint64_t i = 0; i < vd->vdev_children; i++)
vdev_dbgmsg_print_tree(vd->vdev_child[i], indent + 2);
}
/*
* Virtual device management.
*/
@ -1424,8 +1477,13 @@ vdev_open(vdev_t *vd)
vd->vdev_stat.vs_aux != VDEV_AUX_OPEN_FAILED)
vd->vdev_removed = B_FALSE;
vdev_set_state(vd, B_TRUE, VDEV_STATE_CANT_OPEN,
vd->vdev_stat.vs_aux);
if (vd->vdev_stat.vs_aux == VDEV_AUX_CHILDREN_OFFLINE) {
vdev_set_state(vd, B_TRUE, VDEV_STATE_OFFLINE,
vd->vdev_stat.vs_aux);
} else {
vdev_set_state(vd, B_TRUE, VDEV_STATE_CANT_OPEN,
vd->vdev_stat.vs_aux);
}
return (error);
}
@ -1596,29 +1654,29 @@ vdev_open(vdev_t *vd)
/*
* Called once the vdevs are all opened, this routine validates the label
* contents. This needs to be done before vdev_load() so that we don't
* contents. This needs to be done before vdev_load() so that we don't
* inadvertently do repair I/Os to the wrong device.
*
* If 'strict' is false ignore the spa guid check. This is necessary because
* if the machine crashed during a re-guid the new guid might have been written
* to all of the vdev labels, but not the cached config. The strict check
* will be performed when the pool is opened again using the mos config.
*
* This function will only return failure if one of the vdevs indicates that it
* has since been destroyed or exported. This is only possible if
* /etc/zfs/zpool.cache was readonly at the time. Otherwise, the vdev state
* will be updated but the function will return 0.
*/
int
vdev_validate(vdev_t *vd, boolean_t strict)
vdev_validate(vdev_t *vd)
{
spa_t *spa = vd->vdev_spa;
nvlist_t *label;
uint64_t guid = 0, top_guid;
uint64_t guid = 0, aux_guid = 0, top_guid;
uint64_t state;
nvlist_t *nvl;
uint64_t txg;
for (int c = 0; c < vd->vdev_children; c++)
if (vdev_validate(vd->vdev_child[c], strict) != 0)
if (vdev_validate_skip)
return (0);
for (uint64_t c = 0; c < vd->vdev_children; c++)
if (vdev_validate(vd->vdev_child[c]) != 0)
return (SET_ERROR(EBADF));
/*
@ -1626,115 +1684,276 @@ vdev_validate(vdev_t *vd, boolean_t strict)
* any further validation. Otherwise, label I/O will fail and we will
* overwrite the previous state.
*/
if (vd->vdev_ops->vdev_op_leaf && vdev_readable(vd)) {
uint64_t aux_guid = 0;
nvlist_t *nvl;
uint64_t txg = spa_last_synced_txg(spa) != 0 ?
spa_last_synced_txg(spa) : -1ULL;
if (!vd->vdev_ops->vdev_op_leaf || !vdev_readable(vd))
return (0);
if ((label = vdev_label_read_config(vd, txg)) == NULL) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_BAD_LABEL);
vdev_dbgmsg(vd, "vdev_validate: failed reading config");
return (0);
}
/*
* If we are performing an extreme rewind, we allow for a label that
* was modified at a point after the current txg.
*/
if (spa->spa_extreme_rewind || spa_last_synced_txg(spa) == 0)
txg = UINT64_MAX;
else
txg = spa_last_synced_txg(spa);
/*
* Determine if this vdev has been split off into another
* pool. If so, then refuse to open it.
*/
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_SPLIT_GUID,
&aux_guid) == 0 && aux_guid == spa_guid(spa)) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_SPLIT_POOL);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: vdev split into other "
"pool");
return (0);
}
if (strict && (nvlist_lookup_uint64(label,
ZPOOL_CONFIG_POOL_GUID, &guid) != 0 ||
guid != spa_guid(spa))) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: vdev label pool_guid "
"doesn't match config (%llu != %llu)",
(u_longlong_t)guid,
(u_longlong_t)spa_guid(spa));
return (0);
}
if (nvlist_lookup_nvlist(label, ZPOOL_CONFIG_VDEV_TREE, &nvl)
!= 0 || nvlist_lookup_uint64(nvl, ZPOOL_CONFIG_ORIG_GUID,
&aux_guid) != 0)
aux_guid = 0;
/*
* If this vdev just became a top-level vdev because its
* sibling was detached, it will have adopted the parent's
* vdev guid -- but the label may or may not be on disk yet.
* Fortunately, either version of the label will have the
* same top guid, so if we're a top-level vdev, we can
* safely compare to that instead.
*
* If we split this vdev off instead, then we also check the
* original pool's guid. We don't want to consider the vdev
* corrupt if it is partway through a split operation.
*/
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_GUID,
&guid) != 0 ||
nvlist_lookup_uint64(label, ZPOOL_CONFIG_TOP_GUID,
&top_guid) != 0 ||
((vd->vdev_guid != guid && vd->vdev_guid != aux_guid) &&
(vd->vdev_guid != top_guid || vd != vd->vdev_top))) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: config guid doesn't "
"match label guid (%llu != %llu)",
(u_longlong_t)vd->vdev_guid, (u_longlong_t)guid);
return (0);
}
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_STATE,
&state) != 0) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: '%s' missing",
ZPOOL_CONFIG_POOL_STATE);
return (0);
}
nvlist_free(label);
/*
* If this is a verbatim import, no need to check the
* state of the pool.
*/
if (!(spa->spa_import_flags & ZFS_IMPORT_VERBATIM) &&
spa_load_state(spa) == SPA_LOAD_OPEN &&
state != POOL_STATE_ACTIVE) {
vdev_dbgmsg(vd, "vdev_validate: invalid pool state "
"(%llu) for spa %s", (u_longlong_t)state,
spa->spa_name);
return (SET_ERROR(EBADF));
}
/*
* If we were able to open and validate a vdev that was
* previously marked permanently unavailable, clear that state
* now.
*/
if (vd->vdev_not_present)
vd->vdev_not_present = 0;
if ((label = vdev_label_read_config(vd, txg)) == NULL) {
vdev_set_state(vd, B_TRUE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_BAD_LABEL);
vdev_dbgmsg(vd, "vdev_validate: failed reading config");
return (0);
}
/*
* Determine if this vdev has been split off into another
* pool. If so, then refuse to open it.
*/
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_SPLIT_GUID,
&aux_guid) == 0 && aux_guid == spa_guid(spa)) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_SPLIT_POOL);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: vdev split into other pool");
return (0);
}
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_GUID, &guid) != 0) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: '%s' missing from label",
ZPOOL_CONFIG_POOL_GUID);
return (0);
}
/*
* If config is not trusted then ignore the spa guid check. This is
* necessary because if the machine crashed during a re-guid the new
* guid might have been written to all of the vdev labels, but not the
* cached config. The check will be performed again once we have the
* trusted config from the MOS.
*/
if (spa->spa_trust_config && guid != spa_guid(spa)) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: vdev label pool_guid doesn't "
"match config (%llu != %llu)", (u_longlong_t)guid,
(u_longlong_t)spa_guid(spa));
return (0);
}
if (nvlist_lookup_nvlist(label, ZPOOL_CONFIG_VDEV_TREE, &nvl)
!= 0 || nvlist_lookup_uint64(nvl, ZPOOL_CONFIG_ORIG_GUID,
&aux_guid) != 0)
aux_guid = 0;
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_GUID, &guid) != 0) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: '%s' missing from label",
ZPOOL_CONFIG_GUID);
return (0);
}
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_TOP_GUID, &top_guid)
!= 0) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: '%s' missing from label",
ZPOOL_CONFIG_TOP_GUID);
return (0);
}
/*
* If this vdev just became a top-level vdev because its sibling was
* detached, it will have adopted the parent's vdev guid -- but the
* label may or may not be on disk yet. Fortunately, either version
* of the label will have the same top guid, so if we're a top-level
* vdev, we can safely compare to that instead.
* However, if the config comes from a cachefile that failed to update
* after the detach, a top-level vdev will appear as a non top-level
* vdev in the config. Also relax the constraints if we perform an
* extreme rewind.
*
* If we split this vdev off instead, then we also check the
* original pool's guid. We don't want to consider the vdev
* corrupt if it is partway through a split operation.
*/
if (vd->vdev_guid != guid && vd->vdev_guid != aux_guid) {
boolean_t mismatch = B_FALSE;
if (spa->spa_trust_config && !spa->spa_extreme_rewind) {
if (vd != vd->vdev_top || vd->vdev_guid != top_guid)
mismatch = B_TRUE;
} else {
if (vd->vdev_guid != top_guid &&
vd->vdev_top->vdev_guid != guid)
mismatch = B_TRUE;
}
if (mismatch) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: config guid "
"doesn't match label guid");
vdev_dbgmsg(vd, "CONFIG: guid %llu, top_guid %llu",
(u_longlong_t)vd->vdev_guid,
(u_longlong_t)vd->vdev_top->vdev_guid);
vdev_dbgmsg(vd, "LABEL: guid %llu, top_guid %llu, "
"aux_guid %llu", (u_longlong_t)guid,
(u_longlong_t)top_guid, (u_longlong_t)aux_guid);
return (0);
}
}
if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_STATE,
&state) != 0) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_CORRUPT_DATA);
nvlist_free(label);
vdev_dbgmsg(vd, "vdev_validate: '%s' missing from label",
ZPOOL_CONFIG_POOL_STATE);
return (0);
}
nvlist_free(label);
/*
* If this is a verbatim import, no need to check the
* state of the pool.
*/
if (!(spa->spa_import_flags & ZFS_IMPORT_VERBATIM) &&
spa_load_state(spa) == SPA_LOAD_OPEN &&
state != POOL_STATE_ACTIVE) {
vdev_dbgmsg(vd, "vdev_validate: invalid pool state (%llu) "
"for spa %s", (u_longlong_t)state, spa->spa_name);
return (SET_ERROR(EBADF));
}
/*
* If we were able to open and validate a vdev that was
* previously marked permanently unavailable, clear that state
* now.
*/
if (vd->vdev_not_present)
vd->vdev_not_present = 0;
return (0);
}
static void
vdev_copy_path_impl(vdev_t *svd, vdev_t *dvd)
{
if (svd->vdev_path != NULL && dvd->vdev_path != NULL) {
if (strcmp(svd->vdev_path, dvd->vdev_path) != 0) {
zfs_dbgmsg("vdev_copy_path: vdev %llu: path changed "
"from '%s' to '%s'", (u_longlong_t)dvd->vdev_guid,
dvd->vdev_path, svd->vdev_path);
spa_strfree(dvd->vdev_path);
dvd->vdev_path = spa_strdup(svd->vdev_path);
}
} else if (svd->vdev_path != NULL) {
dvd->vdev_path = spa_strdup(svd->vdev_path);
zfs_dbgmsg("vdev_copy_path: vdev %llu: path set to '%s'",
(u_longlong_t)dvd->vdev_guid, dvd->vdev_path);
}
}
/*
* Recursively copy vdev paths from one vdev to another. Source and destination
* vdev trees must have same geometry otherwise return error. Intended to copy
* paths from userland config into MOS config.
*/
int
vdev_copy_path_strict(vdev_t *svd, vdev_t *dvd)
{
if ((svd->vdev_ops == &vdev_missing_ops) ||
(svd->vdev_ishole && dvd->vdev_ishole) ||
(dvd->vdev_ops == &vdev_indirect_ops))
return (0);
if (svd->vdev_ops != dvd->vdev_ops) {
vdev_dbgmsg(svd, "vdev_copy_path: vdev type mismatch: %s != %s",
svd->vdev_ops->vdev_op_type, dvd->vdev_ops->vdev_op_type);
return (SET_ERROR(EINVAL));
}
if (svd->vdev_guid != dvd->vdev_guid) {
vdev_dbgmsg(svd, "vdev_copy_path: guids mismatch (%llu != "
"%llu)", (u_longlong_t)svd->vdev_guid,
(u_longlong_t)dvd->vdev_guid);
return (SET_ERROR(EINVAL));
}
if (svd->vdev_children != dvd->vdev_children) {
vdev_dbgmsg(svd, "vdev_copy_path: children count mismatch: "
"%llu != %llu", (u_longlong_t)svd->vdev_children,
(u_longlong_t)dvd->vdev_children);
return (SET_ERROR(EINVAL));
}
for (uint64_t i = 0; i < svd->vdev_children; i++) {
int error = vdev_copy_path_strict(svd->vdev_child[i],
dvd->vdev_child[i]);
if (error != 0)
return (error);
}
if (svd->vdev_ops->vdev_op_leaf)
vdev_copy_path_impl(svd, dvd);
return (0);
}
static void
vdev_copy_path_search(vdev_t *stvd, vdev_t *dvd)
{
ASSERT(stvd->vdev_top == stvd);
ASSERT3U(stvd->vdev_id, ==, dvd->vdev_top->vdev_id);
for (uint64_t i = 0; i < dvd->vdev_children; i++) {
vdev_copy_path_search(stvd, dvd->vdev_child[i]);
}
if (!dvd->vdev_ops->vdev_op_leaf || !vdev_is_concrete(dvd))
return;
/*
* The idea here is that while a vdev can shift positions within
* a top vdev (when replacing, attaching mirror, etc.) it cannot
* step outside of it.
*/
vdev_t *vd = vdev_lookup_by_guid(stvd, dvd->vdev_guid);
if (vd == NULL || vd->vdev_ops != dvd->vdev_ops)
return;
ASSERT(vd->vdev_ops->vdev_op_leaf);
vdev_copy_path_impl(vd, dvd);
}
/*
* Recursively copy vdev paths from one root vdev to another. Source and
* destination vdev trees may differ in geometry. For each destination leaf
* vdev, search a vdev with the same guid and top vdev id in the source.
* Intended to copy paths from userland config into MOS config.
*/
void
vdev_copy_path_relaxed(vdev_t *srvd, vdev_t *drvd)
{
uint64_t children = MIN(srvd->vdev_children, drvd->vdev_children);
ASSERT(srvd->vdev_ops == &vdev_root_ops);
ASSERT(drvd->vdev_ops == &vdev_root_ops);
for (uint64_t i = 0; i < children; i++) {
vdev_copy_path_search(srvd->vdev_child[i],
drvd->vdev_child[i]);
}
}
/*
* Close a virtual device.
*/
@ -1828,7 +2047,7 @@ vdev_reopen(vdev_t *vd)
!l2arc_vdev_present(vd))
l2arc_add_vdev(spa, vd);
} else {
(void) vdev_validate(vd, B_TRUE);
(void) vdev_validate(vd);
}
/*
@ -3873,6 +4092,19 @@ vdev_set_state(vdev_t *vd, boolean_t isopen, vdev_state_t state, vdev_aux_t aux)
vdev_propagate_state(vd->vdev_parent);
}
boolean_t
vdev_children_are_offline(vdev_t *vd)
{
ASSERT(!vd->vdev_ops->vdev_op_leaf);
for (uint64_t i = 0; i < vd->vdev_children; i++) {
if (vd->vdev_child[i]->vdev_state != VDEV_STATE_OFFLINE)
return (B_FALSE);
}
return (B_TRUE);
}
/*
* Check the vdev configuration to ensure that it's capable of supporting
* a root pool. We do not support partial configuration.
@ -3908,34 +4140,6 @@ vdev_is_concrete(vdev_t *vd)
}
}
/*
* Load the state from the original vdev tree (ovd) which
* we've retrieved from the MOS config object. If the original
* vdev was offline or faulted then we transfer that state to the
* device in the current vdev tree (nvd).
*/
void
vdev_load_log_state(vdev_t *nvd, vdev_t *ovd)
{
ASSERT(nvd->vdev_top->vdev_islog);
ASSERT(spa_config_held(nvd->vdev_spa,
SCL_STATE_ALL, RW_WRITER) == SCL_STATE_ALL);
ASSERT3U(nvd->vdev_guid, ==, ovd->vdev_guid);
for (int c = 0; c < nvd->vdev_children; c++)
vdev_load_log_state(nvd->vdev_child[c], ovd->vdev_child[c]);
if (nvd->vdev_ops->vdev_op_leaf) {
/*
* Restore the persistent vdev state
*/
nvd->vdev_offline = ovd->vdev_offline;
nvd->vdev_faulted = ovd->vdev_faulted;
nvd->vdev_degraded = ovd->vdev_degraded;
nvd->vdev_removed = ovd->vdev_removed;
}
}
/*
* Determine if a log device has valid content. If the vdev was
* removed or faulted in the MOS config then we know that
@ -4051,5 +4255,9 @@ module_param(zfs_checksums_per_second, uint, 0644);
module_param(zfs_scan_ignore_errors, int, 0644);
MODULE_PARM_DESC(zfs_scan_ignore_errors,
"Ignore errors during resilver/scrub");
module_param(vdev_validate_skip, int, 0644);
MODULE_PARM_DESC(vdev_validate_skip,
"Bypass vdev_validate()");
/* END CSTYLED */
#endif


@ -412,7 +412,7 @@ vdev_config_generate(spa_t *spa, vdev_t *vd, boolean_t getstats,
fnvlist_add_uint64(nv, ZPOOL_CONFIG_WHOLE_DISK,
vd->vdev_wholedisk);
if (vd->vdev_not_present)
if (vd->vdev_not_present && !(flags & VDEV_CONFIG_MISSING))
fnvlist_add_uint64(nv, ZPOOL_CONFIG_NOT_PRESENT, 1);
if (vd->vdev_isspare)
@ -1209,6 +1209,11 @@ vdev_uberblock_load(vdev_t *rvd, uberblock_t *ub, nvlist_t **config)
"txg %llu", spa->spa_name, (u_longlong_t)ub->ub_txg);
*config = vdev_label_read_config(cb.ubl_vd, ub->ub_txg);
if (*config == NULL && spa->spa_extreme_rewind) {
vdev_dbgmsg(cb.ubl_vd, "failed to read label config. "
"Trying again without txg restrictions.");
*config = vdev_label_read_config(cb.ubl_vd, UINT64_MAX);
}
if (*config == NULL) {
vdev_dbgmsg(cb.ubl_vd, "failed to read label config");
}


@ -251,9 +251,33 @@ vdev_mirror_map_init(zio_t *zio)
if (vd == NULL) {
dva_t *dva = zio->io_bp->blk_dva;
spa_t *spa = zio->io_spa;
dva_t dva_copy[SPA_DVAS_PER_BP];
mm = vdev_mirror_map_alloc(BP_GET_NDVAS(zio->io_bp), B_FALSE,
B_TRUE);
c = BP_GET_NDVAS(zio->io_bp);
/*
* If we do not trust the pool config, some DVAs might be
* invalid or point to vdevs that do not exist. We skip them.
*/
if (!spa_trust_config(spa)) {
ASSERT3U(zio->io_type, ==, ZIO_TYPE_READ);
int j = 0;
for (int i = 0; i < c; i++) {
if (zfs_dva_valid(spa, &dva[i], zio->io_bp))
dva_copy[j++] = dva[i];
}
if (j == 0) {
zio->io_vsd = NULL;
zio->io_error = ENXIO;
return (NULL);
}
if (j < c) {
dva = dva_copy;
c = j;
}
}
mm = vdev_mirror_map_alloc(c, B_FALSE, B_TRUE);
for (c = 0; c < mm->mm_children; c++) {
mc = &mm->mm_child[c];
@ -305,7 +329,10 @@ vdev_mirror_open(vdev_t *vd, uint64_t *asize, uint64_t *max_asize,
}
if (numerrors == vd->vdev_children) {
vd->vdev_stat.vs_aux = VDEV_AUX_NO_REPLICAS;
if (vdev_children_are_offline(vd))
vd->vdev_stat.vs_aux = VDEV_AUX_CHILDREN_OFFLINE;
else
vd->vdev_stat.vs_aux = VDEV_AUX_NO_REPLICAS;
return (lasterror);
}
@ -485,6 +512,13 @@ vdev_mirror_io_start(zio_t *zio)
mm = vdev_mirror_map_init(zio);
if (mm == NULL) {
ASSERT(!spa_trust_config(zio->io_spa));
ASSERT(zio->io_type == ZIO_TYPE_READ);
zio_execute(zio);
return;
}
if (zio->io_type == ZIO_TYPE_READ) {
if (zio->io_bp != NULL &&
(zio->io_flags & ZIO_FLAG_SCRUB) && !mm->mm_replacing) {
@ -558,6 +592,9 @@ vdev_mirror_io_done(zio_t *zio)
int good_copies = 0;
int unexpected_errors = 0;
if (mm == NULL)
return;
for (c = 0; c < mm->mm_children; c++) {
mc = &mm->mm_child[c];
@ -677,13 +714,19 @@ vdev_mirror_io_done(zio_t *zio)
static void
vdev_mirror_state_change(vdev_t *vd, int faulted, int degraded)
{
if (faulted == vd->vdev_children)
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_NO_REPLICAS);
else if (degraded + faulted != 0)
if (faulted == vd->vdev_children) {
if (vdev_children_are_offline(vd)) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_OFFLINE,
VDEV_AUX_CHILDREN_OFFLINE);
} else {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_NO_REPLICAS);
}
} else if (degraded + faulted != 0) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_DEGRADED, VDEV_AUX_NONE);
else
} else {
vdev_set_state(vd, B_FALSE, VDEV_STATE_HEALTHY, VDEV_AUX_NONE);
}
}
vdev_ops_t vdev_mirror_ops = {


@ -37,6 +37,23 @@
* Virtual device vector for the pool's root vdev.
*/
static uint64_t
vdev_root_core_tvds(vdev_t *vd)
{
uint64_t tvds = 0;
for (uint64_t c = 0; c < vd->vdev_children; c++) {
vdev_t *cvd = vd->vdev_child[c];
if (!cvd->vdev_ishole && !cvd->vdev_islog &&
cvd->vdev_ops != &vdev_indirect_ops) {
tvds++;
}
}
return (tvds);
}
/*
* We should be able to tolerate one failure with absolutely no damage
* to our metadata. Two failures will take out space maps, a bunch of
@ -46,17 +63,28 @@
* probably fine. Adding bean counters during alloc/free can make this
* future guesswork more accurate.
*/
static int
too_many_errors(vdev_t *vd, int numerrors)
static boolean_t
too_many_errors(vdev_t *vd, uint64_t numerrors)
{
ASSERT3U(numerrors, <=, vd->vdev_children);
return (numerrors > 0);
uint64_t tvds;
if (numerrors == 0)
return (B_FALSE);
tvds = vdev_root_core_tvds(vd);
ASSERT3U(numerrors, <=, tvds);
if (numerrors == tvds)
return (B_TRUE);
return (numerrors > spa_missing_tvds_allowed(vd->vdev_spa));
}
static int
vdev_root_open(vdev_t *vd, uint64_t *asize, uint64_t *max_asize,
uint64_t *ashift)
{
spa_t *spa = vd->vdev_spa;
int lasterror = 0;
int numerrors = 0;
@ -76,6 +104,9 @@ vdev_root_open(vdev_t *vd, uint64_t *asize, uint64_t *max_asize,
}
}
if (spa_load_state(spa) != SPA_LOAD_NONE)
spa_set_missing_tvds(spa, numerrors);
if (too_many_errors(vd, numerrors)) {
vd->vdev_stat.vs_aux = VDEV_AUX_NO_REPLICAS;
return (lasterror);
@ -101,7 +132,7 @@ vdev_root_state_change(vdev_t *vd, int faulted, int degraded)
if (too_many_errors(vd, faulted)) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_CANT_OPEN,
VDEV_AUX_NO_REPLICAS);
} else if (degraded) {
} else if (degraded || faulted) {
vdev_set_state(vd, B_FALSE, VDEV_STATE_DEGRADED, VDEV_AUX_NONE);
} else {
vdev_set_state(vd, B_FALSE, VDEV_STATE_HEALTHY, VDEV_AUX_NONE);


@ -878,6 +878,13 @@ zfs_blkptr_verify(spa_t *spa, const blkptr_t *bp)
}
}
/*
* Do not verify individual DVAs if the config is not trusted. This
* will be done once the zio is executed in vdev_mirror_map_alloc.
*/
if (!spa->spa_trust_config)
return;
/*
* Pool-specific checks.
*
@ -928,6 +935,36 @@ zfs_blkptr_verify(spa_t *spa, const blkptr_t *bp)
}
}
boolean_t
zfs_dva_valid(spa_t *spa, const dva_t *dva, const blkptr_t *bp)
{
uint64_t vdevid = DVA_GET_VDEV(dva);
if (vdevid >= spa->spa_root_vdev->vdev_children)
return (B_FALSE);
vdev_t *vd = spa->spa_root_vdev->vdev_child[vdevid];
if (vd == NULL)
return (B_FALSE);
if (vd->vdev_ops == &vdev_hole_ops)
return (B_FALSE);
if (vd->vdev_ops == &vdev_missing_ops) {
return (B_FALSE);
}
uint64_t offset = DVA_GET_OFFSET(dva);
uint64_t asize = DVA_GET_ASIZE(dva);
if (BP_IS_GANG(bp))
asize = vdev_psize_to_asize(vd, SPA_GANGBLOCKSIZE);
if (offset + asize > vd->vdev_asize)
return (B_FALSE);
return (B_TRUE);
}
zio_t *
zio_read(zio_t *pio, spa_t *spa, const blkptr_t *bp,
abd_t *data, uint64_t size, zio_done_func_t *done, void *private,
@ -3473,14 +3510,18 @@ zio_vdev_io_start(zio_t *zio)
}
ASSERT3P(zio->io_logical, !=, zio);
if (zio->io_type == ZIO_TYPE_WRITE && zio->io_vd->vdev_removing) {
if (zio->io_type == ZIO_TYPE_WRITE) {
ASSERT(spa->spa_trust_config);
/*
* Note: the code can handle other kinds of writes,
* but we don't expect them.
*/
ASSERT(zio->io_flags &
(ZIO_FLAG_PHYSICAL | ZIO_FLAG_SELF_HEAL |
ZIO_FLAG_RESILVER | ZIO_FLAG_INDUCE_DAMAGE));
if (zio->io_vd->vdev_removing) {
ASSERT(zio->io_flags &
(ZIO_FLAG_PHYSICAL | ZIO_FLAG_SELF_HEAL |
ZIO_FLAG_RESILVER | ZIO_FLAG_INDUCE_DAMAGE));
}
}
align = 1ULL << vd->vdev_top->vdev_ashift;


@ -364,11 +364,22 @@ tests = ['zpool_import_001_pos', 'zpool_import_002_pos',
'zpool_import_012_pos', 'zpool_import_013_neg', 'zpool_import_014_pos',
'zpool_import_015_pos',
'zpool_import_features_001_pos', 'zpool_import_features_002_neg',
'zpool_import_features_003_pos','zpool_import_missing_001_pos',
'zpool_import_features_003_pos', 'zpool_import_missing_001_pos',
'zpool_import_missing_002_pos',
'zpool_import_rename_001_pos', 'zpool_import_all_001_pos',
'zpool_import_encrypted', 'zpool_import_encrypted_load',
'zpool_import_errata3']
'zpool_import_errata3',
'import_cache_device_added',
'import_cache_device_removed',
'import_cache_device_replaced',
'import_cache_mirror_attached',
'import_cache_mirror_detached',
'import_cache_shared_device',
'import_devices_missing',
'import_paths_changed',
'import_rewind_config_changed',
'import_rewind_device_replaced']
tags = ['functional', 'cli_root', 'zpool_import']
[tests/functional/cli_root/zpool_labelclear]


@ -2,6 +2,17 @@ pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_impo
dist_pkgdata_SCRIPTS = \
setup.ksh \
cleanup.ksh \
zpool_import.kshlib \
import_cache_device_added.ksh \
import_cache_device_removed.ksh \
import_cache_device_replaced.ksh \
import_cache_mirror_attached.ksh \
import_cache_mirror_detached.ksh \
import_cache_shared_device.ksh \
import_devices_missing.ksh \
import_paths_changed.ksh \
import_rewind_config_changed.ksh \
import_rewind_device_replaced.ksh \
zpool_import_001_pos.ksh \
zpool_import_002_pos.ksh \
zpool_import_003_pos.ksh \


@ -0,0 +1,76 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable using an outdated cachefile that is unaware
# that one or two top-level vdevs were added.
#
# STRATEGY:
# 1. Create a pool with some devices and an alternate cachefile.
# 2. Backup the cachefile.
# 3. Add a device/mirror/raid to the pool.
# 4. Export the pool.
# 5. Verify that we can import the pool using the backed-up cachefile.
#
verify_runnable "global"
log_onexit cleanup
function test_add_vdevs
{
typeset poolcreate="$1"
typeset addvdevs="$2"
typeset poolcheck="$3"
log_note "$0: pool '$poolcreate', add $addvdevs."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 $poolcreate
log_must cp $CPATH $CPATHBKP
log_must zpool add -f $TESTPOOL1 $addvdevs
log_must zpool export $TESTPOOL1
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcheck"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP
log_note ""
}
test_add_vdevs "$VDEV0" "$VDEV1" "$VDEV0 $VDEV1"
test_add_vdevs "$VDEV0 $VDEV1" "$VDEV2" "$VDEV0 $VDEV1 $VDEV2"
test_add_vdevs "$VDEV0" "$VDEV1 $VDEV2" "$VDEV0 $VDEV1 $VDEV2"
test_add_vdevs "$VDEV0" "mirror $VDEV1 $VDEV2" \
"$VDEV0 mirror $VDEV1 $VDEV2"
test_add_vdevs "mirror $VDEV0 $VDEV1" "mirror $VDEV2 $VDEV3" \
"mirror $VDEV0 $VDEV1 mirror $VDEV2 $VDEV3"
test_add_vdevs "$VDEV0" "raidz $VDEV1 $VDEV2 $VDEV3" \
"$VDEV0 raidz $VDEV1 $VDEV2 $VDEV3"
test_add_vdevs "$VDEV0" "log $VDEV1" "$VDEV0 log $VDEV1"
test_add_vdevs "$VDEV0 log $VDEV1" "$VDEV2" "$VDEV0 $VDEV2 log $VDEV1"
test_add_vdevs "$VDEV0" "$VDEV1 log $VDEV2" "$VDEV0 $VDEV1 log $VDEV2"
log_pass "zpool import -c cachefile_unaware_of_add passed."


@ -0,0 +1,145 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable using an outdated cachefile that is unaware
# that one or more vdevs were removed.
#
# STRATEGY:
# 1. Create a pool with some devices and an alternate cachefile.
# 2. Backup the cachefile.
# 3. Remove device(s) from the pool and delete their backing files.
# 4. (Optionally) Add device(s) to pool.
# 5. Export the pool.
# 6. Verify that we can import the pool using the backed-up cachefile.
#
verify_runnable "global"
function custom_cleanup
{
cleanup
}
log_onexit custom_cleanup
function test_remove_vdev
{
typeset poolcreate="$1"
typeset removevdev="$2"
typeset poolcheck="$3"
log_note "$0: pool '$poolcreate', remove $2."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 $poolcreate
log_must cp $CPATH $CPATHBKP
log_must zpool remove $TESTPOOL1 $removevdev
log_must wait_for_pool_config $TESTPOOL1 "$poolcheck"
log_must rm $removevdev
log_must zpool export $TESTPOOL1
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcheck"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP
log_must mkfile $FILE_SIZE $removevdev
log_note ""
}
#
# We have to remove top-level non-log vdevs one by one, else there is a high
# chance pool will report busy and command will fail for the second vdev.
#
function test_remove_two_vdevs
{
log_note "$0."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 \
$VDEV0 $VDEV1 $VDEV2 $VDEV3 $VDEV4
log_must cp $CPATH $CPATHBKP
log_must zpool remove $TESTPOOL1 $VDEV4
log_must wait_for_pool_config $TESTPOOL1 \
"$VDEV0 $VDEV1 $VDEV2 $VDEV3"
log_must zpool remove $TESTPOOL1 $VDEV3
log_must wait_for_pool_config $TESTPOOL1 "$VDEV0 $VDEV1 $VDEV2"
log_must rm $VDEV3 $VDEV4
log_must zpool export $TESTPOOL1
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$VDEV0 $VDEV1 $VDEV2"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP
log_must mkfile $FILE_SIZE $VDEV3 $VDEV4
log_note ""
}
#
# We want to test the case where a hole created by a log device is filled
# by a regular device
#
function test_remove_log_then_add_vdev
{
log_note "$0."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 \
$VDEV0 $VDEV1 $VDEV2 log $VDEV3
log_must cp $CPATH $CPATHBKP
log_must zpool remove $TESTPOOL1 $VDEV1
log_must wait_for_pool_config $TESTPOOL1 "$VDEV0 $VDEV2 log $VDEV3"
log_must zpool remove $TESTPOOL1 $VDEV3
log_must check_pool_config $TESTPOOL1 "$VDEV0 $VDEV2"
log_must rm $VDEV1 $VDEV3
log_must zpool add $TESTPOOL1 $VDEV4
log_must zpool export $TESTPOOL1
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$VDEV0 $VDEV2 $VDEV4"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP
log_must mkfile $FILE_SIZE $VDEV1 $VDEV3
log_note ""
}
test_remove_vdev "$VDEV0 $VDEV1 $VDEV2" "$VDEV2" "$VDEV0 $VDEV1"
test_remove_vdev "$VDEV0 $VDEV1 $VDEV2" "$VDEV1" "$VDEV0 $VDEV2"
test_remove_vdev "$VDEV0 log $VDEV1" "$VDEV1" "$VDEV0"
test_remove_vdev "$VDEV0 log $VDEV1 $VDEV2" "$VDEV1 $VDEV2" "$VDEV0"
test_remove_vdev "$VDEV0 $VDEV1 $VDEV2 log $VDEV3" "$VDEV2" \
"$VDEV0 $VDEV1 log $VDEV3"
test_remove_two_vdevs
test_remove_log_then_add_vdev
log_pass "zpool import -c cachefile_unaware_of_remove passed."


@ -0,0 +1,166 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable using an outdated cachefile that is unaware
# of a zpool replace operation at different stages in time.
#
# STRATEGY:
# 1. Create a pool with some devices and an alternate cachefile.
# 2. Backup the cachefile.
# 3. Initiate device replacement, backup cachefile again and export pool.
# Special care must be taken so that resilvering doesn't complete
# before we exported the pool.
# 4. Verify that we can import the pool using the first cachefile backup.
# (Test 1. cachefile: pre-replace, pool: resilvering)
# 5. Wait for the resilvering to finish and export the pool.
# 6. Verify that we can import the pool using the first cachefile backup.
# (Test 2. cachefile: pre-replace, pool: post-replace)
# 7. Export the pool.
# 8. Verify that we can import the pool using the second cachefile backup.
# (Test 3. cachefile: resilvering, pool: post-replace)
#
# STRATEGY TO SLOW DOWN RESILVERING:
# 1. Reduce zfs_txg_timeout, which controls how long can we resilver for
# each sync.
# 2. Add data to pool
# 3. Re-import the pool so that data isn't cached
# 4. Use zinject to slow down device I/O
# 5. Trigger the resilvering
# 6. Use spa freeze to stop writing to the pool.
# 7. Clear zinject events (needed to export the pool)
# 8. Export the pool
#
verify_runnable "global"
ZFS_TXG_TIMEOUT=""
function custom_cleanup
{
# Revert zfs_txg_timeout to defaults
[[ -n $ZFS_TXG_TIMEOUT ]] &&
log_must set_zfs_txg_timeout $ZFS_TXG_TIMEOUT
zinject -c all
cleanup
}
log_onexit custom_cleanup
function test_replacing_vdevs
{
typeset poolcreate="$1"
typeset replacevdev="$2"
typeset replaceby="$3"
typeset poolfinalstate="$4"
typeset zinjectdevices="$5"
typeset earlyremove="$6"
typeset writedata="$7"
log_note "$0: pool '$poolcreate', replace $replacevdev by $replaceby."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 $poolcreate
# Cachefile: pool in pre-replace state
log_must cp $CPATH $CPATHBKP
# Steps to ensure resilvering happens very slowly.
log_must write_some_data $TESTPOOL1 $writedata
log_must zpool export $TESTPOOL1
log_must cp $CPATHBKP $CPATH
log_must zpool import -c $CPATH -o cachefile=$CPATH $TESTPOOL1
typeset device
for device in $zinjectdevices ; do
log_must zinject -d $device -D 200:1 $TESTPOOL1 > /dev/null
done
log_must zpool replace $TESTPOOL1 $replacevdev $replaceby
# Cachefile: pool in resilvering state
log_must cp $CPATH $CPATHBKP2
# We must disable zinject in order to export the pool, so we freeze
# it first to prevent writing out subsequent resilvering progress.
log_must zpool freeze $TESTPOOL1
# Confirm pool is still replacing
log_must pool_is_replacing $TESTPOOL1
log_must zinject -c all > /dev/null
log_must zpool export $TESTPOOL1
( $earlyremove ) && log_must rm $replacevdev
############################################################
# Test 1. Cachefile: pre-replace, pool: resilvering
############################################################
log_must cp $CPATHBKP $CPATH
log_must zpool import -c $CPATH $TESTPOOL1
# Wait for resilvering to finish
log_must wait_for_pool_config $TESTPOOL1 "$poolfinalstate"
log_must zpool export $TESTPOOL1
( ! $earlyremove ) && log_must rm $replacevdev
############################################################
# Test 2. Cachefile: pre-replace, pool: post-replace
############################################################
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolfinalstate"
log_must zpool export $TESTPOOL1
############################################################
# Test 3. Cachefile: resilvering, pool: post-replace
############################################################
log_must zpool import -c $CPATHBKP2 $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolfinalstate"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP $CPATHBKP2
log_must mkfile $FILE_SIZE $replacevdev
log_note ""
}
# We set zfs_txg_timeout to 1 to reduce resilvering time at each sync.
ZFS_TXG_TIMEOUT=$(get_zfs_txg_timeout)
set_zfs_txg_timeout 1
test_replacing_vdevs "$VDEV0 $VDEV1" \
"$VDEV1" "$VDEV2" \
"$VDEV0 $VDEV2" \
"$VDEV0 $VDEV1" \
false 20
test_replacing_vdevs "mirror $VDEV0 $VDEV1" \
"$VDEV1" "$VDEV2" \
"mirror $VDEV0 $VDEV2" \
"$VDEV0 $VDEV1" \
true 10
test_replacing_vdevs "raidz $VDEV0 $VDEV1 $VDEV2" \
"$VDEV1" "$VDEV3" \
"raidz $VDEV0 $VDEV3 $VDEV2" \
"$VDEV0 $VDEV1 $VDEV2" \
true 20
set_zfs_txg_timeout $ZFS_TXG_TIMEOUT
log_pass "zpool import -c cachefile_unaware_of_replace passed."


@ -0,0 +1,72 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable using an outdated cachefile that misses a
# mirror that was attached.
#
# STRATEGY:
# 1. Create a pool with some devices and an alternate cachefile.
# 2. Backup the cachefile.
# 3. Attach a mirror to one of the devices in the pool.
# 4. Export the pool.
# 5. Verify that we can import the pool using the backed-up cachefile.
#
verify_runnable "global"
log_onexit cleanup
function test_attach_vdev
{
typeset poolcreate="$1"
typeset attachto="$2"
typeset attachvdev="$3"
typeset poolcheck="$4"
log_note "$0: pool '$poolcreate', attach $attachvdev to $attachto."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 $poolcreate
log_must cp $CPATH $CPATHBKP
log_must zpool attach $TESTPOOL1 $attachto $attachvdev
log_must zpool export $TESTPOOL1
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcheck"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP
log_note ""
}
test_attach_vdev "$VDEV0" "$VDEV0" "$VDEV4" "mirror $VDEV0 $VDEV4"
test_attach_vdev "$VDEV0 $VDEV1" "$VDEV1" "$VDEV4" \
"$VDEV0 mirror $VDEV1 $VDEV4"
test_attach_vdev "mirror $VDEV0 $VDEV1" "$VDEV0" "$VDEV4" \
"mirror $VDEV0 $VDEV1 $VDEV4"
test_attach_vdev "$VDEV0 log $VDEV1" "$VDEV1" "$VDEV4" \
"$VDEV0 log mirror $VDEV1 $VDEV4"
log_pass "zpool import -c cachefile_unaware_of_attach passed."


@ -0,0 +1,70 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable using an outdated cachefile that is unaware
# that a mirror was detached.
#
# STRATEGY:
# 1. Create a pool with some devices mirrored and an alternate cachefile.
# 2. Backup the cachefile.
# 3. Detach a mirror from the pool.
# 4. Export the pool.
# 5. Verify that we can import the pool using the backed-up cachefile.
#
verify_runnable "global"
log_onexit cleanup
function test_detach_vdev
{
typeset poolcreate="$1"
typeset poolcheck="$2"
log_note "$0: pool '$poolcreate', detach $VDEV4."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 $poolcreate
log_must cp $CPATH $CPATHBKP
log_must zpool detach $TESTPOOL1 $VDEV4
log_must rm -f $VDEV4
log_must zpool export $TESTPOOL1
log_must zpool import -c $CPATHBKP $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcheck"
# Cleanup
log_must zpool destroy $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP
log_must mkfile $FILE_SIZE $VDEV4
log_note ""
}
test_detach_vdev "mirror $VDEV0 $VDEV4" "$VDEV0"
test_detach_vdev "mirror $VDEV0 $VDEV4 mirror $VDEV1 $VDEV2" \
"$VDEV0 mirror $VDEV1 $VDEV2"
test_detach_vdev "mirror $VDEV0 $VDEV1 $VDEV4" "mirror $VDEV0 $VDEV1"
test_detach_vdev "$VDEV0 log mirror $VDEV1 $VDEV4" "$VDEV0 log $VDEV1"
log_pass "zpool import -c cachefile_unaware_of_detach passed."


@ -0,0 +1,113 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should not try to write to a device that doesn't belong to it
# anymore, even if the device is in its cachefile.
#
# STRATEGY:
# 1. Create pool1 with some devices and an alternate cachefile.
# 2. Backup the cachefile.
# 3. Export pool1.
# 4. Create pool2 using a device that belongs to pool1.
# 5. Export pool2.
# 6. Compute checksum of the shared device.
# 7. Import pool1 and write some data to it.
# 8. Verify that the checksum of the shared device hasn't changed.
#
verify_runnable "global"
function custom_cleanup
{
destroy_pool $TESTPOOL2
cleanup
}
log_onexit custom_cleanup
function dev_checksum
{
typeset dev="$1"
typeset checksum
log_note "Compute checksum of '$dev'"
checksum=$(md5sum $dev)
if [[ $? -ne 0 ]]; then
log_fail "Failed to compute checksum of '$dev'"
return 1
fi
echo "$checksum"
return 0
}
function test_shared_device
{
typeset pool1="$1"
typeset pool2="$2"
typeset sharedvdev="$3"
typeset importflags="${4:-}"
log_note "$0: pool1 '$pool1', pool2 '$pool2' takes $sharedvdev."
log_must zpool create -o cachefile=$CPATH $TESTPOOL1 $pool1
log_must cp $CPATH $CPATHBKP
log_must zpool export $TESTPOOL1
log_must zpool create -f $TESTPOOL2 $pool2
log_must zpool export $TESTPOOL2
typeset checksum1=$(dev_checksum $sharedvdev)
log_must zpool import -c $CPATHBKP $importflags $TESTPOOL1
log_must write_some_data $TESTPOOL1 2
log_must zpool destroy $TESTPOOL1
typeset checksum2=$(dev_checksum $sharedvdev)
if [[ $checksum1 == $checksum2 ]]; then
log_pos "Device hasn't been modified by original pool"
else
log_fail "Device has been modified by original pool." \
"Checksum mismatch: $checksum1 != $checksum2."
fi
# Cleanup
log_must zpool import -d $DEVICE_DIR $TESTPOOL2
log_must zpool destroy $TESTPOOL2
log_must rm -f $CPATH $CPATHBKP
log_note ""
}
test_shared_device "mirror $VDEV0 $VDEV1" "mirror $VDEV1 $VDEV2" "$VDEV1"
test_shared_device "mirror $VDEV0 $VDEV1 $VDEV2" "mirror $VDEV2 $VDEV3" \
"$VDEV2"
test_shared_device "raidz $VDEV0 $VDEV1 $VDEV2" "$VDEV2" "$VDEV2"
test_shared_device "$VDEV0 log $VDEV1" "$VDEV2 log $VDEV1" "$VDEV1" "-m"
log_pass "Pool doesn't write to a device it doesn't own anymore."

View File

@ -0,0 +1,122 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable when up to 2 top-level devices are missing.
#
# STRATEGY:
# 1. Create a pool.
# 2. Write some data to the pool and checksum it.
# 3. Add one or more devices.
# 4. Write more data to the pool and checksum it.
# 5. Export the pool.
# 6. Move added devices out of the devices directory.
# 7. Import the pool with missing devices.
# 8. Verify that the first batch of data is intact.
# 9. Verify that accessing the second batch of data doesn't suspend pool.
# 10. Export the pool, move the missing devices back, and re-import the pool.
# 11. Verify that all the data is intact.
#
verify_runnable "global"
function custom_cleanup
{
log_must set_spa_load_verify_metadata 1
log_must set_spa_load_verify_data 1
log_must set_zfs_max_missing_tvds 0
log_must rm -rf $BACKUP_DEVICE_DIR
# Highly damaged pools may fail to be destroyed, so we export them.
poolexists $TESTPOOL1 && log_must zpool export $TESTPOOL1
cleanup
}
log_onexit custom_cleanup
function test_devices_missing
{
typeset poolcreate="$1"
typeset addvdevs="$2"
typeset missingvdevs="$3"
typeset -i missingtvds="$4"
log_note "$0: pool '$poolcreate', adding $addvdevs, then" \
"moving away $missingvdevs."
log_must zpool create $TESTPOOL1 $poolcreate
log_must generate_data $TESTPOOL1 $MD5FILE "first"
log_must zpool add $TESTPOOL1 $addvdevs
log_must generate_data $TESTPOOL1 $MD5FILE2 "second"
log_must zpool export $TESTPOOL1
log_must mv $missingvdevs $BACKUP_DEVICE_DIR
# Tell zfs that it is ok to import a pool with missing top-level vdevs
log_must set_zfs_max_missing_tvds $missingtvds
# Missing devices means that data or metadata may be corrupted.
(( missingtvds > 1 )) && log_must set_spa_load_verify_metadata 0
log_must set_spa_load_verify_data 0
log_must zpool import -o readonly=on -d $DEVICE_DIR $TESTPOOL1
log_must verify_data_md5sums $MD5FILE
log_note "Try reading second batch of data, make sure pool doesn't" \
"get suspended."
verify_data_md5sums $MD5FILE2 >/dev/null 2>&1
log_must zpool export $TESTPOOL1
typeset newpaths=$(echo "$missingvdevs" | \
sed "s:$DEVICE_DIR:$BACKUP_DEVICE_DIR:g")
log_must mv $newpaths $DEVICE_DIR
log_must set_spa_load_verify_metadata 1
log_must set_spa_load_verify_data 1
log_must set_zfs_max_missing_tvds 0
log_must zpool import -d $DEVICE_DIR $TESTPOOL1
log_must verify_data_md5sums $MD5FILE
log_must verify_data_md5sums $MD5FILE2
# Cleanup
log_must zpool destroy $TESTPOOL1
log_note ""
}
log_must mkdir -p $BACKUP_DEVICE_DIR
test_devices_missing "$VDEV0" "$VDEV1" "$VDEV1" 1
test_devices_missing "$VDEV0" "$VDEV1 $VDEV2" "$VDEV1" 1
test_devices_missing "mirror $VDEV0 $VDEV1" "mirror $VDEV2 $VDEV3" \
"$VDEV2 $VDEV3" 1
test_devices_missing "$VDEV0 log $VDEV1" "$VDEV2" "$VDEV2" 1
#
# Note that we are testing for 2 non-consecutive missing devices.
# Missing consecutive devices results in missing metadata, and missing
# metadata can cause the root dataset to fail to mount.
#
test_devices_missing "$VDEV0" "$VDEV1 $VDEV2 $VDEV3" "$VDEV1 $VDEV3" 2
log_pass "zpool import succeeded with missing devices."

View File

@ -0,0 +1,98 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# A pool should be importable even if device paths have changed.
#
# STRATEGY:
# 1. Create a pool.
# 2. Export the pool.
# 3. Change the paths of some of the devices.
# 4. Verify that we can import the pool in a healthy state.
#
verify_runnable "global"
log_onexit cleanup
function test_new_paths
{
typeset poolcreate="$1"
typeset pathstochange="$2"
log_note "$0: pool '$poolcreate', changing paths of $pathstochange."
log_must zpool create $TESTPOOL1 $poolcreate
log_must zpool export $TESTPOOL1
for dev in $pathstochange; do
log_must mv $dev "${dev}_new"
done
log_must zpool import -d $DEVICE_DIR $TESTPOOL1
log_must check_pool_healthy $TESTPOOL1
# Cleanup
log_must zpool destroy $TESTPOOL1
for dev in $pathstochange; do
log_must mv "${dev}_new" $dev
done
log_note ""
}
function test_swap_paths
{
typeset poolcreate="$1"
typeset pathtoswap1="$2"
typeset pathtoswap2="$3"
log_note "$0: pool '$poolcreate', swapping paths of $pathtoswap1" \
"and $pathtoswap2."
log_must zpool create $TESTPOOL1 $poolcreate
log_must zpool export $TESTPOOL1
log_must mv $pathtoswap2 "$pathtoswap2.tmp"
log_must mv $pathtoswap1 "$pathtoswap2"
log_must mv "$pathtoswap2.tmp" $pathtoswap1
log_must zpool import -d $DEVICE_DIR $TESTPOOL1
log_must check_pool_healthy $TESTPOOL1
# Cleanup
log_must zpool destroy $TESTPOOL1
log_note ""
}
test_new_paths "$VDEV0 $VDEV1" "$VDEV0 $VDEV1"
test_new_paths "mirror $VDEV0 $VDEV1" "$VDEV0 $VDEV1"
test_new_paths "$VDEV0 log $VDEV1" "$VDEV1"
test_new_paths "raidz $VDEV0 $VDEV1 $VDEV2" "$VDEV1"
test_swap_paths "$VDEV0 $VDEV1" "$VDEV0" "$VDEV1"
test_swap_paths "raidz $VDEV0 $VDEV1 $VDEV2" "$VDEV0" "$VDEV1"
test_swap_paths "mirror $VDEV0 $VDEV1 mirror $VDEV2 $VDEV3" \
"$VDEV0" "$VDEV2"
log_pass "zpool import succeeded after changing device paths."

View File

@ -0,0 +1,239 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# It should be possible to rewind a pool beyond a configuration change.
#
# STRATEGY:
# 1. Create a pool.
# 2. Generate files and remember their md5sum.
# 3. Note last synced txg.
# 4. Take a snapshot to make sure old blocks are not overwritten.
# 5. Perform zpool add/attach/detach/remove operation.
# 6. Change device paths if requested and re-import pool.
# 7. Overwrite the files.
# 8. Export the pool.
# 9. Verify that we can rewind the pool to the noted txg.
# 10. Verify that the files are readable and retain their old data.
#
# DISCLAIMER:
# This test can fail since nothing guarantees that old MOS blocks aren't
# overwritten. Snapshots protect datasets and data files but not the MOS.
# sync_some_data_a_few_times interleaves file data and MOS data for a few
# txgs, thus increasing the odds that some txgs will have their MOS data
# left untouched.
#
verify_runnable "global"
function custom_cleanup
{
set_vdev_validate_skip 0
cleanup
}
log_onexit custom_cleanup
function test_common
{
typeset poolcreate="$1"
typeset addvdevs="$2"
typeset attachargs="${3:-}"
typeset detachvdev="${4:-}"
typeset removevdev="${5:-}"
typeset finalpool="${6:-}"
typeset poolcheck="$poolcreate"
log_must zpool create $TESTPOOL1 $poolcreate
log_must generate_data $TESTPOOL1 $MD5FILE
# Syncing a few times while writing new data increases the odds that MOS
# metadata for some of the txgs will survive.
log_must sync_some_data_a_few_times $TESTPOOL1
typeset txg
txg=$(get_last_txg_synced $TESTPOOL1)
log_must zfs snapshot -r $TESTPOOL1@snap1
#
# Perform config change operations
#
if [[ -n $addvdevs ]]; then
log_must zpool add -f $TESTPOOL1 $addvdevs
fi
if [[ -n $attachargs ]]; then
log_must zpool attach $TESTPOOL1 $attachargs
fi
if [[ -n $detachvdev ]]; then
log_must zpool detach $TESTPOOL1 $detachvdev
fi
if [[ -n $removevdev ]]; then
[[ -z $finalpool ]] &&
log_fail "Must provide final pool status!"
log_must zpool remove $TESTPOOL1 $removevdev
log_must wait_for_pool_config $TESTPOOL1 "$finalpool"
fi
if [[ -n $pathstochange ]]; then
#
# Change device paths and re-import pool to update labels
#
zpool export $TESTPOOL1
for dev in $pathstochange; do
log_must mv $dev "${dev}_new"
poolcheck=$(echo "$poolcheck" | \
sed "s:$dev:${dev}_new:g")
done
zpool import -d $DEVICE_DIR $TESTPOOL1
fi
log_must overwrite_data $TESTPOOL1 ""
log_must zpool export $TESTPOOL1
log_must zpool import -d $DEVICE_DIR -T $txg $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcheck"
log_must verify_data_md5sums $MD5FILE
# Cleanup
log_must zpool destroy $TESTPOOL1
if [[ -n $pathstochange ]]; then
for dev in $pathstochange; do
log_must mv "${dev}_new" $dev
done
fi
# Fast way to clear vdev labels
log_must zpool create -f $TESTPOOL2 $VDEV0 $VDEV1 $VDEV2 $VDEV3 $VDEV4
log_must zpool destroy $TESTPOOL2
log_note ""
}
function test_add_vdevs
{
typeset poolcreate="$1"
typeset addvdevs="$2"
log_note "$0: pool '$poolcreate', add $addvdevs."
test_common "$poolcreate" "$addvdevs"
}
function test_attach_vdev
{
typeset poolcreate="$1"
typeset attachto="$2"
typeset attachvdev="$3"
log_note "$0: pool '$poolcreate', attach $attachvdev to $attachto."
test_common "$poolcreate" "" "$attachto $attachvdev"
}
function test_detach_vdev
{
typeset poolcreate="$1"
typeset detachvdev="$2"
log_note "$0: pool '$poolcreate', detach $detachvdev."
test_common "$poolcreate" "" "" "$detachvdev"
}
function test_attach_detach_vdev
{
typeset poolcreate="$1"
typeset attachto="$2"
typeset attachvdev="$3"
typeset detachvdev="$4"
log_note "$0: pool '$poolcreate', attach $attachvdev to $attachto," \
"then detach $detachvdev."
test_common "$poolcreate" "" "$attachto $attachvdev" "$detachvdev"
}
function test_remove_vdev
{
typeset poolcreate="$1"
typeset removevdev="$2"
typeset finalpool="$3"
log_note "$0: pool '$poolcreate', remove $removevdev."
test_common "$poolcreate" "" "" "" "$removevdev" "$finalpool"
}
# Record txg history
is_linux && log_must set_tunable32 zfs_txg_history 100
# Make the devices bigger to reduce chances of overwriting MOS metadata.
increase_device_sizes $(( FILE_SIZE * 4 ))
# Part of the rewind test is to see how it reacts to path changes
typeset pathstochange="$VDEV0 $VDEV1 $VDEV2 $VDEV3"
log_note " == test rewind after device addition == "
test_add_vdevs "$VDEV0" "$VDEV1"
test_add_vdevs "$VDEV0 $VDEV1" "$VDEV2"
test_add_vdevs "$VDEV0" "$VDEV1 $VDEV2"
test_add_vdevs "mirror $VDEV0 $VDEV1" "mirror $VDEV2 $VDEV3"
test_add_vdevs "$VDEV0" "raidz $VDEV1 $VDEV2 $VDEV3"
test_add_vdevs "$VDEV0" "log $VDEV1"
test_add_vdevs "$VDEV0 log $VDEV1" "$VDEV2"
log_note " == test rewind after device attach == "
test_attach_vdev "$VDEV0" "$VDEV0" "$VDEV1"
test_attach_vdev "mirror $VDEV0 $VDEV1" "$VDEV0" "$VDEV2"
test_attach_vdev "$VDEV0 $VDEV1" "$VDEV0" "$VDEV2"
log_note " == test rewind after device removal == "
# Once we remove a device it will be overlooked in the device scan, so we must
# preserve its original path
pathstochange="$VDEV0 $VDEV2"
test_remove_vdev "$VDEV0 $VDEV1 $VDEV2" "$VDEV1" "$VDEV0 $VDEV2"
#
# Path change and detach are incompatible. Detach changes the guid of the vdev
# so we have no direct way to link the new path to an existing vdev.
#
pathstochange=""
log_note " == test rewind after device detach == "
test_detach_vdev "mirror $VDEV0 $VDEV1" "$VDEV1"
test_detach_vdev "mirror $VDEV0 $VDEV1 mirror $VDEV2 $VDEV3" "$VDEV1"
test_detach_vdev "$VDEV0 log mirror $VDEV1 $VDEV2" "$VDEV2"
log_note " == test rewind after device attach followed by device detach == "
#
# We need to disable vdev validation since once we detach VDEV1, VDEV0 will
# inherit the mirror tvd's guid and lose its original guid.
#
set_vdev_validate_skip 1
test_attach_detach_vdev "$VDEV0" "$VDEV0" "$VDEV1" "$VDEV1"
set_vdev_validate_skip 0
log_pass "zpool import rewind after configuration change passed."

View File

@ -0,0 +1,186 @@
#!/usr/bin/ksh -p
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.kshlib
#
# DESCRIPTION:
# It should be possible to rewind a pool beyond a device replacement.
#
# STRATEGY:
# 1. Create a pool.
# 2. Generate files and remember their md5sum.
# 3. Sync a few times and note last synced txg.
# 4. Take a snapshot to make sure old blocks are not overwritten.
# 5. Initiate device replacement and export the pool. Special care must
# be taken so that resilvering doesn't complete before the export.
# 6. Test 1: Rewind pool to noted txg and then verify data checksums.
# Import it read-only so that we do not overwrite blocks in later txgs.
# 7. Re-import pool at latest txg and let the replacement finish.
# 8. Export the pool and remove the new device - we shouldn't need it.
# 9. Test 2: Rewind pool to noted txg and then verify data checksums.
#
# STRATEGY TO SLOW DOWN RESILVERING:
# 1. Reduce zfs_txg_timeout, which controls how long we can resilver for
# each sync.
# 2. Add data to pool
# 3. Re-import the pool so that data isn't cached
# 4. Use zinject to slow down device I/O
# 5. Trigger the resilvering
# 6. Use spa freeze to stop writing to the pool.
# 7. Clear zinject events (needed to export the pool)
# 8. Export the pool
#
# DISCLAIMER:
# This test can fail since nothing guarantees that old MOS blocks aren't
# overwritten. Snapshots protect datasets and data files but not the MOS.
# sync_some_data_a_few_times interleaves file data and MOS data for a few
# txgs, thus increasing the odds that some txgs will have their MOS data
# left untouched.
#
verify_runnable "global"
ZFS_TXG_TIMEOUT=""
function custom_cleanup
{
# Revert zfs_txg_timeout to defaults
[[ -n $ZFS_TXG_TIMEOUT ]] &&
log_must set_zfs_txg_timeout $ZFS_TXG_TIMEOUT
log_must rm -rf $BACKUP_DEVICE_DIR
zinject -c all
cleanup
}
log_onexit custom_cleanup
function test_replace_vdev
{
typeset poolcreate="$1"
typeset replacevdev="$2"
typeset replaceby="$3"
typeset poolfinalstate="$4"
typeset zinjectdevices="$5"
typeset writedata="$6"
log_note "$0: pool '$poolcreate', replace $replacevdev by $replaceby."
log_must zpool create $TESTPOOL1 $poolcreate
# generate data and checksum it
log_must generate_data $TESTPOOL1 $MD5FILE
# add more data so that resilver takes longer
log_must write_some_data $TESTPOOL1 $writedata
# Syncing a few times while writing new data increases the odds that
# MOS metadata for some of the txgs will survive.
log_must sync_some_data_a_few_times $TESTPOOL1
typeset txg
txg=$(get_last_txg_synced $TESTPOOL1)
log_must zfs snapshot -r $TESTPOOL1@snap1
# This should not free original data.
log_must overwrite_data $TESTPOOL1 ""
# Steps to ensure resilvering happens very slowly.
log_must zpool export $TESTPOOL1
log_must zpool import -d $DEVICE_DIR $TESTPOOL1
typeset device
for device in $zinjectdevices ; do
log_must zinject -d $device -D 200:1 $TESTPOOL1 > /dev/null
done
log_must zpool replace $TESTPOOL1 $replacevdev $replaceby
# We must disable zinject in order to export the pool, so we freeze
# it first to prevent writing out subsequent resilvering progress.
log_must zpool freeze $TESTPOOL1
# Confirm pool is still replacing
log_must pool_is_replacing $TESTPOOL1
log_must zinject -c all > /dev/null
log_must zpool export $TESTPOOL1
############################################################
# Test 1: rewind while device is resilvering.
# Import read only to avoid overwriting more recent blocks.
############################################################
log_must zpool import -d $DEVICE_DIR -o readonly=on -T $txg $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcreate"
log_must verify_data_md5sums $MD5FILE
log_must zpool export $TESTPOOL1
# Import pool at latest txg to finish the resilvering
log_must zpool import -d $DEVICE_DIR $TESTPOOL1
log_must overwrite_data $TESTPOOL1 ""
log_must wait_for_pool_config $TESTPOOL1 "$poolfinalstate"
log_must zpool export $TESTPOOL1
# Move out the new device
log_must mv $replaceby $BACKUP_DEVICE_DIR/
############################################################
# Test 2: rewind after device has been replaced.
# Import read-write since we won't need the pool anymore.
############################################################
log_must zpool import -d $DEVICE_DIR -T $txg $TESTPOOL1
log_must check_pool_config $TESTPOOL1 "$poolcreate"
log_must verify_data_md5sums $MD5FILE
# Cleanup
log_must zpool destroy $TESTPOOL1
# Restore the device we moved out
log_must mv "$BACKUP_DEVICE_DIR/$(basename $replaceby)" $DEVICE_DIR/
# Fast way to clear vdev labels
log_must zpool create -f $TESTPOOL2 $VDEV0 $VDEV1 $VDEV2 $VDEV3 $VDEV4
log_must zpool destroy $TESTPOOL2
log_note ""
}
# Record txg history
is_linux && log_must set_tunable32 zfs_txg_history 100
log_must mkdir -p $BACKUP_DEVICE_DIR
# Make the devices bigger to reduce chances of overwriting MOS metadata.
increase_device_sizes $(( FILE_SIZE * 4 ))
# We set zfs_txg_timeout to 1 to reduce resilvering time at each sync.
ZFS_TXG_TIMEOUT=$(get_zfs_txg_timeout)
set_zfs_txg_timeout 1
test_replace_vdev "$VDEV0 $VDEV1" \
"$VDEV1" "$VDEV2" \
"$VDEV0 $VDEV2" \
"$VDEV0 $VDEV1" 15
test_replace_vdev "mirror $VDEV0 $VDEV1" \
"$VDEV1" "$VDEV2" \
"mirror $VDEV0 $VDEV2" \
"$VDEV0 $VDEV1" 10
test_replace_vdev "raidz $VDEV0 $VDEV1 $VDEV2" \
"$VDEV1" "$VDEV3" \
"raidz $VDEV0 $VDEV3 $VDEV2" \
"$VDEV0 $VDEV1 $VDEV2" 10
set_zfs_txg_timeout $ZFS_TXG_TIMEOUT
log_pass "zpool import rewind after device replacement passed."

View File

@ -69,26 +69,14 @@ log_must zfs create $TESTPOOL/$TESTFS
log_must zfs set mountpoint=$TESTDIR $TESTPOOL/$TESTFS
DISK2="$(echo $DISKS | nawk '{print $2}')"
if is_mpath_device $DISK2; then
echo "y" | newfs -v $DEV_DSKDIR/$DISK2 >/dev/null 2>&1
(( $? != 0 )) &&
log_untested "Unable to setup a $NEWFS_DEFAULT_FS file system"
echo "y" | newfs -v $DEV_DSKDIR/$DISK2 >/dev/null 2>&1
(( $? != 0 )) &&
log_untested "Unable to setup a $NEWFS_DEFAULT_FS file system"
[[ ! -d $DEVICE_DIR ]] && \
log_must mkdir -p $DEVICE_DIR
log_must mount $DEV_DSKDIR/$DISK2 $DEVICE_DIR
else
log_must set_partition 0 "" $FS_SIZE $ZFS_DISK2
echo "y" | newfs -v $DEV_DSKDIR/$ZFSSIDE_DISK2 >/dev/null 2>&1
(( $? != 0 )) &&
log_untested "Unable to setup a $NEWFS_DEFAULT_FS file system"
[[ ! -d $DEVICE_DIR ]] && \
log_must mkdir -p $DEVICE_DIR
log_must mount $DEV_DSKDIR/$ZFSSIDE_DISK2 $DEVICE_DIR
fi
log_must mount $DEV_DSKDIR/$DISK2 $DEVICE_DIR
i=0
while (( i < $MAX_NUM )); do

View File

@ -25,7 +25,7 @@
#
#
# Copyright (c) 2012, 2015 by Delphix. All rights reserved.
# Copyright (c) 2012, 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/include/libtest.shlib
@ -57,10 +57,8 @@ case "${#disk_array[*]}" in
if ( is_mpath_device $ZFS_DISK1 ) && [[ -z $(echo $ZFS_DISK1 | awk 'substr($1,18,1)\
~ /^[[:digit:]]+$/') ]] || ( is_real_device $ZFS_DISK1 ); then
ZFSSIDE_DISK1=${ZFS_DISK1}1
ZFSSIDE_DISK2=${ZFS_DISK2}2
elif ( is_mpath_device $ZFS_DISK1 || is_loop_device $ZFS_DISK1 ); then
ZFSSIDE_DISK1=${ZFS_DISK1}p1
ZFSSIDE_DISK2=${ZFS_DISK2}p2
else
log_fail "$ZFS_DISK1 not supported for partitioning."
fi
@ -71,7 +69,6 @@ case "${#disk_array[*]}" in
ZFS_DISK1=${disk_array[0]}
ZFSSIDE_DISK1=${ZFS_DISK1}s0
ZFS_DISK2=${disk_array[0]}
ZFSSIDE_DISK2=${ZFS_DISK2}s1
fi
;;
*)
@ -96,14 +93,6 @@ case "${#disk_array[*]}" in
log_fail "$ZFS_DISK1 not supported for partitioning."
fi
ZFS_DISK2=${disk_array[1]}
if ( is_mpath_device $ZFS_DISK2 ) && [[ -z $(echo $ZFS_DISK2 | awk 'substr($1,18,1)\
~ /^[[:digit:]]+$/') ]] || ( is_real_device $ZFS_DISK2 ); then
ZFSSIDE_DISK2=${ZFS_DISK2}1
elif ( is_mpath_device $ZFS_DISK2 || is_loop_device $ZFS_DISK2 ); then
ZFSSIDE_DISK2=${ZFS_DISK2}p1
else
log_fail "$ZFS_DISK2 not supported for partitioning."
fi
else
export DEV_DSKDIR="/dev"
PRIMARY_SLICE=2
@ -111,15 +100,14 @@ case "${#disk_array[*]}" in
ZFS_DISK1=${disk_array[0]}
ZFSSIDE_DISK1=${ZFS_DISK1}s0
ZFS_DISK2=${disk_array[1]}
ZFSSIDE_DISK2=${ZFS_DISK2}s0
fi
;;
esac
export DISK_COUNT ZFS_DISK1 ZFSSIDE_DISK1 ZFS_DISK2 ZFSSIDE_DISK2
export DISK_COUNT ZFS_DISK1 ZFSSIDE_DISK1 ZFS_DISK2
export FS_SIZE="$((($MINVDEVSIZE / (1024 * 1024)) * 16))m"
export FILE_SIZE="$(($MINVDEVSIZE / 2))"
export FS_SIZE="$((($MINVDEVSIZE / (1024 * 1024)) * 32))m"
export FILE_SIZE="$((MINVDEVSIZE))"
export SLICE_SIZE="$((($MINVDEVSIZE / (1024 * 1024)) * 2))m"
export MAX_NUM=5
export GROUP_NUM=3
@ -129,6 +117,12 @@ export DEVICE_FILE=disk
export DEVICE_ARCHIVE=archive_import-test
export MYTESTFILE=$STF_SUITE/include/libtest.shlib
export CPATH=/var/tmp/cachefile.$$
export CPATHBKP=/var/tmp/cachefile.$$.bkp
export CPATHBKP2=/var/tmp/cachefile.$$.bkp2
export MD5FILE=/var/tmp/md5sums.$$
export MD5FILE2=/var/tmp/md5sums.$$.2
typeset -i num=0
while (( num < $GROUP_NUM )); do
DEVICE_FILES="$DEVICE_FILES ${DEVICE_DIR}/${DEVICE_FILE}$num"

View File

@ -0,0 +1,376 @@
#!/usr/bin/ksh
#
# This file and its contents are supplied under the terms of the
# Common Development and Distribution License ("CDDL"), version 1.0.
# You may only use this file in accordance with the terms of version
# 1.0 of the CDDL.
#
# A full copy of the text of the CDDL should have accompanied this
# source. A copy of the CDDL is also available via the Internet at
# http://www.illumos.org/license/CDDL.
#
#
# Copyright (c) 2016 by Delphix. All rights reserved.
#
. $STF_SUITE/include/libtest.shlib
. $STF_SUITE/tests/functional/cli_root/zpool_import/zpool_import.cfg
#
# Prototype cleanup function for zpool_import tests.
#
function cleanup
{
destroy_pool $TESTPOOL1
log_must rm -f $CPATH $CPATHBKP $CPATHBKP2 $MD5FILE $MD5FILE2
log_must rm -rf $DEVICE_DIR/*
typeset i=0
while (( i < $MAX_NUM )); do
log_must mkfile $FILE_SIZE ${DEVICE_DIR}/${DEVICE_FILE}$i
((i += 1))
done
is_linux && set_tunable32 "zfs_txg_history" 0
}
#
# Write a bit of data and sync several times.
# This function is intended to be used by zpool rewind tests.
#
function sync_some_data_a_few_times
{
typeset pool=$1
typeset -i a_few_times=${2:-10}
typeset file="/$pool/tmpfile"
for i in {0..$a_few_times}; do
dd if=/dev/urandom of=${file}_$i bs=128k count=10
sync_pool "$pool"
done
return 0
}
#
# Just write a moderate amount of data to the pool.
#
function write_some_data
{
typeset pool=$1
typeset files10mb=${2:-10}
typeset ds="$pool/fillerds"
zfs create $ds
[[ $? -ne 0 ]] && return 1
# Create $files10mb files of 10 MB each (100 MB by default)
typeset file="/$ds/fillerfile"
for i in {1..$files10mb}; do
dd if=/dev/urandom of=$file.$i bs=128k count=80
[[ $? -ne 0 ]] && return 1
done
return 0
}
#
# Create/overwrite a few datasets with files.
# Apply md5sum on all the files and store checksums in a file.
#
# newdata: create new datasets when true; overwrite existing files when false.
# md5file: file where to store md5sums
# datasetname: base name for datasets
#
function _generate_data_common
{
typeset pool=$1
typeset newdata=$2
typeset md5file=$3
typeset datasetname=$4
typeset -i datasets=3
typeset -i files=5
typeset -i blocks=10
[[ -n $md5file ]] && rm -f $md5file
for i in {1..$datasets}; do
( $newdata ) && log_must zfs create "$pool/$datasetname$i"
for j in {1..$files}; do
typeset file="/$pool/$datasetname$i/file$j"
dd if=/dev/urandom of=$file bs=128k count=$blocks > /dev/null
[[ -n $md5file ]] && md5sum $file >> $md5file
done
( $newdata ) && sync_pool "$pool"
done
return 0
}
function generate_data
{
typeset pool=$1
typeset md5file="$2"
typeset datasetname=${3:-ds}
_generate_data_common $pool true "$md5file" $datasetname
}
function overwrite_data
{
typeset pool=$1
typeset md5file="$2"
typeset datasetname=${3:-ds}
_generate_data_common $pool false "$md5file" $datasetname
}
#
# Verify md5sums of every file in md5sum file $1.
#
function verify_data_md5sums
{
typeset md5file=$1
if [[ ! -f $md5file ]]; then
log_note "md5 sums file '$md5file' doesn't exist"
return 1
fi
md5sum -c --quiet $md5file
return $?
}
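Taken together, these helpers follow a simple pattern in the rewind tests; a usage sketch (variables as defined in zpool_import.cfg and the tests above):

	log_must generate_data $TESTPOOL1 $MD5FILE       # write files, record md5sums
	txg=$(get_last_txg_synced $TESTPOOL1)
	log_must overwrite_data $TESTPOOL1 ""            # dirty later txgs
	log_must zpool export $TESTPOOL1
	log_must zpool import -d $DEVICE_DIR -T $txg $TESTPOOL1
	log_must verify_data_md5sums $MD5FILE            # rewound data must match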
#
# Set the size of the devices in DEVICE_DIR to $1.
#
function increase_device_sizes
{
typeset newfilesize=$1
typeset -i i=0
while (( i < $MAX_NUM )); do
log_must mkfile $newfilesize ${DEVICE_DIR}/${DEVICE_FILE}$i
((i += 1))
done
}
#
# Translate vdev names returned by zpool status into more generic names.
#
# eg: mirror-2 --> mirror
#
function _translate_vdev
{
typeset vdev=$1
typeset keywords="mirror replacing raidz1 raidz2 raidz3 indirect"
for word in $keywords; do
echo $vdev | egrep "^${word}-[0-9]+\$" > /dev/null
if [[ $? -eq 0 ]]; then
vdev=$word
break
fi
done
[[ $vdev == "logs" ]] && echo "log" && return 0
[[ $vdev == "raidz1" ]] && echo "raidz" && return 0
echo $vdev
return 0
}
#
# Check that pool configuration returned by zpool status matches expected
# configuration. The format of the check string is the same as the vdev
# arguments used to create a pool.
# Add -q for quiet mode.
#
# eg: check_pool_config pool1 "mirror c0t0d0s0 c0t1d0s0 log c1t1d0s0"
#
function check_pool_config
{
typeset logfailure=true
if [[ $1 == '-q' ]]; then
logfailure=false
shift
fi
typeset poolname=$1
typeset expected=$2
typeset status
status=$(zpool status $poolname 2>&1)
if [[ $? -ne 0 ]]; then
if ( $logfailure ); then
log_note "zpool status $poolname failed: $status"
fi
return 1
fi
typeset actual=""
typeset began=false
printf "$status\n" | while read line; do
typeset vdev=$(echo "$line" | awk '{printf $1}')
if ( ! $began ) && [[ $vdev == NAME ]]; then
began=true
continue
fi
( $began ) && [[ -z $vdev ]] && break;
if ( $began ); then
[[ -z $actual ]] && actual="$vdev" && continue
vdev=$(_translate_vdev $vdev)
actual="$actual $vdev"
fi
done
expected="$poolname $expected"
if [[ "$actual" != "$expected" ]]; then
if ( $logfailure ); then
log_note "expected pool vdevs:"
log_note "> '$expected'"
log_note "actual pool vdevs:"
log_note "> '$actual'"
fi
return 1
fi
return 0
}
#
# Check that pool configuration returned by zpool status matches expected
# configuration within a given timeout in seconds. See check_pool_config().
#
# eg: wait_for_pool_config pool1 "mirror c0t0d0s0 c0t1d0s0" 60
#
function wait_for_pool_config
{
typeset poolname=$1
typeset expectedconfig="$2"
typeset -i timeout=${3:-60}
timeout=$(( $timeout + $(date +%s) ))
while (( $(date +%s) < $timeout )); do
check_pool_config -q $poolname "$expectedconfig"
[[ $? -eq 0 ]] && return 0
sleep 3
done
check_pool_config $poolname "$expectedconfig"
return $?
}
#
# Check that pool status is ONLINE
#
function check_pool_healthy
{
typeset pool=$1
typeset status
status=$(zpool status $pool 2>&1)
if [[ $? -ne 0 ]]; then
log_note "zpool status $pool failed: $status"
return 1
fi
status=$(echo "$status" | grep "$pool" | grep -v "pool:" | \
awk '{print $2}')
if [[ $status != "ONLINE" ]]; then
log_note "Invalid zpool status for '$pool': '$status'" \
"!= 'ONLINE'"
return 1
fi
return 0
}
#
# Return 0 if a device is currently being replaced in the pool.
#
function pool_is_replacing
{
typeset pool=$1
zpool status $pool | grep "replacing" | grep "ONLINE" > /dev/null
return $?
}
function set_vdev_validate_skip
{
set_tunable32 "vdev_validate_skip" "$1"
}
function get_zfs_txg_timeout
{
get_tunable "zfs_txg_timeout"
}
function set_zfs_txg_timeout
{
set_tunable32 "zfs_txg_timeout" "$1"
}
function set_spa_load_verify_metadata
{
set_tunable32 "spa_load_verify_metadata" "$1"
}
function set_spa_load_verify_data
{
set_tunable32 "spa_load_verify_data" "$1"
}
function set_zfs_max_missing_tvds
{
set_tunable32 "zfs_max_missing_tvds" "$1"
}
#
# Find the last txg that was synced in an active pool (via the txgs kstat on
# Linux, or mdb on illumos).
#
function get_last_txg_synced
{
typeset pool=$1
if is_linux; then
txg=$(tail "/proc/spl/kstat/zfs/$pool/txgs" |
awk '$3=="C" {print $1}' | tail -1)
[[ "$txg" ]] || txg=0
echo $txg
return 0
fi
typeset spas
spas=$(mdb -k -e "::spa")
[[ $? -ne 0 ]] && return 1
typeset spa=""
print "$spas\n" | while read line; do
typeset poolname=$(echo "$line" | awk '{print $3}')
typeset addr=$(echo "$line" | awk '{print $1}')
if [[ $poolname == $pool ]]; then
spa=$addr
break
fi
done
if [[ -z $spa ]]; then
log_fail "Couldn't find pool '$pool'"
return 1
fi
typeset mdbcmd="$spa::print spa_t spa_ubsync.ub_txg | ::eval '.=E'"
typeset -i txg
txg=$(mdb -k -e "$mdbcmd")
[[ $? -ne 0 ]] && return 1
echo $txg
return 0
}
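On Linux the same value can be read straight from the txgs kstat, which is only populated while zfs_txg_history is non-zero (the tests enable it for exactly this reason); a sketch with a hypothetical pool name:

	awk '$3 == "C" {txg = $1} END {print txg}' /proc/spl/kstat/zfs/tank/txgs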