Illumos #4101 finer-grained control of metaslab_debug

Today the metaslab_debug logic performs two tasks:

- load all metaslabs on import/open
- don't unload metaslabs at the end of spa_sync

This change provides a separate tunable for each of these behaviors, so they can be controlled independently.
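With the split in place, each behavior can be toggled through the usual Linux module-parameter interfaces. A minimal sketch of both methods follows; the parameter names come from this commit, while the `/sys/module/zfs/parameters` path and `modprobe.d` convention are the standard Linux locations and are assumed here:

```shell
# Runtime: load all metaslabs on the next pool import/open
echo 1 > /sys/module/zfs/parameters/metaslab_debug_load

# Runtime: stop unloading metaslabs at the end of spa_sync
echo 1 > /sys/module/zfs/parameters/metaslab_debug_unload

# Persistent: set the option at module load time via modprobe configuration
echo "options zfs metaslab_debug_unload=1" >> /etc/modprobe.d/zfs.conf
```

Setting only metaslab_debug_unload reproduces the performance benefit of the old metaslab_debug option without the import-time delay discussed in the notes below.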

References:
  https://illumos.org/issues/4101
  https://github.com/illumos/illumos-gate/commit/0713e23

Notes:

1) This is a small piece of the metaslab improvement patch from
Illumos. It was worth bringing over before the rest, since it is
low risk and can be useful on fragmented pools (e.g. Lustre
MDTs). metaslab_debug_unload gives the performance benefit of
the old metaslab_debug option without causing an unwanted delay
during pool import.

Ported-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #2227
Author: George Wilson, 2014-03-31 17:22:55 -07:00
Committed by: Brian Behlendorf
Commit: aa7d06a98a (parent: cc79a5c263)
2 changed files with 27 additions and 8 deletions

@@ -137,10 +137,21 @@ Default value: \fB8,388,608\fR.
 .sp
 .ne 2
 .na
-\fBmetaslab_debug\fR (int)
+\fBmetaslab_debug_load\fR (int)
 .ad
 .RS 12n
-Keep space maps in core to verify frees
+Load all metaslabs during pool import.
+.sp
+Use \fB1\fR for yes and \fB0\fR for no (default).
+.RE
+.sp
+.ne 2
+.na
+\fBmetaslab_debug_unload\fR (int)
+.ad
+.RS 12n
+Prevent metaslabs from being unloaded.
 .sp
 Use \fB1\fR for yes and \fB0\fR for no (default).
 .RE


@@ -81,9 +81,14 @@ int zfs_mg_alloc_failures = 0;
 int zfs_mg_noalloc_threshold = 0;

 /*
- * Metaslab debugging: when set, keeps all space maps in core to verify frees.
+ * When set will load all metaslabs when pool is first opened.
  */
-int metaslab_debug = 0;
+int metaslab_debug_load = 0;
+
+/*
+ * When set will prevent metaslabs from being unloaded.
+ */
+int metaslab_debug_unload = 0;

 /*
  * Minimum size which forces the dynamic allocator to change
@@ -846,7 +851,7 @@ metaslab_init(metaslab_group_t *mg, space_map_obj_t *smo,
 	metaslab_group_add(mg, msp);

-	if (metaslab_debug && smo->smo_object != 0) {
+	if (metaslab_debug_load && smo->smo_object != 0) {
 		mutex_enter(&msp->ms_lock);
 		VERIFY(space_map_load(msp->ms_map, mg->mg_class->mc_ops,
 		    SM_FREE, smo, spa_meta_objset(vd->vdev_spa)) == 0);
@@ -1407,7 +1412,7 @@ metaslab_sync_done(metaslab_t *msp, uint64_t txg)
 			if (msp->ms_allocmap[(txg + t) & TXG_MASK]->sm_space)
 				evictable = 0;

-		if (evictable && !metaslab_debug)
+		if (evictable && !metaslab_debug_unload)
 			space_map_unload(sm);
 	}
@@ -2109,6 +2114,9 @@ metaslab_check_free(spa_t *spa, const blkptr_t *bp)
 }

 #if defined(_KERNEL) && defined(HAVE_SPL)
-module_param(metaslab_debug, int, 0644);
-MODULE_PARM_DESC(metaslab_debug, "keep space maps in core to verify frees");
+module_param(metaslab_debug_load, int, 0644);
+MODULE_PARM_DESC(metaslab_debug_load, "load all metaslabs during pool import");
+module_param(metaslab_debug_unload, int, 0644);
+MODULE_PARM_DESC(metaslab_debug_unload, "prevent metaslabs from being unloaded");
 #endif /* _KERNEL && HAVE_SPL */